modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars)
---|---|---|---|---|---|---
Declan/Breitbart_model_v7 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1569522583648739328/5WbH8oes_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1560064008631328768/4UdhUjed_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">DocStranding & yata</div>
<div style="text-align: center; font-size: 14px;">@docstranding-yatanew</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline. *(pipeline diagram omitted)*
To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from DocStranding & yata.
| Data | DocStranding | yata |
| --- | --- | --- |
| Tweets downloaded | 1535 | 3144 |
| Retweets | 1272 | 511 |
| Short tweets | 36 | 663 |
| Tweets kept | 227 | 1970 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3d74dpwv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @docstranding-yatanew's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zfs7pm29) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zfs7pm29/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/docstranding-yatanew')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[Follow @borisdayma on Twitter](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[borisdayma/huggingtweets on GitHub](https://github.com/borisdayma/huggingtweets)
|
Declan/Breitbart_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gogzy/t5-base-finetuned_renre_item1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gogzy/t5-base-finetuned_renre_item1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.5613
- Validation Loss: 6.0177
- Train Rouge1: 9.4862
- Train Rouge2: 6.3745
- Train Rougel: 7.9051
- Train Rougelsum: 9.4862
- Train Gen Len: 19.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
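No usage example is documented yet; the following is a hedged sketch of loading the model for inference from its TensorFlow weights. The `summarize:` task prefix, the placeholder input, and the generation length are assumptions, not part of the original card.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gogzy/t5-base-finetuned_renre_item1")
model = TFAutoModelForSeq2SeqLM.from_pretrained("gogzy/t5-base-finetuned_renre_item1")

# Hypothetical prefix and input; the actual task format of this fine-tune is unknown.
inputs = tokenizer("summarize: <your input text>", return_tensors="tf")
outputs = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```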
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
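For illustration, a hedged sketch of instantiating this optimizer configuration with the `transformers` Keras optimizer class (not the exact training script):
```python
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```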
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 13.9387 | 10.3276 | 7.1429 | 1.6 | 4.7619 | 5.5556 | 19.0 | 0 |
| 12.7511 | 9.4693 | 8.7302 | 4.8 | 7.1429 | 7.9365 | 19.0 | 1 |
| 11.3785 | 8.4321 | 8.7302 | 4.8 | 7.1429 | 7.9365 | 19.0 | 2 |
| 9.9856 | 7.2054 | 8.7302 | 4.8 | 7.1429 | 7.9365 | 19.0 | 3 |
| 8.5613 | 6.0177 | 9.4862 | 6.3745 | 7.9051 | 9.4862 | 19.0 | 4 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Declan/CNN_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | import gradio as gr  # gradio is imported, but no interface is defined in this snippet
import numpy as np
from joblib import load


def promediado(L_izda, DV_izda, L_dcha, DV_dcha):
    # Average the left- and right-side measurements.
    L = (L_izda + L_dcha) / 2
    DV = (DV_izda + DV_dcha) / 2
    return L, DV


def clasificador(L, DV):
    # Load the saved scikit-learn model and scaler, scale the averaged
    # measurements and predict the sex ('Hembra' = female, 'Macho' = male).
    file_model = 'E:\\trabajo_pajaros\\marcajes\\model.pkl'
    file_scaler = 'E:\\trabajo_pajaros\\marcajes\\scaler.pkl'
    model = load(file_model)
    scaler = load(file_scaler)
    data = np.array([L, DV]).reshape(1, -1)
    data_scaled = scaler.transform(data)
    pred = model.predict(data_scaled)
    sexo = ['Hembra', 'Macho']
    return sexo[int(pred[0])]


def clasificador_completo(L_izda, DV_izda, L_dcha, DV_dcha):
    # Full pipeline: average both sides, then classify.
    L, DV = promediado(L_izda, DV_izda, L_dcha, DV_dcha)
    sexo = clasificador(L, DV)
    return sexo |
Declan/CNN_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ## Setup
To use this model, please clone the following GitHub repository: https://github.com/Janst1000/buntesgelaber
## How this model was trained
This model was trained on https://github.com/bundestag/gesetze. I wrote a simple script that takes all of the text in the repository and puts it into a single text file (a sketch of such a script is shown below). Then I trained the model following the Hugging Face tutorial at https://huggingface.co/blog/how-to-train
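A minimal sketch of that kind of concatenation script, assuming the repository is checked out locally as `gesetze/` and that the law texts are stored as Markdown files (both are assumptions; this is not the author's original script):
```python
from pathlib import Path

# Concatenate every Markdown file in the checked-out repository
# into a single training corpus file.
with open("corpus.txt", "w", encoding="utf-8") as out:
    for path in sorted(Path("gesetze").rglob("*.md")):
        out.write(path.read_text(encoding="utf-8"))
        out.write("\n")
``` |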
Declan/CNN_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
- ul2
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# UL2-mini-nl8 for Finnish
Pretrained T5 model on Finnish language using a UL2 (Mixture-of-Denoisers) objective. T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-mini-nl8](https://huggingface.co/google/t5-efficient-mini-nl8) architecture's layer depth which means both the encoder and the decoder have 8 transformer layers compared to the original T5 "mini" model's architecture of 4 transformer layers.
In total, this model has 72 million parameters.
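As a quick way to inspect the architecture described above, a hedged sketch using the Hugging Face `T5Config` (attribute names follow the `transformers` library, not the model card):
```python
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
print(config.num_layers, config.num_decoder_layers)  # encoder / decoder depth

model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```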
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with a specific pre-training denoising task. During pretraining, a paradigm token is inserted into the input (`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand. Then, during fine-tuning, the same token should be inserted into the input to get the best performance for different downstream fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, without any supervised training. Therefore, unlike Google's original T5 model, this model has to be fine-tuned before it is usable on a downstream task, like text classification. **Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, you can most likely get better results if you insert a prefix token of `[NLU]`, `[NLG]`, or `[S2S]` into your input texts. For general language understanding fine-tuning tasks, you could use the `[NLU]` token. For GPT-style causal language generation, you could use the `[S2S]` token. The `[NLG]` token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it could perhaps be used for language generation fine-tuning as well.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish", from_pt=True)
```
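Building on the mode-switching note above, a hypothetical sketch of prepending the `[NLU]` paradigm token to an input before generation (the example sentence is arbitrary, and outputs are not meaningful before fine-tuning):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")

# Prepend the UL2 paradigm token matching the intended downstream use.
text = "[NLU] Suomi on maa Pohjois-Euroopassa."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```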
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad-quality and non-Finnish examples. In addition, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model trained only on very clean Finnish text; this score indicates how "clean" the Finnish in a given text is. Lastly, all datasets were concatenated and the 90th-percentile perplexity score was used as the filtering threshold, dropping the worst-quality 10% of texts. Together these cleaned datasets amounted to around 76GB of text.
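For illustration only (the actual cleaning code lives in the dataset repository), a hedged sketch of this kind of perplexity filtering with the `kenlm` Python bindings, assuming a pretrained Finnish KenLM model file and an in-memory list of texts (both are placeholders):
```python
import kenlm  # Python bindings for https://github.com/kpu/kenlm
import numpy as np

texts = ["Tämä on siisti suomenkielinen esimerkkilause.", "asdf qwer zxcv 123"]

lm = kenlm.Model("finnish_kenlm.binary")  # hypothetical path to the trained LM
scores = np.array([lm.perplexity(t) for t in texts])

# Keep the 90% of texts with the lowest perplexity (i.e. the "cleanest" ones).
threshold = np.percentile(scores, 90)
kept = [t for t, s in zip(texts, scores) if s <= threshold]
```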
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (66B tokens in total). The optimizer used was AdaFactor with a constant learning rate of 1e-2 for the 10K warmup steps, followed by an inverse square root decay of the learning rate.
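The described schedule (a constant 1e-2 during the 10K warmup steps, then inverse square-root decay) is consistent with the standard T5 `rsqrt` schedule, sketched here for illustration:
```python
def learning_rate(step: int, warmup_steps: int = 10_000) -> float:
    # 1/sqrt(max(step, warmup_steps)): equals 1e-2 for the first 10K steps,
    # then decays as 1/sqrt(step).
    return max(step, warmup_steps) ** -0.5
```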
Training code was from Google's JAX/Flax-based [t5x framework](https://github.com/google-research/t5x), and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2. Used UL2 objective code is available in this repository in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration otherwise matched the UL2 paper, but for the denoiser mixing rates, 20% was used for S-denoising (as suggested in chapter 4.5 of the paper) and the rest was divided equally between R-denoising and X-denoising (i.e. 40% each).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. Also, for UL2 models a prefix token of `[NLU]` has been added to each input text.
When fine-tuned on those datasets, this model (the second row of the table) achieves the following accuracy results compared to our other UL2 models and their parameter counts:
| Model | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/ul2-tiny-nl6-finnish | 31 million |92.88 |69.40 |
|Finnish-NLP/ul2-mini-nl8-finnish | 72 million |93.83 |70.10 |
|Finnish-NLP/ul2-small-nl16-finnish | 184 million |94.25 |74.63 |
|Finnish-NLP/ul2-small-nl24-finnish | 260 million |94.03 |73.87 |
|Finnish-NLP/ul2-base-nl36-finnish | 814 million |94.35 |75.47 |
Results of fine-tuning our T5 models (with the original T5 pretraining task) on the same datasets are following:
| Model | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets, we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| Model | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
Declan/CNN_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- ar
- en
tags:
- translation
license: apache-2.0
datasets:
- twitter
metrics:
- bleu
- sacrebleu
--- |
Declan/CNN_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- autotrain
- text-regression
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- orange6996/autotrain-data-testtextexists
co2_eq_emissions:
emissions: 0.3550338626114656
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1966366048
- CO2 Emissions (in grams): 0.3550
## Validation Metrics
- Loss: 4911.982
- MSE: 4911.981
- MAE: 68.106
- R2: -16.962
- RMSE: 70.086
- Explained Variance: -0.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/orange6996/autotrain-testtextexists-1966366048
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("orange6996/autotrain-testtextexists-1966366048", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("orange6996/autotrain-testtextexists-1966366048", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
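# Hypothetical addition (not in the original card): AutoTrain single-column
# regression returns a single logit, which is the predicted score.
score = outputs.logits.squeeze().item()
print(score)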
``` |
Declan/CNN_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- text-regression
pipeline_tag: regression
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- orange6996/autotrain-data-testtextexists
co2_eq_emissions:
emissions: 0.8382331508369333
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1966366051
- CO2 Emissions (in grams): 0.8382
## Validation Metrics
- Loss: 4927.679
- MSE: 4927.679
- MAE: 68.224
- R2: -17.019
- RMSE: 70.197
- Explained Variance: 0.001
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/orange6996/autotrain-testtextexists-1966366051
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("orange6996/autotrain-testtextexists-1966366051", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("orange6996/autotrain-testtextexists-1966366051", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
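# Hypothetical addition (not in the original card): AutoTrain single-column
# regression returns a single logit, which is the predicted score.
score = outputs.logits.squeeze().item()
print(score)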
``` |
Declan/CNN_model_v7 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# KitBot DialoGPT Model |
Declan/ChicagoTribune_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- spacy
- token-classification
language:
- tr
license: cc-by-sa-4.0
model-index:
- name: tr_core_news_md
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8890235772
- name: NER Recall
type: recall
value: 0.8897246148
- name: NER F Score
type: f_score
value: 0.8893739579
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9141711565
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9052411777
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.8892973515
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.8171693155
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.7275183906
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.6355130835
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.8349007315
---
Turkish medium-sized pipeline for TrSpaCy. Components: tok2vec, tagger, morphologizer, lemmatizer, parser, ner.
| Feature | Description |
| --- | --- |
| **Name** | `tr_core_news_md` |
| **Version** | `3.4.2` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser`, `ner` |
| **Vectors** | -1 keys, 50000 unique vectors (300 dimensions) |
| **Sources** | [UD Turkish BOUN](https://github.com/UniversalDependencies/UD_Turkish-BOUN) (Türk, Utku; Atmaca, Furkan; Özateş, Şaziye Betül; Berk, Gözde; Bedir, Seyyit Talha; Köksal, Abdullatif; Öztürk Başaran, Balkız; Güngör, Tunga; Özgür, Arzucan)<br />[Turkish Wiki NER dataset](https://github.com/turkish-nlp-suite/NER-datasets/tree/main/Turkish-Wiki-NER-Dataset) (Duygu Altinok, Co-one Istanbul)<br />[PANX/WikiANN](http://hlt.sztaki.hu/resources/hunnerwiki.html) (Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, Heng Ji)<br />[Medium-sized Turkish Floret word vectors (MC4 corpus)](https://huggingface.co/turkish-nlp-suite/tr_vectors_web_md) (Duygu Altinok) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [Duygu Altinok](https://github.com/turkish-nlp-suite/turkish-spacy-models) |
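A minimal usage sketch, assuming the `tr_core_news_md` package has been installed so that spaCy can load it by name (the example sentence is arbitrary):
```python
import spacy

nlp = spacy.load("tr_core_news_md")
doc = nlp("Mustafa Kemal Atatürk 1881 yılında Selanik'te doğdu.")

# Named entities and per-token annotations from the pipeline components.
for ent in doc.ents:
    print(ent.text, ent.label_)
for token in doc:
    print(token.text, token.pos_, token.lemma_, token.dep_)
```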
### Label Scheme
<details>
<summary>View label scheme (1572 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADP`, `ADV`, `ANum`, `ANum_Adj`, `ANum_Ness`, `ANum_Noun`, `ANum_With`, `ANum_Zero`, `Abr`, `Abr_With`, `Adj`, `Adj_Ness`, `Adj_With`, `Adj_Without`, `Adj_Zero`, `Adv`, `Adverb`, `Adverb_Adverb`, `Adverb_Noun`, `Adverb_Zero`, `Conj`, `Conj_Conj`, `DET`, `Demons`, `Demons_Zero`, `Det`, `Det_Zero`, `Dup`, `Interj`, `NAdj`, `NAdj_Aux`, `NAdj_Ness`, `NAdj_Noun`, `NAdj_Rel`, `NAdj_Verb`, `NAdj_With`, `NAdj_Without`, `NAdj_Zero`, `NNum`, `NNum_Rel`, `NNum_Zero`, `NOUN`, `Neg`, `Ness`, `Noun`, `Noun_Ness`, `Noun_Noun`, `Noun_Rel`, `Noun_Since`, `Noun_Verb`, `Noun_With`, `Noun_With_Ness`, `Noun_With_Verb`, `Noun_With_Zero`, `Noun_Without`, `Noun_Zero`, `PCAbl`, `PCAbl_Rel`, `PCAcc`, `PCDat`, `PCDat_Zero`, `PCGen`, `PCIns`, `PCIns_Zero`, `PCNom`, `PCNom_Adj`, `PCNom_Noun`, `PCNom_Zero`, `PRON`, `PUNCT`, `Pers`, `Pers_Ness`, `Pers_Pers`, `Pers_Rel`, `Pers_Zero`, `Postp`, `Prop`, `Prop_Conj`, `Prop_Rel`, `Prop_Since`, `Prop_With`, `Prop_Zero`, `Punc`, `Punc_Noun_Ness`, `Punc_Noun_Rel`, `Quant`, `Quant_Zero`, `Ques`, `Ques_Zero`, `Reflex`, `Reflex_Zero`, `Rel`, `SYM`, `Since`, `Since_Since`, `Verb`, `Verb_Conj`, `Verb_Ness`, `Verb_Noun`, `Verb_Verb`, `Verb_With`, `Verb_Zero`, `With`, `Without`, `Without_Zero`, `Zero` |
| **`morphologizer`** | `NumType=Card\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1,3\|Person[psor]=3\|Tense=Pres`, `POS=PUNCT`, `POS=ADV`, `POS=NOUN`, `Case=Nom\|Number=Sing\|POS=ADJ\|Person=3`, `POS=DET`, `Case=Loc\|Number=Sing\|POS=VERB\|Person=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3`, `POS=ADJ`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person=3`, `POS=PRON`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Acc\|Number=Plur\|POS=NOUN\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Case=Nom\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Dat\|Number=Sing\|POS=PROPN\|Person=3`, `POS=VERB\|Polarity=Pos`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Abl\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `POS=INTJ`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Ins\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Loc\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Acc\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3`, `POS=CCONJ`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Conv\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Gen\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=ADP\|Person=3`, `Case=Dat\|Number=Plur\|POS=NOUN\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Nom\|POS=VERB\|Polarity=Pos`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Acc\|Number=Sing\|POS=PROPN\|Person=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `POS=ADP`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3`, 
`Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=3`, `Aspect=Perf\|Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Prog\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person=3`, `Mood=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Plur\|POS=ADJ\|Person=3`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, 
`Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person=1`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Ins\|Number=Sing\|POS=NOUN\|Person=3`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Sing\|POS=ADJ\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Prog\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Mood=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2`, `Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Loc\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Hab\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv\|Voice=Pass`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Equ\|Number=Sing\|POS=PRON\|Person=1`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, 
`Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Case=Loc\|POS=VERB\|Polarity=Pos\|Voice=Pass`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Mood=Des,Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Hab\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Ins\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Ins\|POS=VERB\|Polarity=Neg`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Nom\|POS=VERB\|Polarity=Pos\|Voice=Pass`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=ADJ\|Person=2,3\|Polarity=Pos`, `POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Case=Nom\|POS=NOUN\|Polarity=Pos`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Loc\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Plur\|POS=NOUN\|Person=3`, 
`Case=Loc\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Ins\|Number=Sing\|POS=VERB\|Person=1`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `POS=VERB\|Polarity=Pos\|Voice=Pass`, `Aspect=Imp\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=1`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Plur\|POS=PROPN\|Person=3`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Mood=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv`, `POS=AUX`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=NUM\|Person=3`, `POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Gen\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Mood=Pot\|POS=VERB\|Polarity=Pos`, `Case=Abl\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Loc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=1`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, `POS=VERB`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3`, `Aspect=Perf\|Case=Loc\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Perf\|Case=Gen\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=3`, 
`Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Rfl`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Equ\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=ADJ\|Person=1`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Cnd\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Nom\|NumType=Ord\|Number=Sing\|POS=NUM\|Person=3`, `Case=Nom\|Number=Sing\|POS=AUX\|Person=3`, `Case=Nom\|Number=Sing\|POS=ADV\|Person=3`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Hab\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Dat\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos`, `Case=Nom\|Number=Plur\|POS=AUX\|Person=3`, `Case=Ins\|POS=VERB\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Plur,Sing\|POS=NOUN\|Person=2,3`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=1,3\|Tense=Pres`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Conv`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, 
`Case=Nom\|POS=ADV\|Polarity=Pos`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person=1`, `POS=PROPN`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Equ\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Case=Loc\|POS=VERB\|Polarity=Pos`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Perf\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Imp\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1,3\|Person[psor]=3\|Tense=Pres`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Hab\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=1\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Loc\|Number=Sing\|POS=ADJ\|Person=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv\|Voice=Pass`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos`, `Aspect=Perf\|Number[psor]=Sing\|POS=AUX\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, 
`Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=2\|Person[psor]=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=2\|Person[psor]=3`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=2`, `POS=VERB\|Polarity=Neg`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos\|Tense=Pres`, `Number=Sing\|POS=VERB\|Person=3`, `Case=Equ\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=ADJ\|Person=3`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Abl\|Number=Sing\|POS=VERB\|Person=3`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Mood=Imp\|POS=VERB\|VerbForm=Conv`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=VERB\|Person=3`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Dat,Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Ins\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Gen\|Number=Sing\|POS=AUX\|Person=3`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Aspect=Perf\|Case=Abl\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2`, `Case=Loc\|Mood=Imp\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=2,3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=2`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Past`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, 
`Case=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Prog\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Prog\|Case=Nom\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=2`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Voice=Pass`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Prog\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Ins\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `POS=NUM`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur\|POS=PRON\|Person=1,3\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Cau`, `Case=Loc\|Number=Sing\|POS=NOUN\|Person=1`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres\|VerbForm=Conv`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Mood=Ind\|POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, 
`Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=ADJ\|Person=3`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Case=Nom\|POS=VERB\|Polarity=Neg`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Abl\|POS=VERB\|Polarity=Pos`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `NumType=Ord\|POS=NUM`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=1\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1,3\|Person[psor]=3\|Tense=Past`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Loc,Nom\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `POS=SYM`, `Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Number=Plur\|POS=VERB\|Person=1`, `Case=Dat\|Number=Sing\|POS=ADP\|Person=3`, `Aspect=Hab\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|POS=PRON\|Person=1,3\|Tense=Pres`, 
`Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Voice=Cau`, `Aspect=Prog\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Case=Nom\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=NOUN\|Person=1,3\|Tense=Past`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|POS=ADP\|Tense=Pres\|VerbForm=Conv`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Mood=Nec\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Nom\|Number=Sing\|POS=PROPN\|Person=3\|Polarity=Pos`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=3`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=1`, `Case=Gen\|Number=Plur\|POS=PROPN\|Person=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Mood=Nec\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Voice=Cau`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, 
`Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=PRON\|Person=2,3\|Polarity=Pos\|PronType=Dem`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Evident=Nfh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Tense=Past`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Case=Loc\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Abl\|POS=VERB\|Polarity=Pos\|Voice=Pass`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Equ\|Number=Sing\|POS=NUM\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Mood=Cnd\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Hab\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv`, 
`Case=Ins\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=ADJ\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Pot\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Dat\|Number=Plur\|POS=AUX\|Person=3`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos`, `Aspect=Perf\|Mood=Cnd\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Imp\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Equ\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Echo=Rdp\|POS=X`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Abl\|Number=Plur\|POS=PROPN\|Person=3`, `Aspect=Perf\|Case=Acc\|Mood=Ind\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Past`, `Aspect=Prog\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, 
`Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Imp\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Equ\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Ins\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Imp\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Aspect=Imp\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Ins\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person=3\|Person[psor]=3`, `Evident=Nfh\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Conv`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=PROPN\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Evident=Nfh\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|VerbForm=Conv`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Gen,Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Plur\|POS=ADP\|Person=3`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=1\|Person[psor]=1`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NUM\|Person=1\|Person[psor]=1`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=ADJ\|Person=1,3\|Tense=Past`, `Aspect=Hab\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Ins\|POS=VERB\|Polarity=Pos`, 
`Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=PROPN\|Person=3`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Prog\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `POS=NOUN\|Polarity=Pos`, `Aspect=Imp\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=PRON\|Person=3\|Tense=Pres`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=ADV\|Person=3\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Cnd\|Number=Sing\|POS=ADV\|Person=3\|Tense=Pres`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Imp,Perf\|Mood=Gen\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Case=Abl\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Conv`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=1\|Person[psor]=2`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Prog\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Loc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, 
`Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=PROPN\|Person=3\|Tense=Past`, `Aspect=Imp\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=NUM\|Person=3`, `Aspect=Prog\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Equ\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Prog\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Conv`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Abl\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Acc\|Mood=Pot\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Hab,Perf\|Mood=Cnd,Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Prog\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Abl\|Mood=Gen\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Nom\|POS=VERB\|Polarity=Neg\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Case=Loc\|Number=Plur\|POS=NOUN\|Person=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=2\|Person[psor]=1`, 
`Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab,Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|Polarity=Neg\|Tense=Past,Pres\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Imp\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Evident=Nfh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Imp\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Case=Nom\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=ADP\|Person=3`, `Aspect=Hab\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Prog\|Case=Nom\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=ADP\|Person=3\|Tense=Pres`, `Mood=Nec\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Rfl`, `Case=Acc\|Number=Sing\|POS=ADP\|Person=3`, `Case=Loc,Nom\|Number=Sing\|POS=PRON\|Person=3`, `Case=Loc\|Number=Sing\|POS=VERB\|Person=3`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Imp,Perf\|Mood=Gen\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut,Pres`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `POS=VERB\|Polarity=Pos\|Voice=Rfl`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Number=Sing\|POS=VERB\|Person=1`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, 
`Case=Gen\|Number=Sing\|POS=NUM\|Person=3`, `Case=Ins\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=1\|Person[psor]=3\|Tense=Past`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Number=Sing\|POS=ADP\|Person=3`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Plur\|POS=VERB\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Conv\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Cau`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pqp`, `Aspect=Perf\|Mood=Ind\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3\|Tense=Past`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|Voice=Pass`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Hab\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=ADJ\|Polarity=Pos`, `Aspect=Imp\|Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Acc\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1`, 
`Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Case=Dat\|Mood=Ind\|Number=Plur,Sing\|POS=ADJ\|Person=1,3\|Tense=Pres`, `POS=PROPN\|Polarity=Pos`, `Aspect=Imp\|Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Voice=Cau`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Sing\|POS=ADP\|Person=3`, `Aspect=Perf\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=1`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Dat,Nom\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3\|Tense=Pres`, `Evident=Nfh\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Prog\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Ins\|Number=Sing\|POS=VERB\|Person=2`, `Case=Nom\|Mood=Imp\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=2,3\|Person[psor]=3\|Polarity=Pos`, `Case=Loc\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Tense=Pres`, `Aspect=Imp\|Case=Dat\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=2,3\|Polarity=Pos\|Tense=Pres`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Cnd\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Fut`, 
`Case=Equ\|Number=Sing\|POS=ADJ\|Person=3`, `Evident=Nfh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Neg`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Acc\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Ins\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Loc\|Evident=Nfh\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Number=Plur\|POS=ADJ\|Person=3`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Abl\|Number=Plur\|POS=VERB\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2`, 
`Case=Nom\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Loc\|POS=NOUN\|Polarity=Pos`, `Mood=Des\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past`, `Aspect=Imp\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres`, `Aspect=Perf\|Case=Gen\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=3\|Tense=Past`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Rcp`, `POS=ADV\|Polarity=Pos`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Voice=Rcp`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Fut`, `Aspect=Hab\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|POS=PRON\|Person=3\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Case=Nom\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Reflex=Yes`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Rfl`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=ADP\|Person=3\|Tense=Past`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Pass`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Mood=Imp\|Number=Sing\|POS=ADJ\|Person=2\|Polarity=Pos`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Imp\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, 
`Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=2\|Person[psor]=1`, `Case=Acc\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Pres`, `Case=Abl\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Dat\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=2\|Person[psor]=2\|Reflex=Yes`, `Aspect=Prog\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Plur,Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Gen\|Number=Plur\|POS=ADJ\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Tense=Past`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|POS=PROPN\|Person=1,3\|Tense=Past`, `Abbr=Yes\|Case=Dat\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Past`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Plur\|POS=ADP\|Person=2`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Tense=Pres`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=1`, `Evident=Nfh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `POS=SCONJ`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Acc\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Perf\|Case=Gen\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=ADP\|Person=3`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Voice=Pass`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Acc\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `NumType=Dist\|POS=NUM`, 
`Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=2\|Person[psor]=2`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PART\|Person=3\|Person[psor]=3`, `POS=ADP\|Polarity=Pos`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Case=Loc\|Number=Plur\|POS=PROPN\|Person=3`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1,3`, `Case=Equ\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Evident=Nfh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Loc\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Perf\|Case=Loc\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=2\|Person[psor]=2\|Voice=Rfl`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Conv`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Past`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Loc,Nom\|Number=Plur,Sing\|POS=NOUN\|Person=2,3`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person=3\|Person[psor]=1`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=X\|Person=3`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Gen\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Perf\|Case=Abl\|Mood=Gen\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, 
`Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg`, `Aspect=Prog\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Imp\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=NUM\|Person=3`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Perf\|Case=Ins\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=NOUN\|Person=2,3\|Polarity=Pos`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=2`, `Aspect=Hab\|Evident=Nfh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Loc\|POS=VERB\|Polarity=Neg`, `Case=Loc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Loc\|Mood=Imp\|Number=Plur,Sing\|POS=ADJ\|Person=2,3\|Polarity=Pos`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Aspect=Prog\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1\|Tense=Past`, 
`Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Aspect=Perf\|Evident=Nfh\|Mood=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past,Pres`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Evident=Nfh\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Gen,Pot\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `NumType=Card\|POS=ADJ`, `Case=Gen,Nom\|Number=Plur,Sing\|POS=PRON\|Person=1,3`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Voice=Cau`, `Aspect=Imp\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Acc\|Mood=Gen\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=2\|Person[psor]=2`, `Case=Ins\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Hab\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Aspect=Imp\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres`, `Case=Ins\|POS=VERB\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Plur\|POS=AUX\|Person=2`, `Case=Nom\|Number=Plur\|POS=NUM\|Person=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=ADP\|Person=1\|Tense=Pres`, 
`Aspect=Hab\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Rfl`, `Case=Nom\|Number=Plur,Sing\|POS=ADJ\|Person=2,3`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Aspect=Imp\|Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Voice=Cau`, `Case=Equ\|Number=Plur\|POS=NUM\|Person=3`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Number=Sing\|POS=VERB\|Person=2`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Number=Sing\|POS=ADJ\|Person=1`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=ADP\|Person=1\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=X\|Person=3\|Person[psor]=1`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=1\|Person[psor]=3`, `Aspect=Perf\|Mood=Gen,Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Mood=Ind,Nec\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|Polarity=Pos\|Tense=Past`, `Mood=Nec\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Sing\|POS=ADV\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Abl\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=3`, `Aspect=Imp\|Mood=Pot\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, 
`Aspect=Hab,Perf\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Prog\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Prog\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Polite=Infm\|Tense=Past`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=2\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Loc\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person=2`, `Case=Equ\|Number=Plur\|POS=NOUN\|Person=3`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Rfl`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Cnd\|Number=Sing\|POS=PRON\|Person=1,3\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Rfl`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres`, `Aspect=Perf\|Case=Acc\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person=2`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADP\|Person=3\|Person[psor]=2`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Imp\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut`, `Case=Nom\|POS=VERB\|Polarity=Neg\|Voice=Pass`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Abl\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Hab\|Case=Nom\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, 
`Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Evident=Nfh\|Mood=Gen\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past,Pres`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=ADV\|Person=3\|Tense=Past`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=1\|Person[psor]=1`, `Aspect=Imp\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Nom\|Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Imp\|POS=VERB\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=ADJ\|Person=1,3\|Tense=Pres`, `Aspect=Imp\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=2`, `Case=Loc,Nom\|Number=Plur,Sing\|POS=PRON\|Person=1,3`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=1,3\|Tense=Past`, `Aspect=Perf\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Abl\|Mood=Pot\|POS=VERB\|Polarity=Pos`, `Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Evident=Nfh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=3`, `Aspect=Prog\|Case=Nom\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Number=Plur\|POS=ADJ\|Person=1`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, 
`Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Hab,Perf\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Mood=Cnd\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `POS=X`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Mood=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Rfl`, `Case=Abl\|POS=VERB\|Polarity=Neg`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=DET\|Person=3\|Tense=Past`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=2,3\|Person[psor]=3\|Tense=Pres`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=ADP\|Person=1`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Loc\|Mood=Cnd\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Prog\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Loc,Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Nom\|Mood=Cnd\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Past`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=1,3\|Person[psor]=3\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Mood=Des\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|POS=NOUN\|Person=1,3\|Tense=Past`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Number=Plur\|POS=NOUN\|Person=1`, `Case=Nom\|Number=Plur\|POS=ADP\|Person=1`, `Aspect=Imp\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, `Case=Dat\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, 
`Aspect=Prog\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1\|Polarity=Neg`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person=1`, `Case=Equ\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=PRON\|Person=3\|Tense=Pres`, `Case=Nom\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Voice=Pass`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Nom\|Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Cau`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Case=Nom\|Number=Plur\|POS=ADJ\|Person=3\|Polarity=Pos`, `Number=Plur\|POS=NOUN\|Person=2`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=ADP\|Person=2\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Cau`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=1,3\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Mood=Pot\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Gen,Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2`, `Case=Loc,Nom\|Number=Sing\|POS=PROPN\|Person=3`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Past`, `Case=Nom\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Voice=Cau`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Abl,Loc\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|POS=PRON\|Person=3\|Tense=Pres`, `Aspect=Imp\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=2\|Person[psor]=2`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, 
`Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=1`, `Case=Loc\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|Tense=Past`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Number=Plur\|POS=AUX\|Person=1`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|POS=NOUN\|Person=1,3\|Tense=Pres`, `Aspect=Imp\|Mood=Pot\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Abl\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=2\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Gen\|Number=Sing\|POS=ADP\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Abbr=Yes\|Case=Loc\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=2`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person=2`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg`, `Aspect=Hab,Perf\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=1\|Person[psor]=1`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past,Pres\|VerbForm=Part`, `Case=Equ\|Number=Sing\|POS=PROPN\|Person=3`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=2,3\|Tense=Past`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Case=Loc,Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos\|Tense=Pres`, 
`Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=2\|Person[psor]=1`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3`, `Case=Nom\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Case=Ins\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Prog\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Equ\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Perf\|Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=1`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=3`, `Aspect=Prog\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=1\|Person[psor]=1`, `Aspect=Imp\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg`, `Number=Sing\|POS=NOUN\|Person=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `POS=ADJ\|Polarity=Neg`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1,3\|Person[psor]=3\|Tense=Pres`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, 
`Aspect=Imp,Perf\|Case=Nom\|Mood=Gen,Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut,Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Cnd\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Evident=Nfh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Imp,Perf\|Mood=Cnd\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut,Pres`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Mood=Pot\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Case=Gen\|Mood=Cnd\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2`, `Aspect=Imp\|Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Aspect=Hab\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Acc\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:emph`, `amod`, `appos`, `aux`, `aux:q`, `case`, `cc`, `cc:preconj`, `ccomp`, `clf`, `compound`, `compound:lvc`, `compound:redup`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `flat`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `vocative`, `xcomp` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PER`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `TITLE`, `WORK_OF_ART` |
</details>
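The label inventories listed above can also be queried programmatically from the installed pipeline. The sketch below is illustrative only: the package name `tr_pipeline_name` is a placeholder assumption, while `nlp.get_pipe(...).labels` is the standard spaCy accessor for a trainable component's label set.

```python
# Minimal sketch for inspecting a pipeline's label scheme.
# "tr_pipeline_name" is a placeholder (assumption), not a name taken from this card.
import spacy

nlp = spacy.load("tr_pipeline_name")

# Each trainable component (tagger, morphologizer, parser, ner) exposes its labels.
for pipe_name in ("tagger", "morphologizer", "parser", "ner"):
    if pipe_name in nlp.pipe_names:
        component = nlp.get_pipe(pipe_name)
        print(pipe_name, len(component.labels))
```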
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 91.42 |
| `POS_ACC` | 90.52 |
| `MORPH_ACC` | 88.93 |
| `LEMMA_ACC` | 81.72 |
| `DEP_UAS` | 72.75 |
| `DEP_LAS` | 63.55 |
| `SENTS_P` | 85.45 |
| `SENTS_R` | 81.61 |
| `SENTS_F` | 83.49 |
| `ENTS_F` | 88.94 |
| `ENTS_P` | 88.90 |
| `ENTS_R` | 88.97 | |
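The scores above use spaCy's standard scorer keys (`TAG_ACC` ↔ `tag_acc`, `DEP_UAS` ↔ `dep_uas`, `ENTS_F` ↔ `ents_f`, and so on). A minimal evaluation sketch against a held-out corpus is shown below; the pipeline name and the `dev.spacy` file are placeholder assumptions, not artefacts shipped with this card.

```python
# Minimal evaluation sketch using spaCy's built-in scorer.
# "tr_pipeline_name" and "dev.spacy" are placeholders (assumptions).
import spacy
from spacy.training import Corpus

nlp = spacy.load("tr_pipeline_name")   # the trained pipeline package
corpus = Corpus("dev.spacy")           # held-out annotations in .spacy (DocBin) format
examples = list(corpus(nlp))

scores = nlp.evaluate(examples)
# Score keys mirror the rows of the accuracy table, scaled 0-1 instead of 0-100.
for key in ("tag_acc", "pos_acc", "morph_acc", "lemma_acc",
            "dep_uas", "dep_las", "sents_f", "ents_f"):
    print(key, round(scores[key] * 100, 2))
```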
Declan/ChicagoTribune_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- spacy
- token-classification
language:
- tr
license: cc-by-sa-4.0
model-index:
- name: tr_core_news_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8953552753
- name: NER Recall
type: recall
value: 0.8828096567
- name: NER F Score
type: f_score
value: 0.889038209
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9119084416
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9067747055
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.891348845
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.8231760731
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.7348022033
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.6372603014
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.8446550816
---
Turkish large-sized pipeline for TrSpaCy. Components: tok2vec, tagger, morphologizer, lemmatizer, parser, ner. A minimal usage sketch follows the feature table below.
| Feature | Description |
| --- | --- |
| **Name** | `tr_core_news_lg` |
| **Version** | `3.4.2` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser` |
| **Vectors** | -1 keys, 200000 unique vectors (300 dimensions) |
| **Sources** | [UD Turkish BOUN](https://github.com/UniversalDependencies/UD_Turkish-BOUN) (Türk, Utku; Atmaca, Furkan; Özateş, Şaziye Betül; Berk, Gözde; Bedir, Seyyit Talha; Köksal, Abdullatif; Öztürk Başaran, Balkız; Güngör, Tunga; Özgür, Arzucan)<br />[Turkish Wiki NER dataset](https://github.com/turkish-nlp-suite/NER-datasets/tree/main/Turkish-Wiki-NER-Dataset) (Duygu Altinok, Co-one Istanbul)<br />[PANX/WikiANN](http://hlt.sztaki.hu/resources/hunnerwiki.html) (Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, Heng Ji)<br />[Large-sized Turkish Floret word vectors (MC4 corpus)](https://huggingface.co/turkish-nlp-suite/tr_vectors_web_lg) (Duygu Altinok) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [Duygu Altinok](https://github.com/turkish-nlp-suite/turkish-spacy-models) |
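Once the package is installed (for example from the wheel released alongside this model), it loads like any other spaCy pipeline. The sketch below is a minimal usage example; the Turkish sentence is an illustrative assumption, while `pos_`, `tag_`, `morph`, `lemma_`, `dep_` and `doc.ents` are standard spaCy attributes filled in by the components listed above.

```python
# Minimal usage sketch (assumes the tr_core_news_lg package is already installed).
import spacy

nlp = spacy.load("tr_core_news_lg")

# Illustrative example sentence (assumption, not taken from the card).
doc = nlp("Türkiye'nin başkenti Ankara'dır.")

# Per-token output of the tagger, morphologizer, trainable_lemmatizer and parser.
for token in doc:
    print(token.text, token.pos_, token.tag_, str(token.morph), token.lemma_, token.dep_)

# Named entities predicted by the ner component.
for ent in doc.ents:
    print(ent.text, ent.label_)
```

The 300-dimensional word vectors listed under **Vectors** are likewise available through `token.vector`.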
### Label Scheme
<details>
<summary>View label scheme (1552 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADP`, `ADV`, `ANum`, `ANum_Adj`, `ANum_Ness`, `ANum_Noun`, `ANum_With`, `ANum_Zero`, `Abr`, `Abr_With`, `Adj`, `Adj_Ness`, `Adj_With`, `Adj_Without`, `Adj_Zero`, `Adv`, `Adverb`, `Adverb_Adverb`, `Adverb_Noun`, `Adverb_Zero`, `Conj`, `Conj_Conj`, `DET`, `Demons`, `Demons_Zero`, `Det`, `Det_Zero`, `Dup`, `Interj`, `NAdj`, `NAdj_Aux`, `NAdj_Ness`, `NAdj_Noun`, `NAdj_Rel`, `NAdj_Verb`, `NAdj_With`, `NAdj_Without`, `NAdj_Zero`, `NNum`, `NNum_Rel`, `NNum_Zero`, `NOUN`, `Neg`, `Ness`, `Noun`, `Noun_Ness`, `Noun_Noun`, `Noun_Rel`, `Noun_Since`, `Noun_Verb`, `Noun_With`, `Noun_With_Ness`, `Noun_With_Verb`, `Noun_With_Zero`, `Noun_Without`, `Noun_Zero`, `PCAbl`, `PCAbl_Rel`, `PCAcc`, `PCDat`, `PCDat_Zero`, `PCGen`, `PCIns`, `PCIns_Zero`, `PCNom`, `PCNom_Adj`, `PCNom_Noun`, `PCNom_Zero`, `PRON`, `PUNCT`, `Pers`, `Pers_Ness`, `Pers_Pers`, `Pers_Rel`, `Pers_Zero`, `Postp`, `Prop`, `Prop_Conj`, `Prop_Rel`, `Prop_Since`, `Prop_With`, `Prop_Zero`, `Punc`, `Punc_Noun_Ness`, `Punc_Noun_Rel`, `Quant`, `Quant_Zero`, `Ques`, `Ques_Zero`, `Reflex`, `Reflex_Zero`, `Rel`, `SYM`, `Since`, `Since_Since`, `Verb`, `Verb_Conj`, `Verb_Ness`, `Verb_Noun`, `Verb_Verb`, `Verb_With`, `Verb_Zero`, `With`, `Without`, `Without_Zero`, `Zero` |
| **`morphologizer`** | `NumType=Card\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1,3\|Person[psor]=3\|Tense=Pres`, `POS=PUNCT`, `POS=ADV`, `POS=NOUN`, `Case=Nom\|Number=Sing\|POS=ADJ\|Person=3`, `POS=DET`, `Case=Loc\|Number=Sing\|POS=VERB\|Person=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3`, `POS=ADJ`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person=3`, `POS=PRON`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Acc\|Number=Plur\|POS=NOUN\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Case=Nom\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Dat\|Number=Sing\|POS=PROPN\|Person=3`, `POS=VERB\|Polarity=Pos`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Abl\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `POS=INTJ`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Ins\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Loc\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Acc\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3`, `POS=CCONJ`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Conv\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Gen\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=ADP\|Person=3`, `Case=Dat\|Number=Plur\|POS=NOUN\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Nom\|POS=VERB\|Polarity=Pos`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Acc\|Number=Sing\|POS=PROPN\|Person=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `POS=ADP`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3`, 
`Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=3`, `Aspect=Perf\|Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Prog\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person=3`, `Mood=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Plur\|POS=ADJ\|Person=3`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, 
`Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person=1`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Ins\|Number=Sing\|POS=NOUN\|Person=3`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Sing\|POS=ADJ\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Prog\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Mood=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2`, `Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Loc\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Hab\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv\|Voice=Pass`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Equ\|Number=Sing\|POS=PRON\|Person=1`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, 
`Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Case=Loc\|POS=VERB\|Polarity=Pos\|Voice=Pass`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Mood=Des,Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Hab\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Ins\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Ins\|POS=VERB\|Polarity=Neg`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Nom\|POS=VERB\|Polarity=Pos\|Voice=Pass`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=ADJ\|Person=2,3\|Polarity=Pos`, `POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Case=Nom\|POS=NOUN\|Polarity=Pos`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Loc\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Plur\|POS=NOUN\|Person=3`, 
`Case=Loc\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Ins\|Number=Sing\|POS=VERB\|Person=1`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `POS=VERB\|Polarity=Pos\|Voice=Pass`, `Aspect=Imp\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=1`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Plur\|POS=PROPN\|Person=3`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Mood=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv`, `POS=AUX`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=NUM\|Person=3`, `POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Gen\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Mood=Pot\|POS=VERB\|Polarity=Pos`, `Case=Abl\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Loc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=1`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, `POS=VERB`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3`, `Aspect=Perf\|Case=Loc\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Perf\|Case=Gen\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=3`, 
`Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Rfl`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Equ\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=ADJ\|Person=1`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Cnd\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Nom\|NumType=Ord\|Number=Sing\|POS=NUM\|Person=3`, `Case=Nom\|Number=Sing\|POS=AUX\|Person=3`, `Case=Nom\|Number=Sing\|POS=ADV\|Person=3`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Hab\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Dat\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos`, `Case=Nom\|Number=Plur\|POS=AUX\|Person=3`, `Case=Ins\|POS=VERB\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Plur,Sing\|POS=NOUN\|Person=2,3`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=1,3\|Tense=Pres`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Conv`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, 
`Case=Nom\|POS=ADV\|Polarity=Pos`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person=1`, `POS=PROPN`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Equ\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Case=Loc\|POS=VERB\|Polarity=Pos`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Perf\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Imp\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1,3\|Person[psor]=3\|Tense=Pres`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Hab\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=1\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Loc\|Number=Sing\|POS=ADJ\|Person=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv\|Voice=Pass`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos`, `Aspect=Perf\|Number[psor]=Sing\|POS=AUX\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, 
`Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=2\|Person[psor]=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=2\|Person[psor]=3`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=2`, `POS=VERB\|Polarity=Neg`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos\|Tense=Pres`, `Number=Sing\|POS=VERB\|Person=3`, `Case=Equ\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=ADJ\|Person=3`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Abl\|Number=Sing\|POS=VERB\|Person=3`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Mood=Imp\|POS=VERB\|VerbForm=Conv`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=VERB\|Person=3`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Dat,Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Ins\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Gen\|Number=Sing\|POS=AUX\|Person=3`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Aspect=Perf\|Case=Abl\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2`, `Case=Loc\|Mood=Imp\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=2,3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=2`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Past`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, 
`Case=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Prog\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Prog\|Case=Nom\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=2`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Voice=Pass`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Prog\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Ins\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `POS=NUM`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur\|POS=PRON\|Person=1,3\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Cau`, `Case=Loc\|Number=Sing\|POS=NOUN\|Person=1`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres\|VerbForm=Conv`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Mood=Ind\|POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, 
`Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=ADJ\|Person=3`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Case=Nom\|POS=VERB\|Polarity=Neg`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Abl\|POS=VERB\|Polarity=Pos`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `NumType=Ord\|POS=NUM`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=1\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1,3\|Person[psor]=3\|Tense=Past`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Loc,Nom\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `POS=SYM`, `Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Number=Plur\|POS=VERB\|Person=1`, `Case=Dat\|Number=Sing\|POS=ADP\|Person=3`, `Aspect=Hab\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|POS=PRON\|Person=1,3\|Tense=Pres`, 
`Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Voice=Cau`, `Aspect=Prog\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Case=Nom\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=NOUN\|Person=1,3\|Tense=Past`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|POS=ADP\|Tense=Pres\|VerbForm=Conv`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Mood=Nec\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Nom\|Number=Sing\|POS=PROPN\|Person=3\|Polarity=Pos`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=3`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=1`, `Case=Gen\|Number=Plur\|POS=PROPN\|Person=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|PronType=Ind`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Mood=Nec\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Voice=Cau`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, 
`Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=PRON\|Person=2,3\|Polarity=Pos\|PronType=Dem`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Evident=Nfh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Tense=Past`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Case=Loc\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Abl\|POS=VERB\|Polarity=Pos\|Voice=Pass`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Equ\|Number=Sing\|POS=NUM\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Mood=Cnd\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Hab\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Conv`, 
`Case=Ins\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Number=Sing\|POS=ADJ\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Pot\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Dat\|Number=Plur\|POS=AUX\|Person=3`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos`, `Aspect=Perf\|Mood=Cnd\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Imp\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Equ\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Echo=Rdp\|POS=X`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Abl\|Number=Plur\|POS=PROPN\|Person=3`, `Aspect=Perf\|Case=Acc\|Mood=Ind\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Past`, `Aspect=Prog\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, 
`Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Imp\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Equ\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Ins\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Imp\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Aspect=Imp\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Ins\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person=3\|Person[psor]=3`, `Evident=Nfh\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Conv`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=PROPN\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Evident=Nfh\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|VerbForm=Conv`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Gen,Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Plur\|POS=ADP\|Person=3`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=1\|Person[psor]=1`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NUM\|Person=1\|Person[psor]=1`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=ADJ\|Person=1,3\|Tense=Past`, `Aspect=Hab\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Ins\|POS=VERB\|Polarity=Pos`, 
`Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=PROPN\|Person=3`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Prog\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `POS=NOUN\|Polarity=Pos`, `Aspect=Imp\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=PRON\|Person=3\|Tense=Pres`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=ADV\|Person=3\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Cnd\|Number=Sing\|POS=ADV\|Person=3\|Tense=Pres`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Imp,Perf\|Mood=Gen\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Case=Abl\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Conv`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=1\|Person[psor]=2`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Prog\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Loc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, 
`Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=PROPN\|Person=3\|Tense=Past`, `Aspect=Imp\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Ins\|Number=Plur\|POS=NUM\|Person=3`, `Aspect=Prog\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Equ\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2`, `Aspect=Prog\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Abl\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Prog\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Conv`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Abl\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Acc\|Mood=Pot\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg`, `Aspect=Hab,Perf\|Mood=Cnd,Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Prog\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Abl\|Mood=Gen\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Nom\|POS=VERB\|Polarity=Neg\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Case=Loc\|Number=Plur\|POS=NOUN\|Person=1`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=2\|Person[psor]=1`, 
`Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab,Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|Polarity=Neg\|Tense=Past,Pres\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Imp\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Evident=Nfh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Imp\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Case=Nom\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=ADP\|Person=3`, `Aspect=Hab\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Prog\|Case=Nom\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=ADP\|Person=3\|Tense=Pres`, `Mood=Nec\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Rfl`, `Case=Acc\|Number=Sing\|POS=ADP\|Person=3`, `Case=Loc,Nom\|Number=Sing\|POS=PRON\|Person=3`, `Case=Loc\|Number=Sing\|POS=VERB\|Person=3`, `Case=Nom\|NumType=Card\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Imp,Perf\|Mood=Gen\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut,Pres`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `POS=VERB\|Polarity=Pos\|Voice=Rfl`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Number=Sing\|POS=VERB\|Person=1`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, 
`Case=Gen\|Number=Sing\|POS=NUM\|Person=3`, `Case=Ins\|Number=Plur\|POS=NOUN\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=1\|Person[psor]=3\|Tense=Past`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Number=Sing\|POS=ADP\|Person=3`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Plur\|POS=VERB\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Conv\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Cau`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pqp`, `Aspect=Perf\|Mood=Ind\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3\|Tense=Past`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|Voice=Pass`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Hab\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=ADJ\|Polarity=Pos`, `Aspect=Imp\|Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Acc\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1`, 
`Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Case=Dat\|Mood=Ind\|Number=Plur,Sing\|POS=ADJ\|Person=1,3\|Tense=Pres`, `POS=PROPN\|Polarity=Pos`, `Aspect=Imp\|Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Voice=Cau`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Sing\|POS=ADP\|Person=3`, `Aspect=Perf\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=1`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Dat,Nom\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NUM\|Person=3\|Person[psor]=3\|Tense=Pres`, `Evident=Nfh\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Prog\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Ins\|Number=Sing\|POS=VERB\|Person=2`, `Case=Nom\|Mood=Imp\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=2,3\|Person[psor]=3\|Polarity=Pos`, `Case=Loc\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Tense=Pres`, `Aspect=Imp\|Case=Dat\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=2,3\|Polarity=Pos\|Tense=Pres`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Cnd\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Fut`, 
`Case=Equ\|Number=Sing\|POS=ADJ\|Person=3`, `Evident=Nfh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Neg`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Acc\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Ins\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Loc\|Evident=Nfh\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Prog\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Hab\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Number=Plur\|POS=ADJ\|Person=3`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Abl\|Number=Plur\|POS=VERB\|Person=3`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2`, 
`Case=Nom\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past`, `Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Loc\|POS=NOUN\|Polarity=Pos`, `Mood=Des\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past`, `Aspect=Imp\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres`, `Aspect=Perf\|Case=Gen\|Evident=Fh\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=3\|Tense=Past`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Rcp`, `POS=ADV\|Polarity=Pos`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Voice=Rcp`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Fut`, `Aspect=Hab\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|POS=PRON\|Person=3\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Case=Nom\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3\|Reflex=Yes`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Rfl`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=ADP\|Person=3\|Tense=Past`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Pass`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Mood=Imp\|Number=Sing\|POS=ADJ\|Person=2\|Polarity=Pos`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Imp\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, 
`Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=2\|Person[psor]=1`, `Case=Acc\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Pres`, `Case=Abl\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Dat\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=2\|Person[psor]=2\|Reflex=Yes`, `Aspect=Prog\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Plur,Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Gen\|Number=Plur\|POS=ADJ\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Tense=Past`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|POS=PROPN\|Person=1,3\|Tense=Past`, `Abbr=Yes\|Case=Dat\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Past`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Plur\|POS=ADP\|Person=2`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Tense=Pres`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=1`, `Evident=Nfh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `POS=SCONJ`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Acc\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, `Aspect=Perf\|Case=Gen\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Dat\|Number=Plur\|POS=ADP\|Person=3`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Voice=Pass`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Acc\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `NumType=Dist\|POS=NUM`, 
`Case=Ins\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=2\|Person[psor]=2`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PART\|Person=3\|Person[psor]=3`, `POS=ADP\|Polarity=Pos`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Case=Loc\|Number=Plur\|POS=PROPN\|Person=3`, `Case=Abl\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1,3`, `Case=Equ\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Evident=Nfh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Loc\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Perf\|Case=Loc\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=2\|Person[psor]=2\|Voice=Rfl`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Conv`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Past`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Loc,Nom\|Number=Plur,Sing\|POS=NOUN\|Person=2,3`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person=3\|Person[psor]=1`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=X\|Person=3`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Acc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Gen\|Mood=Gen\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Perf\|Case=Abl\|Mood=Gen\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, 
`Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Person[psor]=1`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg`, `Aspect=Prog\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Imp\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=NUM\|Person=3`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Perf\|Case=Ins\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Mood=Imp\|Number=Sing\|POS=NOUN\|Person=2,3\|Polarity=Pos`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=2`, `Aspect=Hab\|Evident=Nfh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Loc\|POS=VERB\|Polarity=Neg`, `Case=Loc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Loc\|Mood=Imp\|Number=Plur,Sing\|POS=ADJ\|Person=2,3\|Polarity=Pos`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Aspect=Prog\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1\|Tense=Past`, 
`Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Aspect=Perf\|Evident=Nfh\|Mood=Gen\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past,Pres`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Evident=Nfh\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Gen,Pot\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `NumType=Card\|POS=ADJ`, `Case=Gen,Nom\|Number=Plur,Sing\|POS=PRON\|Person=1,3`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Voice=Cau`, `Aspect=Imp\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Case=Ins\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Acc\|Mood=Gen\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=2\|Person[psor]=2`, `Case=Ins\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Acc\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Hab\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Aspect=Imp\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres`, `Case=Ins\|POS=VERB\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Plur\|POS=AUX\|Person=2`, `Case=Nom\|Number=Plur\|POS=NUM\|Person=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Person[psor]=3`, `Aspect=Perf\|Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=ADP\|Person=1\|Tense=Pres`, 
`Aspect=Hab\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Prog\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Rfl`, `Case=Nom\|Number=Plur,Sing\|POS=ADJ\|Person=2,3`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Aspect=Imp\|Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Aspect=Hab\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Voice=Cau`, `Case=Equ\|Number=Plur\|POS=NUM\|Person=3`, `Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Number=Sing\|POS=VERB\|Person=2`, `Aspect=Imp\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1`, `Number=Sing\|POS=ADJ\|Person=1`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=ADP\|Person=1\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=X\|Person=3\|Person[psor]=1`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=1\|Person[psor]=3`, `Aspect=Perf\|Mood=Gen,Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Mood=Ind,Nec\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|Polarity=Pos\|Tense=Past`, `Mood=Nec\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Nom\|Number=Sing\|POS=ADV\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Abl\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Case=Loc\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=3`, `Aspect=Imp\|Mood=Pot\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, 
`Aspect=Hab,Perf\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Prog\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Hab\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Prog\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Polite=Infm\|Tense=Past`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=2\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Loc\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person=2`, `Case=Equ\|Number=Plur\|POS=NOUN\|Person=3`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Rfl`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Cnd\|Number=Sing\|POS=PRON\|Person=1,3\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Rfl`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres`, `Aspect=Perf\|Case=Acc\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person=2`, `Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADP\|Person=3\|Person[psor]=2`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, `Aspect=Imp\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut`, `Case=Nom\|POS=VERB\|Polarity=Neg\|Voice=Pass`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Abl\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Hab\|Case=Nom\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, 
`Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Evident=Nfh\|Mood=Gen\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past,Pres`, `Case=Ins\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=3`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Aspect=Hab\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Hab\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=ADV\|Person=3\|Tense=Past`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=1\|Person[psor]=1`, `Aspect=Imp\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Nom\|Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Imp\|POS=VERB\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=ADJ\|Person=1,3\|Tense=Pres`, `Aspect=Imp\|Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=2`, `Case=Loc,Nom\|Number=Plur,Sing\|POS=PRON\|Person=1,3`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Prog\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Dat\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=1,3\|Tense=Past`, `Aspect=Perf\|Mood=Opt\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Abl\|Mood=Pot\|POS=VERB\|Polarity=Pos`, `Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Evident=Nfh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=3`, `Aspect=Prog\|Case=Nom\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Number=Plur\|POS=ADJ\|Person=1`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past`, 
`Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Hab,Perf\|Mood=Gen\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Mood=Cnd\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `POS=X`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres\|VerbForm=Conv`, `Aspect=Hab\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Mood=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Rfl`, `Case=Abl\|POS=VERB\|Polarity=Neg`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=DET\|Person=3\|Tense=Past`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|Number[psor]=Sing\|POS=NOUN\|Person=2,3\|Person[psor]=3\|Tense=Pres`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|Voice=Cau`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=ADP\|Person=1`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Loc\|Mood=Cnd\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3\|Tense=Pres`, `Aspect=Prog\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Cau`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Case=Loc,Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=3`, `Evident=Nfh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Nom\|Mood=Cnd\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Past`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=3\|Person[psor]=1`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person=1,3\|Person[psor]=3\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Mood=Des\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|POS=NOUN\|Person=1,3\|Tense=Past`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Number=Plur\|POS=NOUN\|Person=1`, `Case=Nom\|Number=Plur\|POS=ADP\|Person=1`, `Aspect=Imp\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut`, `Case=Dat\|NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, 
`Aspect=Prog\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NUM\|Person=3\|Person[psor]=1\|Polarity=Neg`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person=1`, `Case=Equ\|Number=Sing\|POS=VERB\|Person=3`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Aspect=Perf\|Mood=Opt\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|POS=PRON\|Person=3\|Tense=Pres`, `Case=Nom\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Voice=Pass`, `Case=Ins\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Case=Nom\|Mood=Des\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Voice=Cau`, `Aspect=Hab\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Aspect=Imp\|Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Cau`, `Case=Nom\|Number=Plur\|POS=ADJ\|Person=3\|Polarity=Pos`, `Number=Plur\|POS=NOUN\|Person=2`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=ADP\|Person=2\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Des\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|Voice=Cau`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=ADJ\|Person=1,3\|Tense=Past`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Mood=Pot\|POS=VERB\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Gen,Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2`, `Case=Loc,Nom\|Number=Sing\|POS=PROPN\|Person=3`, `Aspect=Hab\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Case=Loc\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Past`, `Case=Nom\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Voice=Cau`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Abl,Loc\|Number=Sing\|POS=NOUN\|Person=3`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Tense=Pres`, `Aspect=Perf\|Case=Nom\|Mood=Gen\|Number=Plur,Sing\|POS=PRON\|Person=3\|Tense=Pres`, `Aspect=Imp\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=2\|Person[psor]=2`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, 
`Aspect=Prog\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|Voice=Cau`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=1`, `Case=Loc\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Nom\|Mood=Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Perf\|Evident=Fh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|Tense=Past`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Number=Plur\|POS=AUX\|Person=1`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|Number=Plur,Sing\|POS=NOUN\|Person=1,3\|Tense=Pres`, `Aspect=Imp\|Mood=Pot\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Case=Abl\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=3\|Polarity=Neg\|Tense=Pres`, `Aspect=Perf\|Evident=Fh\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Dat\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=2`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=2\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Gen\|Number=Sing\|POS=ADP\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Pass`, `Abbr=Yes\|Case=Loc\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=2`, `Aspect=Perf\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person=2`, `Aspect=Perf\|Case=Loc\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Vnoun`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg`, `Aspect=Hab,Perf\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=1\|Person[psor]=1`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Aspect=Perf\|Mood=Gen\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Past,Pres\|VerbForm=Part`, `Case=Equ\|Number=Sing\|POS=PROPN\|Person=3`, `Aspect=Perf\|Case=Nom\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=2,3\|Tense=Past`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Imp\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Part`, `Case=Loc,Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1`, `Aspect=Hab\|Case=Nom\|Mood=Ind\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos\|Tense=Pres`, 
`Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person=2\|Person[psor]=1`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3`, `Case=Nom\|Mood=Nec\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg`, `Case=Ins\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Prog\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres`, `Case=Equ\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person=3\|Person[psor]=3`, `Case=Loc\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Evident=Nfh\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Aspect=Prog\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Aspect=Perf\|Number[psor]=Plur\|POS=VERB\|Person[psor]=3\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Cau`, `Case=Acc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Aspect=Perf\|Case=Loc\|Mood=Gen\|Number=Plur,Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Aspect=Perf\|Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Voice=Cau`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=1`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1\|Person[psor]=3`, `Aspect=Prog\|Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person=1\|Person[psor]=1`, `Aspect=Imp\|Mood=Pot\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=1\|Polarity=Neg`, `Number=Sing\|POS=NOUN\|Person=1`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person=3\|Person[psor]=3\|Polarity=Pos`, `Mood=Des\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Voice=Pass`, `Aspect=Perf\|Evident=Nfh\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=2`, `Aspect=Hab\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|Voice=Pass`, `POS=ADJ\|Polarity=Neg`, `Aspect=Perf\|Mood=Pot\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=2\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Case=Nom\|Mood=Ind\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person=1,3\|Person[psor]=3\|Tense=Pres`, `Aspect=Prog\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Voice=Cau`, 
`Aspect=Imp,Perf\|Case=Nom\|Mood=Gen,Pot\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut,Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|Number[psor]=Plur\|POS=PROPN\|Person=3\|Person[psor]=3`, `Aspect=Perf\|Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Cnd\|Number=Sing\|POS=ADJ\|Person=3\|Tense=Pres`, `Case=Nom\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Evident=Nfh\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Aspect=Imp,Perf\|Mood=Cnd\|Number=Plur,Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut,Pres`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person=3\|Person[psor]=1\|Polarity=Pos`, `Mood=Pot\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Cau`, `Aspect=Perf\|Case=Gen\|Mood=Cnd\|Number=Sing\|POS=NOUN\|Person=3\|Tense=Pres`, `Case=Loc\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Voice=Cau`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Voice=Cau`, `Case=Loc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Person[psor]=2`, `Aspect=Imp\|Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Part`, `Aspect=Perf\|Evident=Fh\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past\|Voice=Pass`, `Aspect=Hab\|Evident=Nfh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=ADJ\|Person=3\|Person[psor]=3`, `Case=Nom\|Evident=Nfh\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past`, `Case=Acc\|Number=Sing\|POS=NOUN\|Person=3\|Polarity=Pos`, `Case=Acc\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person=3\|Person[psor]=3\|Polarity=Neg`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:emph`, `amod`, `appos`, `aux`, `aux:q`, `case`, `cc`, `cc:preconj`, `ccomp`, `clf`, `compound`, `compound:lvc`, `compound:redup`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `flat`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `vocative`, `xcomp` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 91.19 |
| `POS_ACC` | 90.68 |
| `MORPH_ACC` | 89.13 |
| `LEMMA_ACC` | 82.32 |
| `DEP_UAS` | 73.48 |
| `DEP_LAS` | 63.73 |
| `SENTS_P` | 87.17 |
| `SENTS_R` | 81.92 |
| `SENTS_F` | 84.47 |
| `ENTS_F` | 88.90 |
| `ENTS_P` | 89.54 |
| `ENTS_R` | 88.28 | |
Declan/ChicagoTribune_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
        language: hi
metrics:
- name: Wer
type: wer
value: 32.09599593667993
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4519
- Wer: 32.01
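
A minimal usage sketch with the `transformers` ASR pipeline (the repo id below is a placeholder inferred from the model name, not confirmed by this card):

```python
from transformers import pipeline

# Placeholder checkpoint id -- substitute the actual repo id of this fine-tuned model.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/whisper-small-hi",
)

# Transcribe a local Hindi audio file (the pipeline handles decoding and resampling).
print(transcriber("sample_hindi.wav")["text"])
```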
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
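
For reference, these settings map roughly onto `Seq2SeqTrainingArguments` as sketched below; the output directory is an assumption and the actual training script is not part of this card.

```python
from transformers import Seq2SeqTrainingArguments

# A sketch of how the listed hyperparameters translate to Seq2SeqTrainingArguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-hi",   # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                         # "Native AMP" mixed precision
)
```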
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| 0.1011 | 2.44 | 1000 | 0.3075 | 34.63 |
| 0.0264 | 4.89 | 2000 | 0.3558 | 33.13 |
| 0.0025 | 7.33 | 3000 | 0.4214 | 32.59 |
| 0.0006 | 9.78 | 4000 | 0.4519 | 32.01 |
| 0.0002 | 12.22 | 5000 | 0.4679 | 32.10 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1
- Datasets 2.5.3.dev0
- Tokenizers 0.12.1
|
Declan/ChicagoTribune_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: creativeml-openrail-m
---
`Broken mirror, shattered mirror, brokenM_style`: this style gives a shattered mirror / reflection effect to prompts.
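
A minimal sketch of loading a textual-inversion embedding like this one in `diffusers`; the base model and the embedding filename below are assumptions, not specified by this card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model and local embedding file name -- adjust to your setup.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("./brokenM_style.pt", token="brokenM_style")

# Use the trigger token in the prompt to apply the shattered-mirror style.
image = pipe("portrait of a woman, brokenM_style").images[0]
image.save("broken_mirror.png")
```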
## License

This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

- You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content.
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
- You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully).

Please read the full license here |
Declan/FoxNews_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/jldevezas/1667497736714/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1352291023867834370/OcubRjdf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">José Devezas</div>
<div style="text-align: center; font-size: 14px;">@jldevezas</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from José Devezas.
| Data | José Devezas |
| --- | --- |
| Tweets downloaded | 1690 |
| Retweets | 439 |
| Short tweets | 106 |
| Tweets kept | 1145 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27g8vb39/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jldevezas's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16q8rwg7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16q8rwg7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jldevezas')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Declan/FoxNews_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
model-index:
- name: vit_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
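
Although the card does not yet include a usage example, a fine-tuned ViT classifier like this one can typically be used through the `image-classification` pipeline (a sketch; the checkpoint id below is a placeholder):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual location of this fine-tuned checkpoint.
classifier = pipeline("image-classification", model="your-username/vit_model")

# The beans dataset contains bean-leaf photos; any local image path or URL works here.
predictions = classifier("path/to/leaf.jpg")
print(predictions)
```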
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Declan/FoxNews_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Anishsavla2/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Anishsavla2/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8588
- Validation Loss: 3.6754
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
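
For reference, the optimizer entry above corresponds roughly to the following construction (a sketch using the listed values; compiling and fitting the model are not shown):

```python
from transformers import AdamWeightDecay

# Optimizer configuration taken from the hyperparameters listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
# model.compile(optimizer=optimizer)  # as in the standard TF fine-tuning examples
```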
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8588 | 3.6754 | 0 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Declan/FoxNews_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | # Final Project - Connectionist Predictive Models
### Student name
|**Project Type**|**Selected Model**|**Language**|
|--|--|--|
|<br>Object Detection|YOLOv5|PyTorch|
## Performance
The trained model achieves a performance of **69%**.
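
For reference, a trained YOLOv5 checkpoint like this one can typically be loaded for inference through `torch.hub` (a sketch; the weight path below is an assumption, since the actual path is not given here):

```python
import torch

# 'best.pt' is the name YOLOv5 usually gives the best checkpoint of a run;
# replace it with the real path to the trained weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Run inference on an arbitrary image and print the detections.
results = model("example.jpg")
results.print()
```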
### Training block output
<details>
<summary>Click to expand!</summary>
```text
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
0/2999 14.1G 0.1176 0.03496 0.04929 227 640: 100% 5/5 [00:08<00:00, 1.65s/it]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:04<00:00, 4.23s/it]
all 79 172 0.00117 0.29 0.00144 0.000293
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
1/2999 13.3G 0.11 0.03478 0.04837 216 640: 100% 5/5 [00:03<00:00, 1.34it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.98s/it]
all 79 172 0.00148 0.36 0.00143 0.000484
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
2/2999 13.3G 0.09838 0.03372 0.04588 189 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.42s/it]
all 79 172 0.00276 0.37 0.00585 0.00135
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
3/2999 13.3G 0.08941 0.03499 0.04303 171 640: 100% 5/5 [00:03<00:00, 1.39it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.42s/it]
all 79 172 0.00324 0.61 0.00878 0.00303
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
4/2999 13.3G 0.08229 0.03798 0.03902 230 640: 100% 5/5 [00:03<00:00, 1.39it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.24s/it]
all 79 172 0.00479 0.803 0.0192 0.0057
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
5/2999 13.3G 0.07235 0.03762 0.03592 187 640: 100% 5/5 [00:03<00:00, 1.39it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.32s/it]
all 79 172 0.772 0.0641 0.0685 0.0199
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
6/2999 13.3G 0.06836 0.03883 0.03304 227 640: 100% 5/5 [00:03<00:00, 1.38it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.332 0.221 0.0677 0.0184
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
7/2999 13.3G 0.06247 0.03535 0.0311 201 640: 100% 5/5 [00:03<00:00, 1.38it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.57s/it]
all 79 172 0.326 0.266 0.082 0.0217
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
8/2999 13.3G 0.05948 0.0349 0.02835 161 640: 100% 5/5 [00:03<00:00, 1.36it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.30s/it]
all 79 172 0.393 0.295 0.175 0.0498
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
9/2999 13.3G 0.05892 0.03628 0.02495 221 640: 100% 5/5 [00:03<00:00, 1.38it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.40s/it]
all 79 172 0.386 0.303 0.138 0.0436
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
10/2999 13.3G 0.05797 0.03046 0.02325 158 640: 100% 5/5 [00:03<00:00, 1.39it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.24s/it]
all 79 172 0.449 0.376 0.226 0.0926
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
11/2999 13.3G 0.05604 0.03243 0.02248 226 640: 100% 5/5 [00:03<00:00, 1.37it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.23s/it]
all 79 172 0.519 0.326 0.3 0.129
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
12/2999 13.3G 0.05705 0.03044 0.02158 181 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.508 0.342 0.361 0.191
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
13/2999 13.3G 0.05534 0.02701 0.01887 167 640: 100% 5/5 [00:03<00:00, 1.37it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.29s/it]
all 79 172 0.429 0.367 0.242 0.0978
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
14/2999 13.3G 0.05445 0.03095 0.01875 188 640: 100% 5/5 [00:03<00:00, 1.35it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.15s/it]
all 79 172 0.517 0.495 0.393 0.178
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
15/2999 13.3G 0.05658 0.02785 0.01648 175 640: 100% 5/5 [00:03<00:00, 1.33it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.512 0.479 0.358 0.177
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
16/2999 13.3G 0.05553 0.02625 0.01534 186 640: 100% 5/5 [00:03<00:00, 1.37it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.533 0.464 0.412 0.178
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
17/2999 13.3G 0.0524 0.02705 0.01531 187 640: 100% 5/5 [00:04<00:00, 1.18it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:02<00:00, 2.25s/it]
all 79 172 0.304 0.483 0.299 0.12
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
18/2999 13.3G 0.05295 0.02631 0.01442 162 640: 100% 5/5 [00:04<00:00, 1.01it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.99s/it]
all 79 172 0.649 0.416 0.435 0.203
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
19/2999 13.3G 0.05205 0.027 0.01497 227 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.305 0.518 0.336 0.151
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
20/2999 13.3G 0.05057 0.02601 0.01201 190 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.456 0.594 0.442 0.192
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
21/2999 13.3G 0.0488 0.02679 0.01386 138 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.418 0.586 0.428 0.221
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
22/2999 13.3G 0.04713 0.02576 0.01446 215 640: 100% 5/5 [00:03<00:00, 1.33it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.642 0.477 0.467 0.23
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
23/2999 13.3G 0.04759 0.02555 0.0115 179 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.611 0.474 0.436 0.21
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
24/2999 13.3G 0.0453 0.02547 0.01341 218 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.661 0.485 0.517 0.273
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
25/2999 13.3G 0.04469 0.02657 0.01159 229 640: 100% 5/5 [00:03<00:00, 1.34it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.62 0.42 0.481 0.236
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
26/2999 13.3G 0.04451 0.02416 0.0126 202 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.15s/it]
all 79 172 0.719 0.431 0.502 0.28
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
27/2999 13.3G 0.04454 0.02421 0.0113 165 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.484 0.424 0.438 0.217
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
28/2999 13.3G 0.04353 0.02453 0.01121 222 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.328 0.335 0.307 0.165
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
29/2999 13.3G 0.04318 0.024 0.01177 168 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.399 0.317 0.292 0.141
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
30/2999 13.3G 0.04106 0.0244 0.01042 202 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.654 0.512 0.52 0.29
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
31/2999 13.3G 0.04151 0.02421 0.01037 193 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.73 0.389 0.46 0.254
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
32/2999 13.3G 0.04187 0.02569 0.009244 193 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.372 0.432 0.397 0.184
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
33/2999 13.3G 0.04139 0.02411 0.007808 191 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.657 0.571 0.583 0.354
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
34/2999 13.3G 0.03919 0.02373 0.008649 186 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.719 0.515 0.556 0.273
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
35/2999 13.3G 0.03933 0.02373 0.01062 194 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.646 0.496 0.499 0.297
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
36/2999 13.3G 0.03985 0.02292 0.01068 171 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.597 0.514 0.424 0.212
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
37/2999 13.3G 0.04022 0.02436 0.01181 206 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.468 0.473 0.381 0.199
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
38/2999 13.3G 0.0392 0.02418 0.01042 207 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.589 0.442 0.495 0.25
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
39/2999 13.3G 0.03949 0.0232 0.008525 175 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.578 0.413 0.467 0.233
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
40/2999 13.3G 0.03951 0.02309 0.00936 189 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.473 0.597 0.552 0.319
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
41/2999 13.3G 0.03824 0.02332 0.01016 183 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.46 0.647 0.494 0.284
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
42/2999 13.3G 0.03829 0.02417 0.009787 197 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.289 0.588 0.436 0.211
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
43/2999 13.3G 0.03897 0.02372 0.009366 182 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.272 0.612 0.385 0.217
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
44/2999 13.3G 0.0391 0.02348 0.008347 223 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.621 0.392 0.457 0.238
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
45/2999 13.3G 0.03792 0.02103 0.01101 159 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.543 0.488 0.527 0.293
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
46/2999 13.3G 0.03747 0.02327 0.009737 211 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.423 0.621 0.509 0.278
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
47/2999 13.3G 0.03701 0.02207 0.008706 189 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.459 0.505 0.448 0.231
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
48/2999 13.3G 0.03722 0.02309 0.008686 179 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.488 0.637 0.532 0.289
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
49/2999 13.3G 0.03637 0.02043 0.007798 179 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.732 0.443 0.491 0.267
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
50/2999 13.3G 0.03709 0.02212 0.007632 194 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.29s/it]
all 79 172 0.468 0.676 0.564 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
51/2999 13.3G 0.03752 0.02221 0.009035 168 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.417 0.667 0.451 0.248
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
52/2999 13.3G 0.03637 0.02205 0.007745 216 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.602 0.533 0.563 0.305
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
53/2999 13.3G 0.03561 0.02235 0.006919 213 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.10s/it]
all 79 172 0.71 0.514 0.575 0.317
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
54/2999 13.3G 0.0375 0.02151 0.007491 189 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.724 0.39 0.472 0.246
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
55/2999 13.3G 0.03676 0.02192 0.007115 211 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.617 0.509 0.502 0.308
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
56/2999 13.3G 0.03543 0.02149 0.008343 174 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.599 0.519 0.537 0.302
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
57/2999 13.3G 0.03516 0.02129 0.00804 185 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.09s/it]
all 79 172 0.48 0.465 0.442 0.253
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
58/2999 13.3G 0.03451 0.02335 0.009221 200 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.09s/it]
all 79 172 0.503 0.407 0.386 0.196
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
59/2999 13.3G 0.0356 0.02126 0.006811 248 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.697 0.329 0.407 0.219
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
60/2999 13.3G 0.03437 0.02229 0.007112 226 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.422 0.53 0.456 0.252
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
61/2999 13.3G 0.03398 0.02009 0.007508 209 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.716 0.369 0.501 0.286
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
62/2999 13.3G 0.03399 0.02136 0.007171 189 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.492 0.623 0.51 0.289
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
63/2999 13.3G 0.0354 0.02072 0.008472 176 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.49s/it]
all 79 172 0.623 0.603 0.616 0.373
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
64/2999 13.3G 0.03459 0.02183 0.008503 187 640: 100% 5/5 [00:04<00:00, 1.21it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.6 0.618 0.642 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
65/2999 13.3G 0.03388 0.02139 0.008551 205 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.614 0.314 0.34 0.18
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
66/2999 13.3G 0.03483 0.02107 0.009369 173 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.494 0.505 0.489 0.257
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
67/2999 13.3G 0.0334 0.0195 0.006718 162 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.608 0.412 0.454 0.246
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
68/2999 13.3G 0.03517 0.02186 0.008161 200 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.691 0.441 0.521 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
69/2999 13.3G 0.03397 0.0213 0.007542 192 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.598 0.403 0.453 0.233
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
70/2999 13.3G 0.03464 0.02079 0.00808 220 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.11s/it]
all 79 172 0.657 0.415 0.505 0.287
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
71/2999 13.3G 0.03414 0.02142 0.006937 149 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.529 0.476 0.479 0.28
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
72/2999 13.3G 0.03195 0.02103 0.007308 189 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.611 0.424 0.426 0.258
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
73/2999 13.3G 0.03293 0.0218 0.00651 222 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.15it/s]
all 79 172 0.728 0.479 0.542 0.337
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
74/2999 13.3G 0.03236 0.01866 0.009649 127 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.588 0.594 0.595 0.36
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
75/2999 13.3G 0.03235 0.01942 0.007454 176 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.713 0.562 0.592 0.334
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
76/2999 13.3G 0.03392 0.02069 0.006954 187 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.753 0.474 0.537 0.31
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
77/2999 13.3G 0.03292 0.02024 0.00708 179 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.724 0.502 0.523 0.285
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
78/2999 13.3G 0.03178 0.021 0.006592 208 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.724 0.503 0.527 0.304
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
79/2999 13.3G 0.03131 0.01963 0.0057 187 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.703 0.471 0.539 0.329
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
80/2999 13.3G 0.03203 0.02018 0.008287 198 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.77 0.499 0.564 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
81/2999 13.3G 0.03084 0.01961 0.007307 206 640: 100% 5/5 [00:04<00:00, 1.16it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.81s/it]
all 79 172 0.687 0.463 0.535 0.318
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
82/2999 13.3G 0.03089 0.02012 0.006733 202 640: 100% 5/5 [00:04<00:00, 1.20it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.27s/it]
all 79 172 0.597 0.511 0.501 0.287
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
83/2999 13.3G 0.03064 0.01998 0.005996 211 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.18s/it]
all 79 172 0.601 0.418 0.48 0.25
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
84/2999 13.3G 0.03132 0.01948 0.004924 206 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.651 0.478 0.534 0.317
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
85/2999 13.3G 0.03003 0.01933 0.006001 216 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.718 0.447 0.572 0.33
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
86/2999 13.3G 0.0322 0.01857 0.006746 204 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.712 0.534 0.57 0.315
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
87/2999 13.3G 0.02937 0.0195 0.007804 208 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.667 0.539 0.59 0.377
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
88/2999 13.3G 0.03086 0.02039 0.007138 200 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.628 0.543 0.558 0.323
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
89/2999 13.3G 0.03102 0.01957 0.006189 216 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.558 0.57 0.476 0.272
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
90/2999 13.3G 0.03026 0.02042 0.008099 211 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.668 0.57 0.512 0.306
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
91/2999 13.3G 0.02908 0.01987 0.007552 200 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.616 0.483 0.481 0.278
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
92/2999 13.3G 0.03033 0.01963 0.007505 171 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.808 0.519 0.616 0.369
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
93/2999 13.3G 0.03 0.01985 0.007565 192 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.741 0.466 0.533 0.313
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
94/2999 13.3G 0.03061 0.01982 0.006072 164 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.839 0.448 0.552 0.332
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
95/2999 13.3G 0.02993 0.01983 0.00618 197 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.11s/it]
all 79 172 0.783 0.435 0.549 0.35
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
96/2999 13.3G 0.0297 0.01942 0.004898 193 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.808 0.534 0.602 0.383
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
97/2999 13.3G 0.03024 0.0192 0.007548 199 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.677 0.545 0.644 0.388
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
98/2999 13.3G 0.02892 0.01992 0.006328 202 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.659 0.531 0.599 0.365
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
99/2999 13.3G 0.02903 0.01783 0.008322 179 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.598 0.545 0.559 0.325
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
100/2999 13.3G 0.03175 0.01939 0.005829 211 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.602 0.477 0.479 0.279
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
101/2999 13.3G 0.02981 0.01811 0.006895 187 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.532 0.483 0.449 0.254
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
102/2999 13.3G 0.02894 0.01893 0.007293 178 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.76 0.362 0.532 0.311
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
103/2999 13.3G 0.02853 0.01932 0.005571 233 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.603 0.514 0.581 0.337
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
104/2999 13.3G 0.02875 0.01752 0.006674 162 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.76 0.454 0.57 0.332
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
105/2999 13.3G 0.02874 0.01946 0.006926 211 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.694 0.45 0.506 0.289
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
106/2999 13.3G 0.02967 0.01745 0.005547 205 640: 100% 5/5 [00:04<00:00, 1.22it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.75s/it]
all 79 172 0.748 0.507 0.519 0.296
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
107/2999 13.3G 0.03031 0.01972 0.006291 210 640: 100% 5/5 [00:04<00:00, 1.25it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.18s/it]
all 79 172 0.745 0.489 0.565 0.335
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
108/2999 13.3G 0.02897 0.01927 0.006829 186 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.743 0.543 0.545 0.312
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
109/2999 13.3G 0.03018 0.01939 0.006308 237 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.715 0.591 0.575 0.308
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
110/2999 13.3G 0.02912 0.01956 0.006358 192 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.717 0.545 0.581 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
111/2999 13.3G 0.02963 0.01883 0.007443 157 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.732 0.498 0.617 0.348
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
112/2999 13.3G 0.02796 0.01824 0.006296 226 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.67 0.632 0.623 0.384
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
113/2999 13.3G 0.02855 0.01817 0.005978 190 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.675 0.574 0.594 0.321
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
114/2999 13.3G 0.02922 0.01838 0.006151 185 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.782 0.457 0.584 0.336
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
115/2999 13.3G 0.02933 0.0188 0.008184 161 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.10s/it]
all 79 172 0.588 0.567 0.559 0.324
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
116/2999 13.3G 0.02704 0.0186 0.005759 217 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.712 0.594 0.61 0.387
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
117/2999 13.3G 0.02805 0.01756 0.007583 183 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.72 0.483 0.574 0.36
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
118/2999 13.3G 0.02756 0.0179 0.006019 190 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.726 0.576 0.603 0.351
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
119/2999 13.3G 0.02793 0.01717 0.007643 168 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.735 0.538 0.595 0.338
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
120/2999 13.3G 0.0286 0.01874 0.005134 223 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.11s/it]
all 79 172 0.653 0.528 0.551 0.323
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
121/2999 13.3G 0.0283 0.01745 0.005626 189 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.763 0.444 0.544 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
122/2999 13.3G 0.02849 0.01963 0.00636 189 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.728 0.518 0.622 0.356
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
123/2999 13.3G 0.02766 0.01739 0.005559 157 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.705 0.387 0.452 0.269
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
124/2999 13.3G 0.02744 0.01842 0.007753 207 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.756 0.553 0.605 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
125/2999 13.3G 0.02833 0.01658 0.005275 144 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.771 0.441 0.507 0.327
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
126/2999 13.3G 0.02873 0.01809 0.006018 230 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.12s/it]
all 79 172 0.777 0.509 0.608 0.33
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
127/2999 13.3G 0.02782 0.01771 0.005374 184 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.736 0.517 0.548 0.345
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
128/2999 13.3G 0.02666 0.01821 0.004101 210 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.638 0.596 0.615 0.336
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
129/2999 13.3G 0.02662 0.01685 0.005201 182 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.722 0.597 0.629 0.366
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
130/2999 13.3G 0.02622 0.01672 0.006191 144 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.802 0.468 0.542 0.325
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
131/2999 13.3G 0.02667 0.01867 0.00618 197 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.609 0.454 0.49 0.301
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
132/2999 13.3G 0.02787 0.01969 0.005775 229 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.525 0.437 0.459 0.266
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
133/2999 13.3G 0.02774 0.01836 0.006047 212 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.13it/s]
all 79 172 0.632 0.52 0.575 0.317
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
134/2999 13.3G 0.02741 0.01768 0.00579 219 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.625 0.632 0.585 0.37
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
135/2999 13.3G 0.02713 0.01778 0.005949 217 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.514 0.585 0.452 0.257
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
136/2999 13.3G 0.0277 0.01698 0.007301 162 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.578 0.407 0.444 0.255
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
137/2999 13.3G 0.0272 0.01767 0.004752 179 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.479 0.457 0.483 0.284
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
138/2999 13.3G 0.02749 0.018 0.004356 216 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.729 0.473 0.532 0.288
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
139/2999 13.3G 0.02768 0.01737 0.006317 188 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.657 0.548 0.533 0.298
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
140/2999 13.3G 0.02608 0.01767 0.00451 184 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.10s/it]
all 79 172 0.737 0.553 0.586 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
141/2999 13.3G 0.02657 0.01743 0.004523 201 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.51s/it]
all 79 172 0.781 0.515 0.606 0.363
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
142/2999 13.3G 0.02724 0.01774 0.006709 184 640: 100% 5/5 [00:04<00:00, 1.16it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.88s/it]
all 79 172 0.742 0.576 0.637 0.346
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
143/2999 13.3G 0.02575 0.01799 0.005323 212 640: 100% 5/5 [00:04<00:00, 1.25it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.30s/it]
all 79 172 0.776 0.492 0.563 0.35
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
144/2999 13.3G 0.02654 0.01726 0.005264 166 640: 100% 5/5 [00:04<00:00, 1.19it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.60s/it]
all 79 172 0.679 0.535 0.565 0.314
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
145/2999 13.3G 0.02687 0.01829 0.005005 250 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.81 0.523 0.563 0.342
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
146/2999 13.3G 0.02672 0.01687 0.005595 208 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.76 0.503 0.556 0.328
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
147/2999 13.3G 0.02691 0.01723 0.005911 180 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.08s/it]
all 79 172 0.748 0.537 0.579 0.338
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
148/2999 13.3G 0.02626 0.01806 0.004589 227 640: 100% 5/5 [00:04<00:00, 1.20it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.39s/it]
all 79 172 0.795 0.512 0.556 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
149/2999 13.3G 0.02653 0.01662 0.005405 177 640: 100% 5/5 [00:04<00:00, 1.24it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.791 0.464 0.531 0.328
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
150/2999 13.3G 0.0274 0.01625 0.006181 147 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.749 0.544 0.602 0.352
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
151/2999 13.3G 0.02522 0.01715 0.004715 184 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.801 0.549 0.596 0.363
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
152/2999 13.3G 0.02576 0.01662 0.004771 139 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.825 0.503 0.559 0.349
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
153/2999 13.3G 0.02895 0.01797 0.005624 185 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.837 0.465 0.54 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
154/2999 13.3G 0.02476 0.01641 0.005789 184 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.79 0.498 0.564 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
155/2999 13.3G 0.02548 0.01872 0.005374 212 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.824 0.498 0.573 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
156/2999 13.3G 0.02632 0.01815 0.006032 229 640: 100% 5/5 [00:04<00:00, 1.21it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.715 0.549 0.597 0.343
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
157/2999 13.3G 0.02511 0.01649 0.005817 195 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.17it/s]
all 79 172 0.821 0.566 0.663 0.385
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
158/2999 13.3G 0.02516 0.01653 0.005879 159 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.686 0.641 0.643 0.372
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
159/2999 13.3G 0.02657 0.01595 0.005654 185 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.00s/it]
all 79 172 0.676 0.625 0.652 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
160/2999 13.3G 0.02582 0.0173 0.005202 215 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.729 0.547 0.621 0.351
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
161/2999 13.3G 0.02607 0.01732 0.006912 218 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.679 0.483 0.547 0.322
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
162/2999 13.3G 0.02534 0.01606 0.005221 169 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.799 0.44 0.601 0.379
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
163/2999 13.3G 0.02638 0.01726 0.006002 216 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.00s/it]
all 79 172 0.689 0.635 0.631 0.381
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
164/2999 13.3G 0.02443 0.01882 0.005191 279 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.721 0.67 0.685 0.404
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
165/2999 13.3G 0.02414 0.01719 0.003583 182 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.67 0.646 0.681 0.409
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
166/2999 13.3G 0.02653 0.01778 0.005552 195 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.796 0.659 0.663 0.362
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
167/2999 13.3G 0.02577 0.01602 0.005825 178 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.14s/it]
all 79 172 0.721 0.69 0.704 0.404
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
168/2999 13.3G 0.02503 0.01867 0.004954 244 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.692 0.658 0.701 0.417
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
169/2999 13.3G 0.02524 0.01849 0.006853 222 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.07s/it]
all 79 172 0.716 0.598 0.644 0.371
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
170/2999 13.3G 0.02458 0.017 0.004295 193 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.589 0.633 0.537 0.318
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
171/2999 13.3G 0.02478 0.01661 0.003602 186 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.685 0.599 0.582 0.355
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
172/2999 13.3G 0.02531 0.01569 0.005721 175 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.687 0.607 0.62 0.367
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
173/2999 13.3G 0.02561 0.0182 0.005804 214 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.769 0.533 0.642 0.393
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
174/2999 13.3G 0.02489 0.01687 0.006483 180 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.647 0.56 0.638 0.39
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
175/2999 13.3G 0.02413 0.01744 0.006103 222 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.722 0.537 0.659 0.392
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
176/2999 13.3G 0.02617 0.0165 0.004711 183 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.06s/it]
all 79 172 0.658 0.547 0.588 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
177/2999 13.3G 0.02391 0.0172 0.006477 160 640: 100% 5/5 [00:04<00:00, 1.19it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.21it/s]
all 79 172 0.722 0.485 0.57 0.329
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
178/2999 13.3G 0.02595 0.0167 0.004114 203 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.75 0.421 0.499 0.285
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
179/2999 13.3G 0.02545 0.01615 0.005344 185 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.862 0.506 0.609 0.366
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
180/2999 13.3G 0.0244 0.01572 0.005259 219 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.848 0.563 0.642 0.371
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
181/2999 13.3G 0.02403 0.01656 0.004655 198 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.708 0.582 0.607 0.349
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
182/2999 13.3G 0.025 0.01808 0.005477 238 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.756 0.603 0.637 0.389
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
183/2999 13.3G 0.02387 0.01685 0.007013 194 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.75 0.625 0.693 0.435
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
184/2999 13.3G 0.02442 0.01655 0.005348 242 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.19it/s]
all 79 172 0.754 0.529 0.602 0.362
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
185/2999 13.3G 0.02413 0.01696 0.0051 175 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.843 0.546 0.663 0.395
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
186/2999 13.3G 0.02388 0.01608 0.003896 203 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.82 0.542 0.656 0.401
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
187/2999 13.3G 0.02426 0.01638 0.005311 166 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.72s/it]
all 79 172 0.802 0.555 0.616 0.374
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
188/2999 13.3G 0.02368 0.01607 0.005871 181 640: 100% 5/5 [00:03<00:00, 1.26it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.736 0.628 0.667 0.394
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
189/2999 13.3G 0.0257 0.01712 0.006646 205 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.781 0.441 0.596 0.332
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
190/2999 13.3G 0.02485 0.01648 0.005049 222 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.17it/s]
all 79 172 0.76 0.466 0.549 0.304
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
191/2999 13.3G 0.02291 0.01608 0.005364 217 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.696 0.473 0.51 0.315
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
192/2999 13.3G 0.02464 0.01737 0.006162 205 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.696 0.493 0.535 0.325
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
193/2999 13.3G 0.02452 0.01706 0.005202 197 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.801 0.429 0.562 0.332
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
194/2999 13.3G 0.02326 0.01667 0.004886 190 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.768 0.432 0.531 0.294
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
195/2999 13.3G 0.02424 0.01685 0.005938 231 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.87 0.384 0.528 0.319
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
196/2999 13.3G 0.02383 0.01643 0.005414 160 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.752 0.584 0.617 0.351
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
197/2999 13.3G 0.02474 0.01629 0.004213 195 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.819 0.516 0.617 0.336
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
198/2999 13.3G 0.02378 0.01605 0.004158 202 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.691 0.492 0.623 0.353
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
199/2999 13.3G 0.02474 0.01601 0.005006 196 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.845 0.481 0.57 0.349
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
200/2999 13.3G 0.02325 0.01525 0.004579 200 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.679 0.425 0.496 0.301
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
201/2999 13.3G 0.02245 0.01579 0.00427 226 640: 100% 5/5 [00:04<00:00, 1.23it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.72s/it]
all 79 172 0.743 0.428 0.494 0.303
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
202/2999 13.3G 0.02279 0.01541 0.007018 163 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.24s/it]
all 79 172 0.795 0.485 0.547 0.328
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
203/2999 13.3G 0.02381 0.01648 0.004034 192 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.695 0.529 0.619 0.37
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
204/2999 13.3G 0.02344 0.01555 0.003905 196 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.34s/it]
all 79 172 0.81 0.49 0.566 0.348
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
205/2999 13.3G 0.02414 0.01678 0.005969 225 640: 100% 5/5 [00:04<00:00, 1.21it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.77s/it]
all 79 172 0.793 0.499 0.551 0.343
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
206/2999 13.3G 0.02397 0.01629 0.005902 211 640: 100% 5/5 [00:04<00:00, 1.20it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.848 0.569 0.645 0.394
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
207/2999 13.3G 0.02395 0.01554 0.005462 170 640: 100% 5/5 [00:04<00:00, 1.24it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.826 0.536 0.643 0.394
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
208/2999 13.3G 0.02359 0.0166 0.005498 224 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.747 0.591 0.647 0.398
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
209/2999 13.3G 0.02367 0.01604 0.00558 225 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.747 0.53 0.614 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
210/2999 13.3G 0.02559 0.01545 0.005393 171 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.15it/s]
all 79 172 0.787 0.519 0.618 0.384
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
211/2999 13.3G 0.02273 0.0167 0.005695 202 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.79 0.512 0.612 0.386
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
212/2999 13.3G 0.02254 0.01724 0.005585 208 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.788 0.571 0.639 0.375
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
213/2999 13.3G 0.02419 0.01494 0.005212 207 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.842 0.53 0.648 0.364
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
214/2999 13.3G 0.02568 0.01664 0.004367 183 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.68 0.579 0.58 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
215/2999 13.3G 0.02463 0.01619 0.005758 209 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.69 0.573 0.583 0.335
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
216/2999 13.3G 0.02261 0.01598 0.005402 208 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.798 0.543 0.61 0.374
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
217/2999 13.3G 0.02275 0.01476 0.004736 160 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.714 0.539 0.56 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
218/2999 13.3G 0.02411 0.01569 0.004459 187 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.78 0.521 0.564 0.35
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
219/2999 13.3G 0.02208 0.01444 0.00422 173 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.627 0.504 0.557 0.353
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
220/2999 13.3G 0.023 0.0164 0.004591 218 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.01s/it]
all 79 172 0.838 0.465 0.597 0.36
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
221/2999 13.3G 0.02155 0.01479 0.003508 192 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.00s/it]
all 79 172 0.807 0.548 0.654 0.384
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
222/2999 13.3G 0.02316 0.01552 0.004726 217 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.747 0.677 0.707 0.39
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
223/2999 13.3G 0.0233 0.01691 0.007177 180 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.03s/it]
all 79 172 0.742 0.567 0.639 0.372
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
224/2999 13.3G 0.02206 0.01515 0.00455 173 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.76 0.499 0.619 0.368
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
225/2999 13.3G 0.0244 0.01643 0.004599 181 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.84 0.505 0.606 0.352
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
226/2999 13.3G 0.02239 0.01505 0.005543 201 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.676 0.555 0.635 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
227/2999 13.3G 0.02369 0.01622 0.005755 177 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.729 0.595 0.608 0.356
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
228/2999 13.3G 0.02266 0.01487 0.004909 204 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.665 0.54 0.549 0.345
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
229/2999 13.3G 0.0233 0.01656 0.00652 172 640: 100% 5/5 [00:04<00:00, 1.17it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.10s/it]
all 79 172 0.769 0.492 0.603 0.384
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
230/2999 13.3G 0.02232 0.01574 0.004084 183 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.678 0.568 0.599 0.381
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
231/2999 13.3G 0.0231 0.01623 0.004324 230 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.756 0.594 0.632 0.394
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
232/2999 13.3G 0.02286 0.01546 0.00569 185 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.06it/s]
all 79 172 0.789 0.53 0.608 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
233/2999 13.3G 0.0229 0.01477 0.004437 149 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.683 0.499 0.573 0.338
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
234/2999 13.3G 0.0234 0.01698 0.005284 215 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.771 0.472 0.544 0.317
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
235/2999 13.3G 0.02219 0.0148 0.004658 186 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.17s/it]
all 79 172 0.717 0.49 0.543 0.333
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
236/2999 13.3G 0.02321 0.0145 0.005254 161 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.73 0.506 0.55 0.328
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
237/2999 13.3G 0.02371 0.01623 0.004812 204 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.693 0.511 0.554 0.343
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
238/2999 13.3G 0.02394 0.01551 0.004886 161 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.791 0.444 0.594 0.376
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
239/2999 13.3G 0.02325 0.0154 0.004177 195 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.00it/s]
all 79 172 0.758 0.629 0.657 0.421
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
240/2999 13.3G 0.02192 0.0154 0.003914 202 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.683 0.556 0.631 0.388
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
241/2999 13.3G 0.02239 0.01488 0.007844 184 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.13it/s]
all 79 172 0.694 0.441 0.561 0.346
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
242/2999 13.3G 0.02268 0.0156 0.005672 179 640: 100% 5/5 [00:03<00:00, 1.25it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.61s/it]
all 79 172 0.841 0.46 0.552 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
243/2999 13.3G 0.02341 0.0154 0.005667 185 640: 100% 5/5 [00:04<00:00, 1.23it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.14s/it]
all 79 172 0.662 0.475 0.542 0.315
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
244/2999 13.3G 0.02246 0.01587 0.005929 182 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.674 0.509 0.56 0.354
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
245/2999 13.3G 0.02381 0.01481 0.005467 151 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.14it/s]
all 79 172 0.78 0.568 0.636 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
246/2999 13.3G 0.02173 0.01692 0.005114 238 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.639 0.566 0.578 0.352
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
247/2999 13.3G 0.02268 0.01652 0.004531 200 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.623 0.536 0.525 0.321
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
248/2999 13.3G 0.02235 0.01425 0.005001 192 640: 100% 5/5 [00:04<00:00, 1.25it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.697 0.525 0.599 0.386
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
249/2999 13.3G 0.02352 0.01621 0.003642 222 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.08it/s]
all 79 172 0.625 0.49 0.574 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
250/2999 13.3G 0.02184 0.01575 0.00716 221 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.09it/s]
all 79 172 0.657 0.513 0.563 0.364
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
251/2999 13.3G 0.02174 0.01629 0.004422 242 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.668 0.522 0.576 0.378
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
252/2999 13.3G 0.02075 0.01556 0.004782 225 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.04s/it]
all 79 172 0.705 0.523 0.559 0.341
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
253/2999 13.3G 0.02199 0.01595 0.003561 159 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.818 0.495 0.577 0.366
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
254/2999 13.3G 0.02302 0.01519 0.005618 225 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.17it/s]
all 79 172 0.818 0.511 0.561 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
255/2999 13.3G 0.02252 0.01508 0.004516 209 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.77 0.508 0.567 0.356
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
256/2999 13.3G 0.02207 0.01442 0.005011 174 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.763 0.515 0.566 0.366
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
257/2999 13.3G 0.02165 0.01472 0.005958 205 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.16it/s]
all 79 172 0.737 0.488 0.564 0.35
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
258/2999 13.3G 0.02085 0.01448 0.00546 197 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.11it/s]
all 79 172 0.666 0.512 0.608 0.374
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
259/2999 13.3G 0.02247 0.01579 0.004364 179 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.12it/s]
all 79 172 0.845 0.575 0.625 0.382
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
260/2999 13.3G 0.02216 0.01446 0.004768 206 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.717 0.526 0.613 0.373
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
261/2999 13.3G 0.02163 0.01531 0.004534 214 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.15s/it]
all 79 172 0.667 0.577 0.606 0.385
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
262/2999 13.3G 0.02124 0.0156 0.004753 214 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.691 0.504 0.557 0.333
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
263/2999 13.3G 0.02157 0.01402 0.003773 168 640: 100% 5/5 [00:04<00:00, 1.22it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.78s/it]
all 79 172 0.692 0.618 0.621 0.369
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
264/2999 13.3G 0.02115 0.01505 0.004787 219 640: 100% 5/5 [00:04<00:00, 1.14it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.54s/it]
all 79 172 0.703 0.587 0.617 0.358
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
265/2999 13.3G 0.02119 0.01501 0.003825 230 640: 100% 5/5 [00:04<00:00, 1.24it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.22s/it]
all 79 172 0.633 0.557 0.562 0.34
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
266/2999 13.3G 0.02232 0.01488 0.005141 193 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.20it/s]
all 79 172 0.679 0.553 0.55 0.348
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
267/2999 13.3G 0.02236 0.01423 0.003963 213 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.02s/it]
all 79 172 0.658 0.539 0.565 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
268/2999 13.3G 0.02109 0.01543 0.005812 185 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.17it/s]
all 79 172 0.631 0.529 0.567 0.36
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
269/2999 13.3G 0.02206 0.01438 0.004758 192 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.662 0.507 0.577 0.362
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
270/2999 13.3G 0.02145 0.01416 0.006062 183 640: 100% 5/5 [00:04<00:00, 1.23it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.62s/it]
all 79 172 0.808 0.489 0.598 0.377
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
271/2999 13.3G 0.02128 0.01363 0.004508 210 640: 100% 5/5 [00:04<00:00, 1.23it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.14s/it]
all 79 172 0.788 0.498 0.592 0.373
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
272/2999 13.3G 0.02323 0.01416 0.004713 181 640: 100% 5/5 [00:03<00:00, 1.28it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.02it/s]
all 79 172 0.68 0.556 0.591 0.383
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
273/2999 13.3G 0.02241 0.01433 0.005521 175 640: 100% 5/5 [00:03<00:00, 1.32it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.10it/s]
all 79 172 0.72 0.539 0.587 0.375
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
274/2999 13.3G 0.02156 0.01502 0.005296 187 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.07it/s]
all 79 172 0.7 0.516 0.578 0.371
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
275/2999 13.3G 0.02187 0.01516 0.004791 177 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.15s/it]
all 79 172 0.676 0.566 0.574 0.361
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
276/2999 13.3G 0.02218 0.01589 0.004767 229 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.03it/s]
all 79 172 0.699 0.514 0.59 0.365
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
277/2999 13.3G 0.02195 0.01649 0.004888 205 640: 100% 5/5 [00:03<00:00, 1.30it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.05it/s]
all 79 172 0.737 0.607 0.652 0.416
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
278/2999 13.3G 0.02141 0.01452 0.003921 173 640: 100% 5/5 [00:03<00:00, 1.27it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.01it/s]
all 79 172 0.657 0.603 0.639 0.408
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
279/2999 13.3G 0.02054 0.01555 0.004726 229 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.05s/it]
all 79 172 0.778 0.508 0.665 0.41
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
280/2999 13.3G 0.02179 0.01484 0.004448 188 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.793 0.552 0.646 0.392
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
281/2999 13.3G 0.0219 0.01377 0.006125 195 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.13s/it]
all 79 172 0.744 0.546 0.6 0.392
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
282/2999 13.3G 0.02197 0.01625 0.004896 215 640: 100% 5/5 [00:03<00:00, 1.31it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:00<00:00, 1.04it/s]
all 79 172 0.678 0.521 0.56 0.347
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
283/2999 13.3G 0.02083 0.01468 0.005276 143 640: 100% 5/5 [00:03<00:00, 1.29it/s]
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.52s/it]
all 79 172 0.634 0.553 0.573 0.343
Stopping training early as no improvement observed in last 100 epochs. Best results observed at epoch 183, best model saved as best.pt.
To update EarlyStopping(patience=100) pass a new patience value, i.e. `python train.py --patience 300` or use `--patience 0` to disable EarlyStopping.
284 epochs completed in 0.433 hours.
Optimizer stripped from runs/train/exp/weights/last.pt, 14.5MB
Optimizer stripped from runs/train/exp/weights/best.pt, 14.5MB
Validating runs/train/exp/weights/best.pt...
Fusing layers...
Model summary: 157 layers, 7020913 parameters, 0 gradients, 15.8 GFLOPs
Class Images Instances P R mAP50 mAP50-95: 100% 1/1 [00:01<00:00, 1.20s/it]
all 79 172 0.75 0.624 0.693 0.434
cadeira 79 78 0.714 0.462 0.527 0.255
geladeira 79 4 0.763 0.816 0.895 0.64
monitor 79 36 0.735 0.463 0.572 0.338
quadro 79 54 0.788 0.756 0.78 0.505
```
</details>
### Training evidence

## Roboflow
https://app.roboflow.com/wilsoncesarschool/projetofinalmodelosconexionistas/1
## HuggingFace
https://huggingface.co/wilsonsob/projetoFinal
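## How to use
The snippet below is a minimal sketch of running the trained detector, assuming the exported `best.pt` weights have been downloaded locally and that a test image `sala.jpg` exists; both paths are placeholders, not files confirmed by this card.
```python
import torch

# Load the custom-trained YOLOv5 model from local weights (placeholder path;
# the file can be downloaded from the Hugging Face repo linked above).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

# Run inference on a sample image (placeholder file name).
results = model('sala.jpg')

# Print and save detections for the trained classes:
# cadeira, geladeira, monitor, quadro.
results.print()
results.save()
```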
|
Declan/NPR_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/deltazulu14/1667501296205/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1569374676933033984/NSveEXrv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Delta Zulu</div>
<div style="text-align: center; font-size: 14px;">@deltazulu14</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Delta Zulu.
| Data | Delta Zulu |
| --- | --- |
| Tweets downloaded | 881 |
| Retweets | 108 |
| Short tweets | 150 |
| Tweets kept | 623 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8h87mrlb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deltazulu14's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mwjzatl4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mwjzatl4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deltazulu14')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Declan/NPR_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 1307 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
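For reference, the following is a minimal sketch of how the parameters above fit together in a `sentence-transformers` training script. It is not the original training code: the training pairs are placeholders, the batch size is reduced so the toy example actually runs (the real run used 32), and the starting checkpoint is shown with the card's own `{MODEL_NAME}` placeholder.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer('{MODEL_NAME}')

# Placeholder pairs; ContrastiveLoss expects label 1 for similar and 0 for dissimilar pairs.
train_examples = [
    InputExample(texts=['A man is eating food.', 'A man is eating a meal.'], label=1),
    InputExample(texts=['A plane is taking off.', 'A dog runs in the park.'], label=0),
]

# The actual training used batch_size=32 over 1307 batches.
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=2)

train_loss = losses.ContrastiveLoss(
    model=model,
    distance_metric=losses.SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=10,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
)
```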
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': True}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Declan/NewYorkTimes_model_v1 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-classifier-feedback-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-classifier-feedback-qa
This model is a fine-tuned version of [TTian/bert-mlm-feedback](https://huggingface.co/TTian/bert-mlm-feedback) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Declan/NewYorkTimes_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-ecb-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-ecb-finetuned
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9655 | 1.0 | 17714 | 0.9472 |
| 0.9121 | 2.0 | 35428 | 0.8986 |
| 0.8682 | 3.0 | 53142 | 0.8705 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
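## How to use
A minimal sketch of querying the model with the text-generation pipeline. The repository id is assumed from the model name above and may need the owner's namespace prefixed; the prompt is a placeholder.
```python
from transformers import pipeline

# Repository id assumed from the model name; prepend the owning namespace if needed.
generator = pipeline('text-generation', model='distilgpt2-ecb-finetuned')

print(generator('The Governing Council decided', max_new_tokens=40)[0]['generated_text'])
```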
|
Declan/NewYorkTimes_model_v4 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: platzi-beans-beit-model-eduardo-ag
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-beans-beit-model-eduardo-ag
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3782
- Accuracy: 0.8496
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6894 | 3.85 | 500 | 0.3782 | 0.8496 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
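## How to use
A minimal sketch of querying the classifier with the image-classification pipeline. The repository id is assumed from the model name above and may need the owner's namespace prefixed; the image path is a placeholder.
```python
from transformers import pipeline

# Repository id assumed from the model name; prepend the owning namespace if needed.
classifier = pipeline('image-classification', model='platzi-beans-beit-model-eduardo-ag')

# Placeholder path to a bean leaf photo.
print(classifier('bean_leaf.jpg'))
```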
|
Declan/NewYorkTimes_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-state
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-state
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
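## How to use
Since the base checkpoint is an extractive question-answering model, the fine-tuned model can be queried with the question-answering pipeline. This is a minimal sketch; the repository id is assumed from the model name above, and the question/context strings are placeholders.
```python
from transformers import pipeline

# Repository id assumed from the model name; prepend the owning namespace if needed.
qa = pipeline('question-answering', model='roberta-finetuned-state')

result = qa(
    question='Which state is mentioned?',               # placeholder question
    context='The new facility is located in Gujarat.'   # placeholder context
)
print(result['answer'], result['score'])
```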
|
Declan/Politico_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline_123
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7661674609
- name: NER Recall
type: recall
value: 0.8052226793
- name: NER F Score
type: f_score
value: 0.7852097323
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline_123` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DESCRIPTION`, `TITLE` |
</details>
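### Usage
A minimal sketch of loading the installed pipeline package and extracting entities; the input sentence is a placeholder.
```python
import spacy

# Load the package by the name given in the table above (it must be installed first).
nlp = spacy.load("en_pipeline_123")

doc = nlp("Senior Data Engineer - builds and maintains large-scale data pipelines.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels are DESCRIPTION or TITLE
```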
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 78.52 |
| `ENTS_P` | 76.62 |
| `ENTS_R` | 80.52 |
| `TRANSFORMER_LOSS` | 1811559.14 |
| `NER_LOSS` | 6345113.13 | |
Declan/Politico_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- beeple
- digital art
- aiart
license: creativeml-openrail-m
---
<center><img src="https://huggingface.co/riccardogiorato/beeple-diffusion/resolve/main/assets/robots.png" width="512" height="512"/></center>

# Beeple Diffusion
An AI model that generates artwork with [beeple](https://twitter.com/beeple) style!
Based on a fine-tuned Stable Diffusion v1.5, trained with Dreambooth on more than 600 images of Beeple's artwork.
by [riccardogiorato](https://twitter.com/riccardogiorato)
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "riccardogiorato/beeple-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a magical witch with golden hair with beeple style"
image = pipe(prompt).images[0]
image.save("./magical_witch.png")
```
# **👇Model👇**
AI Model Weights available at huggingface: https://huggingface.co/riccardogiorato/beeple-diffusion
# Usage
After the model is loaded, use the keyword **beeple** in your prompt. |
Declan/Politico_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: smalldata-pysentimiento-robertuito-eng-only-sentiment-single-finetuned-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smalldata-pysentimiento-robertuito-eng-only-sentiment-single-finetuned-memes
This model is a fine-tuned version of [jayantapaul888/twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes](https://huggingface.co/jayantapaul888/twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3693
- Accuracy: 0.8533
- Precision: 0.8686
- Recall: 0.8673
- F1: 0.8678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 378 | 0.3505 | 0.8466 | 0.8687 | 0.8600 | 0.8608 |
| 0.4239 | 2.0 | 756 | 0.3369 | 0.8570 | 0.8725 | 0.8700 | 0.8707 |
| 0.325 | 3.0 | 1134 | 0.3286 | 0.8533 | 0.8700 | 0.8675 | 0.8677 |
| 0.277 | 4.0 | 1512 | 0.3472 | 0.8533 | 0.8681 | 0.8680 | 0.8678 |
| 0.277 | 5.0 | 1890 | 0.3538 | 0.8593 | 0.8736 | 0.8732 | 0.8734 |
| 0.2438 | 6.0 | 2268 | 0.3693 | 0.8533 | 0.8686 | 0.8673 | 0.8678 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Declan/Reuters_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: geevegeorge/customdbv6
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# customdbmodelv6
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `geevegeorge/customdbv6` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
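# A minimal sketch (not the authors' snippet): load the pipeline straight from the
# Hub repo and sample one image. The exact pipeline class is an assumption;
# DiffusionPipeline resolves it from the repo's model_index.json.
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("geevegeorge/customdbmodelv6")
image = pipeline().images[0]
image.save("sample.png")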
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- gradient_accumulation_steps: 8
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/customdbmodelv6/tensorboard?#scalars)
|
Declan/Reuters_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/kristincarolw/1667507776021/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/558354319633039361/IWd6dt31_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pizza Hut</div>
<div style="text-align: center; font-size: 14px;">@kristincarolw</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pizza Hut.
| Data | Pizza Hut |
| --- | --- |
| Tweets downloaded | 2923 |
| Retweets | 527 |
| Short tweets | 413 |
| Tweets kept | 1983 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/999xba5o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kristincarolw's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2osco534) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2osco534/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kristincarolw')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Declan/Reuters_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: unknown
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
# Asmongold model.ckpt for Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. I've trained this model with Dreambooth on 20 images of Twitch streamer Asmongold for text-to-image illustration generation with Stable Diffusion.
Feel free to download, use and share the model as you like. To trigger the AI to generate an illustration based on the trained Asmongold images, make sure to use the tag "asmonbald" in your prompts.
Example:
a detailed portrait photo of a man
vs
a detailed portrait photo of asmonbald
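For a quick test with diffusers, the checkpoint can be loaded like any other Stable Diffusion model once converted to the diffusers format. This is a minimal sketch; the repository id below is a placeholder, not a location confirmed by this card.
```python
from diffusers import StableDiffusionPipeline
import torch

# Placeholder repo id: point this at the diffusers-format version of the checkpoint.
pipe = StableDiffusionPipeline.from_pretrained("your-namespace/asmongold-sd-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a detailed portrait photo of asmonbald"
image = pipe(prompt).images[0]
image.save("asmonbald_portrait.png")
```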
---
|
Declan/Reuters_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3635
- Accuracy: 0.8756
- Precision: 0.8761
- Recall: 0.8756
- F1: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.6142 | 1.0 | 1762 | 0.5396 | 0.8022 | 0.8010 | 0.8022 | 0.8014 |
| 0.4911 | 2.0 | 3524 | 0.4588 | 0.8322 | 0.8332 | 0.8322 | 0.8325 |
| 0.4511 | 3.0 | 5286 | 0.4072 | 0.8562 | 0.8564 | 0.8562 | 0.8559 |
| 0.412 | 4.0 | 7048 | 0.3825 | 0.8673 | 0.8680 | 0.8673 | 0.8672 |
| 0.3886 | 5.0 | 8810 | 0.3677 | 0.8745 | 0.8753 | 0.8745 | 0.8745 |
| 0.3914 | 6.0 | 10572 | 0.3635 | 0.8756 | 0.8761 | 0.8756 | 0.8755 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
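## How to use
A minimal sketch of querying the model with the text-classification pipeline. The full repository id below is an assumption (the card only gives the model name); verify the namespace before use. The input text is a placeholder.
```python
from transformers import pipeline

# Assumed full repository id; verify the owning namespace before use.
classifier = pipeline(
    'text-classification',
    model='jayantapaul888/twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes'
)

print(classifier('This meme absolutely made my day'))
```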
|
Declan/WallStreetJournal_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
---
### angus mcbride style on Stable Diffusion
This is the `<angus-mcbride-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:


















































|
Declan/WallStreetJournal_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
Model trained on images from James Cameron's Avatar movie. It draws Avatar-style characters with the facial features of the person indicated in the prompt.
### Sample images Will Smith: prompt= portrait Will Smith male, avatar style



### Sample images Johnny Depp: prompt= portrait Johnny Depp male, avatar style


 |
DeepChem/ChemBERTa-5M-MLM | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: hubert-large-arabic-darija
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-large-arabic-darija
This model is a fine-tuned version of [asafaya/hubert-large-arabic](https://huggingface.co/asafaya/hubert-large-arabic) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.2.dev0
- Tokenizers 0.13.1
|
DeskDown/MarianMix_en-ja-10 | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
**BlockWorldRTX**
This is the fine-tuned Stable Diffusion model trained on screenshots of Minecraft running with RTX.
Use the token **_BlockWorldRTX_** in your prompts for the effect.
**Examples rendered with the model:**

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
DevsIA/Devs_IA | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Access to model linkanjarad/pytorch-code-generator-gpt-neo is restricted and you are not in the authorized list. Visit https://huggingface.co/linkanjarad/pytorch-code-generator-gpt-neo to ask for access. |
Dhritam/Zova-bot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### Florence Pugh on Stable Diffusion via Dreambooth
#### model by scottisheyebrow
This is the Stable Diffusion model fine-tuned on the Florence Pugh concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks Florence Pugh**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:





|
Digakive/Hsgshs | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: NEW_OCR_bert-base-en-th-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEW_OCR_bert-base-en-th-cased
This model is a fine-tuned version of [Geotrend/bert-base-en-th-cased](https://huggingface.co/Geotrend/bert-base-en-th-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0519 | 1.0 | 5351 | 0.0141 |
| 0.0131 | 2.0 | 10702 | 0.0116 |
| 0.0105 | 3.0 | 16053 | 0.0105 |
| 0.0087 | 4.0 | 21404 | 0.0102 |
| 0.0074 | 5.0 | 26755 | 0.0103 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Doiman/DialoGPT-medium-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-chinese-finetuned-car_corpus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-car_corpus
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the Car Corpus Database.
It achieves the following results on the evaluation set:
- Loss: 1.5187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.799 | 1.0 | 3776 | 1.5830 |
| 0.7419 | 2.0 | 7552 | 1.4930 |
| 0.7245 | 3.0 | 11328 | 1.5187 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
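## How to use
Since the model was fine-tuned with masked-language modeling, it can be queried with the fill-mask pipeline. A minimal sketch; the repository id is assumed from the model name above and may need the owner's namespace prefixed, and the Chinese sentence is a placeholder automotive example.
```python
from transformers import pipeline

# Repository id assumed from the model name; prepend the owning namespace if needed.
fill_mask = pipeline('fill-mask', model='bert-base-chinese-finetuned-car_corpus')

# Placeholder sentence: "This car's [MASK] consumption is very low."
for pred in fill_mask('这辆车的[MASK]耗非常低。'):
    print(pred['token_str'], round(pred['score'], 3))
```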
|
Dongmin/testmodel | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 11 | 2022-11-04T07:59:09Z | ---
language:
- en
tags:
- text2text-generation
widget:
- text: "The <extra_id_0> walks in <extra_id_1> park"
example_title: "Masked Language Modeling"
datasets:
- c4
license: apache-2.0
---
# Model Card for Switch Transformers Base - 64 experts

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
Switch Transformers is a Mixture of Experts (MoE) model trained on a Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the Feed Forward layers replaced by Sparse MLP layers containing "expert" MLPs. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf) the model enables faster training (scaling properties) while being better than T5 on fine-tuned tasks.
As mentioned in the first few lines of the abstract:
> we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face Switch Transformers Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/switch_transformers)
# Usage
Note that these checkpoints have been trained on a Masked Language Modeling (MLM) task. Therefore the checkpoints are not "ready-to-use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing)
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-64")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-64")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-64")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-64", device_map="auto")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-64")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-64", device_map="auto", torch_dtype=torch.float16)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-64")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-64", device_map="auto", load_in_8bit=True)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
# Uses
## Direct Use and Downstream Use
See the [research paper](https://arxiv.org/pdf/2101.03961.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Ethical considerations and risks
More information needed.
## Known Limitations
More information needed.
## Sensitive Use:
More information needed.
# Training Details
## Training Data
The model was trained on a Masked Language Modeling task, on Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.
## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2101.03961.pdf) the model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf).
## Results
For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2101.03961,
doi = {10.48550/ARXIV.2101.03961},
url = {https://arxiv.org/abs/2101.03961},
author = {Fedus, William and Zoph, Barret and Shazeer, Noam},
keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
Waynehillsdev/Wayne_NLP_mT5 | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2022-11-04T07:59:22Z | ---
language:
- en
tags:
- text2text-generation
widget:
- text: "The <extra_id_0> walks in <extra_id_1> park"
example_title: "Masked Language Modeling"
datasets:
- c4
license: apache-2.0
---
# Model Card for Switch Transformers Base - 128 experts

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
Switch Transformers is a Mixture of Experts (MoE) model trained on a Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the feed-forward layers replaced by sparse MLP layers containing the "expert" MLPs. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model enables faster training (scaling properties) while being better than T5 on fine-tuned tasks.
As mentioned in the first few lines of the abstract:
> we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face Switch Transformers Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/switch_transformers)
# Usage
Note that these checkpoints have been trained on a Masked Language Modeling (MLM) task. Therefore the checkpoints are not "ready-to-use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights, or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing).
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-128")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-128", device_map="auto")
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-128", device_map="auto", torch_dtype=torch.float16)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-128", device_map="auto", load_in_8bit=True)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
# Uses
## Direct Use and Downstream Use
See the [research paper](https://arxiv.org/pdf/2101.03961.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Ethical considerations and risks
More information needed.
## Known Limitations
More information needed.
## Sensitive Use:
More information needed.
# Training Details
## Training Data
The model was trained on a Masked Language Modeling task, on the Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.
## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model was trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf).
## Results
For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2101.03961,
doi = {10.48550/ARXIV.2101.03961},
url = {https://arxiv.org/abs/2101.03961},
author = {Fedus, William and Zoph, Barret and Shazeer, Noam},
keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | 2022-11-04T10:32:46Z | ---
language:
- en
tags:
- text2text-generation
widget:
- text: "The <extra_id_0> walks in <extra_id_1> park"
example_title: "Masked Language Modeling"
datasets:
- c4
inference: false
license: apache-2.0
---
# Model Card for Switch Transformers C - 2048 experts (1.6T parameters for 3.1 TB)

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
# TL;DR
Switch Transformers is a Mixture of Experts (MoE) model trained on a Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the feed-forward layers replaced by sparse MLP layers containing the "expert" MLPs. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model enables faster training (scaling properties) while being better than T5 on fine-tuned tasks.
As mentioned in the first few lines of the abstract:
> we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face Switch Transformers Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/switch_transformers)
# Usage
Note that these checkpoints have been trained on a Masked Language Modeling (MLM) task. Therefore the checkpoints are not "ready-to-use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights, or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing).
Find below some example scripts on how to use the model in `transformers`. Bear in mind that the model is **extremely** large, so you may want to use disk offload from `accelerate`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-c-2048")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-c-2048", device_map="auto", offload_folder=<OFFLOAD_FOLDER>)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-c-2048")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-c-2048", device_map="auto", offload_folder=<OFFLOAD_FOLDER>)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
### Running the model on a GPU using different precisions
#### BF16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-c-2048")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-c-2048", device_map="auto", torch_dtype=torch.bfloat16, offload_folder=<OFFLOAD_FOLDER>)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google/switch-c-2048")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-c-2048", device_map="auto", load_in_8bit=True, offload_folder=<OFFLOAD_FOLDER>)
input_text = "A <extra_id_0> walks into a bar a orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```
</details>
# Uses
## Direct Use and Downstream Use
See the [research paper](https://arxiv.org/pdf/2101.03961.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Ethical considerations and risks
More information needed.
## Known Limitations
More information needed.
## Sensitive Use:
More information needed.
# Training Details
## Training Data
The model was trained on a Masked Language Modeling task, on the Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.
## Training Procedure
According to the model card from the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model was trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).
# Evaluation
## Testing Data, Factors & Metrics
The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation:

For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf).
## Results
For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@misc{https://doi.org/10.48550/arxiv.2101.03961,
doi = {10.48550/ARXIV.2101.03961},
url = {https://arxiv.org/abs/2101.03961},
author = {Fedus, William and Zoph, Barret and Shazeer, Noam},
keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
albert-xxlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7,091 | 2022-11-04T10:59:30Z | ---
tags:
- conversational
---
# Kamui Bastion Chatbot |
albert-xxlarge-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42,640 | 2022-11-04T11:11:19Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gogzy/t5-base-finetuned_renre_2021_item1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gogzy/t5-base-finetuned_renre_2021_item1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.0647
- Validation Loss: 4.9004
- Train Rouge1: 14.8649
- Train Rouge2: 8.2192
- Train Rougel: 12.1622
- Train Rougelsum: 14.8649
- Train Gen Len: 19.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
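Although the intended downstream task is not documented, a minimal TensorFlow inference sketch for this checkpoint could look like the following (the task prefix and input text are illustrative assumptions):
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "gogzy/t5-base-finetuned_renre_2021_item1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "summarize:" prefix is an assumption; adjust it to match how the model was fine-tuned.
inputs = tokenizer("summarize: <your input text here>", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```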
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 10.3805 | 9.9375 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 0 |
| 9.2108 | 8.9290 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 1 |
| 8.1249 | 7.6832 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 2 |
| 7.3542 | 6.2012 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 3 |
| 6.0647 | 4.9004 | 14.8649 | 8.2192 | 12.1622 | 14.8649 | 19.0 | 4 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | 2022-11-04T11:28:50Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-emotion-17-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-emotion-17-labels
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4087
- Accuracy: 0.6495
- F1: 0.6481
## Model description
More information needed
## Intended uses & limitations
More information needed
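A minimal usage sketch (the hosting namespace is not given in this card, so `<user>` below is a placeholder):
```python
from transformers import pipeline

# Replace <user> with the namespace that actually hosts this checkpoint.
classifier = pipeline("text-classification", model="<user>/xlm-roberta-base-finetuned-emotion-17-labels")
print(classifier("I can't believe how well this turned out!"))
```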
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.5228 | 1.0 | 177 | 2.0913 | 0.3364 | 0.3043 |
| 1.9362 | 2.0 | 354 | 1.7457 | 0.4353 | 0.4037 |
| 1.5719 | 3.0 | 531 | 1.4817 | 0.5244 | 0.5068 |
| 1.2558 | 4.0 | 708 | 1.3574 | 0.5654 | 0.5556 |
| 1.024 | 5.0 | 885 | 1.3139 | 0.5880 | 0.5788 |
| 0.8271 | 6.0 | 1062 | 1.3123 | 0.5922 | 0.5856 |
| 0.6645 | 7.0 | 1239 | 1.2887 | 0.6099 | 0.6067 |
| 0.5478 | 8.0 | 1416 | 1.3263 | 0.6226 | 0.6201 |
| 0.442 | 9.0 | 1593 | 1.3239 | 0.6346 | 0.6313 |
| 0.3647 | 10.0 | 1770 | 1.3360 | 0.6276 | 0.6241 |
| 0.2957 | 11.0 | 1947 | 1.3942 | 0.6325 | 0.6280 |
| 0.2534 | 12.0 | 2124 | 1.3962 | 0.6403 | 0.6397 |
| 0.2191 | 13.0 | 2301 | 1.4120 | 0.6417 | 0.6399 |
| 0.1918 | 14.0 | 2478 | 1.3978 | 0.6431 | 0.6427 |
| 0.1728 | 15.0 | 2655 | 1.4087 | 0.6495 | 0.6481 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | 2022-11-04T11:33:37Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DNABert_K6_G_quad_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNABert_K6_G_quad_1
This model is a fine-tuned version of [armheb/DNA_bert_6](https://huggingface.co/armheb/DNA_bert_6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0803
- Accuracy: 0.9720
## Model description
More information needed
## Intended uses & limitations
More information needed
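A minimal, hedged usage sketch, assuming the checkpoint keeps the `DNA_bert_6` convention of space-separated, overlapping 6-mers (`<user>` is a placeholder for the hosting namespace and the example sequence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def to_kmers(sequence: str, k: int = 6) -> str:
    # Split a raw DNA string into overlapping k-mers separated by spaces.
    return " ".join(sequence[i:i + k] for i in range(len(sequence) - k + 1))

model_id = "<user>/DNABert_K6_G_quad_1"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(to_kmers("GGGTTAGGGTTAGGGTTAGGG"), return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```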
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0926 | 1.0 | 9375 | 0.0803 | 0.9720 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
bert-base-multilingual-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,749,504 | 2022-11-04T11:37:29Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: nils-nl-to-rx-pt-v7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nils-nl-to-rx-pt-v7
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0224
## Model description
More information needed
## Intended uses & limitations
More information needed
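A minimal, hedged inference sketch (the hosting namespace and the expected input format are not documented here, so `<user>` and the input text are placeholders):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<user>/nils-nl-to-rx-pt-v7"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("<natural-language input here>", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```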
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4389 | 1.0 | 500 | 0.0470 |
| 0.0533 | 2.0 | 1000 | 0.0286 |
| 0.0346 | 3.0 | 1500 | 0.0224 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
bert-base-multilingual-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 328,585 | 2022-11-04T11:44:42Z | from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") |
bert-large-cased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,214 | 2022-11-04T11:47:03Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
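# Note: load_from_hub, evaluate_agent and gym are assumed to be available from the
# Hugging Face Deep RL course notebook this snippet is meant to run in.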
model = load_from_hub(repo_id="harveymannering/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
bert-large-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388,769 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="harveymannering/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
bert-large-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,058,496 | 2022-11-04T12:20:19Z | ---
tags:
- conversational
---
# Melody DialoGPT Model |
camembert-base | [
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,440,898 | 2022-11-04T12:21:05Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch, assuming this checkpoint was saved as a DDPMPipeline by the training script:
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("kueltzho/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
The model was trained on the `huggan/smithsonian_butterflies_subset` dataset (see the model description above).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/kueltzho/ddpm-butterflies-128/tensorboard?#scalars)
|
ctrl | [
"pytorch",
"tf",
"ctrl",
"en",
"arxiv:1909.05858",
"arxiv:1910.09700",
"transformers",
"license:bsd-3-clause",
"has_space"
]
| null | {
"architectures": null,
"model_type": "ctrl",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17,007 | 2022-11-04T12:36:14Z | ---
tags:
- flair
- token-classification
- sequence-tagger-model
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("GuiGel/meddocan-flair-lstm-crf")
# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
``` |
distilbert-base-multilingual-cased | [
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,339,633 | 2022-11-04T12:57:57Z | ---
language: fa
license: apache-2.0
tags:
- Farsi
---
# Arnavāz (ارنواز)
**Model Description:** Arnavaz/gpt-arnavaz-beta is a GPT-2 language model fine-tuned from the [bolbolzaban/gpt2-persian](https://huggingface.co/bolbolzaban/gpt2-persian) pretrained model.
[bolbolzaban/gpt2-persian](https://huggingface.co/bolbolzaban/gpt2-persian) has been trained similarly to [gpt2-medium](https://huggingface.co/gpt2-medium), with differences in context size, tokenizer and language [(Read more)](https://medium.com/@khashei/a-not-so-dangerous-ai-in-the-persian-language-39172a641c84).
- **Developed by:** [Rezā Latifi](https://rezalatifi.ir)
- **Model Type:** Transformer-based language model
- **Language:** Persian (All characters other than the Persian alphabet are replaced with special tokens)
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [bolbolzaban/gpt2-persian](https://huggingface.co/bolbolzaban/gpt2-persian), [gpt2-medium](https://huggingface.co/gpt2-medium)
- **Resources for more information:**
- [Arnavaz Website](https://openai.com/blog/better-language-models/)
## How to utilize
Using a pipeline for text generation, Arnavaz can be utilized like this:
```python
from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel, AutoConfig
tokenizer = AutoTokenizer.from_pretrained('Arnavaz/gpt2-arnavaz-beta')
model = GPT2LMHeadModel.from_pretrained('Arnavaz/gpt2-arnavaz-beta')
config = AutoConfig.from_pretrained('Arnavaz/gpt2-arnavaz-beta', max_length=512)
generator = pipeline('text-generation', model, tokenizer=tokenizer, config=config)
def getEloquent(ineloquent):
result = generator(f"[BOS]{ineloquent}[SEP]")[0]['generated_text']
return result[result.find('[SEP]')+5:]
sample = getEloquent('استفاده از کاغذ پاپیروس برای نوشتن کتاب از حدود دو هزار سال قبل از میلاد در مصر رایج شد.')
```
|
distilbert-base-uncased | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"distilbert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10,887,471 | 2022-11-04T13:00:44Z | ---
language: vi
datasets:
- youtube-vi-13k-hours
tags:
- speech
license: cc-by-nc-4.0
---
# Vietnamese Self-Supervised Learning Wav2Vec2 model
## Model
We use wav2vec2 architecture for doing Self-Supervised learning
<img src="https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/wav2vec2.png" width=75% height=75%>
## Data
Our self-supervised model is pre-trained on a massive audio set of 13k hours of Vietnamese YouTube audio, which includes:
- Clean audio
- Noise audio
- Conversation
- Multi-gender and dialects
## Download
We have already uploaded our pre-trained models to the Hugging Face Hub. The base model was trained for 35 epochs and the large model for 20 epochs, over about 30 days using a TPU v3-8.
- [Base version](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vi) ~ 95M params
- [Large version](https://huggingface.co/nguyenvulebinh/wav2vec2-large-vi) ~ 317M params
## Usage
```python
from transformers import Wav2Vec2ForPreTraining, Wav2Vec2Processor
model_name = 'nguyenvulebinh/wav2vec2-base-vi'
# model_name = 'nguyenvulebinh/wav2vec2-large-vi'
model = Wav2Vec2ForPreTraining.from_pretrained(model_name)
processor = Wav2Vec2Processor.from_pretrained(model_name)
```
Since our model has the same architecture as the English wav2vec2 version, you can use [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
## Finetuned version
### VLSP 2020 ASR dataset
Benchmark WER result on VLSP T1 testset:
| | [base model](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vi-vlsp2020) | [large model](https://huggingface.co/nguyenvulebinh/wav2vec2-large-vi-vlsp2020) |
|---|---|---|
|without LM| 8.66 | 6.90 |
|with 5-grams LM| 6.53 | 5.32 |
Usage
```python
#pytorch
#!pip install transformers==4.20.0
#!pip install https://github.com/kpu/kenlm/archive/master.zip
#!pip install pyctcdecode==0.4.0
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
from IPython.lib.display import Audio
import torchaudio
import torch
# Load model & processor
model_name = "nguyenvulebinh/wav2vec2-base-vi-vlsp2020"
# model_name = "nguyenvulebinh/wav2vec2-large-vi-vlsp2020"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name,filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Load an example audio (16k)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="t2_0000006682.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')
# Infer
output = model(**input_data)
# Output transcript without LM
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# Output transcript with LM
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
```
## Acknowledgment
- We would like to thank the Google TPU Research Cloud (TRC) program and Soonson Kwon (Google ML Ecosystem programs Lead) for their support.
- Special thanks to my colleagues at [VietAI](https://vietai.org/) and [VAIS](https://vais.vn/) for their advice.
## Contact
[email protected] / [email protected]
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)
|
AIDynamics/DialoGPT-medium-MentorDealerGuy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-11-04T21:47:16Z | ---
language:
- nn
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-small-npsc
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: 16K_mp3_bokmaal
split: train
args: 16K_mp3_bokmaal
metrics:
- name: Wer
type: wer
value: 12.925418803583286
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-npsc
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2028
- Wer: 12.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
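A minimal usage sketch (the hosting namespace is not stated here, so `<user>` is a placeholder, and `audio.wav` stands for any local 16 kHz recording):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-small-npsc",  # placeholder repo id
    chunk_length_s=30,
)
print(asr("audio.wav")["text"])
```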
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3922 | 0.18 | 500 | 0.3975 | 24.2055 |
| 0.2893 | 0.36 | 1000 | 0.3139 | 20.1507 |
| 0.2471 | 0.54 | 1500 | 0.2733 | 17.4449 |
| 0.2159 | 0.72 | 2000 | 0.2488 | 16.2681 |
| 0.2195 | 0.89 | 2500 | 0.2304 | 15.0577 |
| 0.1178 | 1.07 | 3000 | 0.2245 | 14.5968 |
| 0.1099 | 1.25 | 3500 | 0.2183 | 14.1118 |
| 0.1059 | 1.43 | 4000 | 0.2136 | 13.7914 |
| 0.1156 | 1.61 | 4500 | 0.2072 | 13.7491 |
| 0.1025 | 1.79 | 5000 | 0.2034 | 13.1515 |
| 0.1123 | 1.97 | 5500 | 0.2006 | 13.0284 |
| 0.0734 | 2.15 | 6000 | 0.2028 | 12.9254 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ARTeLab/it5-summarization-mlsum | [
"pytorch",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/mlsum-it",
"transformers",
"summarization",
"autotrain_compatible",
"has_space"
]
| summarization | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | 2022-11-05T01:09:38Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1495719135858233345/0T3aMUoa_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">dani little ponie 🏳️⚧️🐀</div>
<div style="text-align: center; font-size: 14px;">@00daniponie</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from dani little ponie 🏳️⚧️🐀.
| Data | dani little ponie 🏳️⚧️🐀 |
| --- | --- |
| Tweets downloaded | 3227 |
| Retweets | 1904 |
| Short tweets | 56 |
| Tweets kept | 1267 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cbrld7j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @00daniponie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/39w151kw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/39w151kw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/00daniponie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AT/distilgpt2-finetuned-wikitext2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-05T02:56:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0046
- Wer: 116.8945
## Model description
More information needed
## Intended uses & limitations
More information needed
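Pending a fuller card, a minimal transcription sketch with 🤗 Transformers is shown below; the repo id is a placeholder, since this card does not state where the checkpoint is hosted.
```python
from transformers import pipeline
# Placeholder repo id: replace with the actual location of this fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="your-username/whisper-medium-finetuned")
print(asr("sample.wav")["text"])
```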
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5232 | 4.95 | 1000 | 3.6227 | 127.2695 |
| 0.0538 | 9.9 | 2000 | 4.3761 | 125.3417 |
| 0.0166 | 14.85 | 3000 | 4.6306 | 114.6863 |
| 0.0008 | 19.8 | 4000 | 4.7625 | 116.3687 |
| 0.0022 | 24.75 | 5000 | 4.9290 | 116.0182 |
| 0.0002 | 29.7 | 6000 | 4.9100 | 118.2264 |
| 0.0001 | 34.65 | 7000 | 4.9886 | 116.5089 |
| 0.0001 | 39.6 | 8000 | 5.0046 | 116.8945 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.0+cu116
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AdapterHub/bert-base-uncased-pf-conll2000 | [
"bert",
"en",
"dataset:conll2000",
"arxiv:2104.08247",
"adapter-transformers",
"token-classification",
"adapterhub:chunk/conll2000"
]
| token-classification | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-11-05T10:51:25Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable diffusion
- text-to-image
---
MEGA MERGE DIFF (MMD) VERSION 1-18. 18 MERGED MODELS IN ONE
ANNOUNCEMENT:
- DUE TO THE FACT THAT I CANNOT SEEM TO CATCH A BREAK AND GET SOME TIME TO ASSUAGE MY OWN INSECURITIES ABOUT THE QUALITY OF THIS MODEL, I AM JUST GOING TO RELEASE IT.
FIRST MODEL RELEASE: MMD V1-18 MODEL MERGE ALPHA:
- DISCORD INVITE: https://discord.gg/WdmejvKCDG (EDIT 11/6, WILL NOT EXPIRE ANYMORE)
- MODEL FAQ, MERGING METHODOLOGY, BORING CHARACTER BACKSTORY: https://discord.com/channels/900672465276116994/1035853968225615922
- LIST OF MERGED MODELS (DUE TO THE NATURE OF SOME OF THE MERGED MODELS, I CANNOT LIST THEM HERE): https://discord.com/channels/900672465276116994/1035895704377368687
- DOWNLOAD LINK: https://huggingface.co/ShinCore/MMDv1-18/tree/main
SUMMARY:
MMD V1-18 IS A MEGA MERGE OF SD 1.5 AND 17 OTHER MODELS. IT IS INTENDED TO BE A GENERALIST MODEL, NOT FOCUSED ON ANY SINGLE GENRE OR CATEGORY OR STYLE OR SUBJECT. THERE ARE ALREADY A PROLIFERATION OF GREAT MODELS OUT THERE, COVERING A BROAD SPECTRUM OF CONTENT. HOWEVER, THAT ALSO CAUSES A PROBLEM IN THAT WE HAVE A PROLIFERATION OF GREAT MODELS OUT THERE THAT DO ONE OR TWO THINGS REALLY WELL, BUT THATS KINDA IT. OTHER THAN THOSE ONE OR TWO THINGS THAT HAVE BEEN ADDED TO THE BASE SD MODEL, ITS NO DIFFERENT THAN ANY OF THE OTHER MODELS.
MMD WAS CREATED TO ADDRESS THE ISSUE OF DISORGANIZED CONTENT FRAGMENTATION ACROSS HUGGINGFACE, DISCORD, REDDIT, RENTRY.ORG, 4CHAN, AND THE REMAINDER OF THE INTERNET. IT ALSO TRIES TO ADDRESS THE ISSUES INHERENT WITH THE BASE SD 1.5 MODEL. NAMELY, PROBLEMATIC ANATOMY, LACK OF RESPONSIVENESS TO PROMPT ENGINEERING, BLAND OUTPUTS, ETC.
THE CURRENT SET OF MERGED MODELS ARE A CROSS SECTION OF MODELS THAT I FEEL IMPROVE AND ENRICH THE BASE MODEL. IN MY TESTS (WHICH YOU CAN TAKE A LOOK AT THE LOGS IN MY #EXPERIMENTS CHANNEL IN THE DISCORD I WORK OUT OF), THE MERGING OF A SPECIFIC SET OF MODELS HAS SHOWN TO IMPROVE HUMAN ANATOMY COHERENCY, INCREASE CREATIVITY AND DETAIL IN BOTH FORE/BACKGROUNDS, AND CAN BE MORE RESPONSIVE TO PROMPTING (PLEASE SEE PROMPTING NOTE BELOW).
- DOWNSIDE: TRIGGER TERMS ASSOCIATED WITH SPECIFIC MODELS HAVE A DRASTICALLY REDUCED EFFECT. IF USING A TRIGGER TERM ASSOCIATED WITH A SPECIFIC MODEL, YOU MUST INCREASE THE STR TO SEE ANY EFFECT. THE MODEL CAN ALSO BE MUCH MORE SENSITIVE TO THE SETTINGS THAT YOU USE. I HAVE LISTED SOME RECOMMENDATIONS BELOW.
IMPORTANT: I DO NOT, IN ANY WAY, SHAPE OR FORM, CLAIM THAT THIS MODEL IS SUPERIOR TO ANY OTHER MODEL OUT THERE. NOR DO I FEEL THAT I AM SOMEHOW SOME KIND OF SD GURU AND AM AN EXPERT OF ANY KIND. MY INTENTION IN CREATING THIS MODEL IS FOR MY OWN PERSONAL GOAL OF USING IT, AS WELL AS OTHER AI TOOLS, TO CREATE A STREAMLINED WORKFLOW PIPELINE THAT WILL ENABLE INDIE SOLO GAME DEVS TO CREATE GAMES WITH GREATER EASE AND EFFICIENCY. I DREAM OF SOMEDAY BEING A SOLO INDIE GAME DEV, AND THIS IS MY WAY OF HEADING TOWARDS THAT GOAL IN AN INDIRECT FASHION.
I AM NOT A GENIUS. I AM NOT EVEN A GODDAMN CODER/PROGRAMMER/MATHEMATICIAN. I AM COMPLETELY OUT OF MY DEPTH. I SOMETIMES FEEL LIKE THAT ZOOLANDER MEME, HOOTING AND POKING AT A COMPUTER THAT IS BEYOND MY LIMITED COMPREHENSION. I AM NOTHING MORE THAN AN OLD, TIRED, LAZY, AND GRUMPY BASTARD WHO IS SPENDING WHAT LITTLE FREE TIME I HAVE TRYING TO FIGURE THIS CRAP OUT SO THAT I DONT HAVE TO LEARN HOW TO CREATE A GAME FROM SCRATCH. I JUST WANT AN AI TO DO IT FOR ME.
NOTES ABOUT USAGE:
- MODEL CAN BE A BIT HARSH AND RIGID, BUT PUTS OUT SOME AMAZING GENS. 2ND RELEASE WILL BE LESS EXTREME.
RECOMMENDED SETTINGS:
- IMG HEIGHT/WIDTH MUST BE SET TO A MULTIPLE OF 128
- SET HIRES FIX TO ON. SET FIRST PASS HEIGHT/WIDTH TO HALF OF IMG HEIGHT/WIDTH.
- PLAY WITH DENOISING STR, I SET MINE TO .69, YMMV.
- MODEL IS SENSITIVE TO CFG. I USUALLY USE 12.5, BUT OTHERS HAVE REPORTED BETTER OUTPUTS AT LOWER/HIGHER VALUES. TRY THEM OUT. DONT BE AFRAID OF GOING REALLY HIGH OR REALLY LOW.
- SET RESIZE SEED OPTION TO 512X512. (DDIM RESPONDS BETTER TO THIS SETTING. OTHER SAMPLERS MAY NOT. TURN ON AND OFF AND COMPARE)
PROMPTING:
- MERGED MODELS USE A COMBO OF STANDARD SD BLIP/CLIP AND DANBOORU TAGS. I USE BOTH IN MY PROMPTS. TRY USING BOTH CLIP AND DANBOORU INTERROGATOR ON IMAGES, GET THE RESULTS FROM BOTH, AND USE THEM IN YOUR OWN PROMPTING. THEY SEEM TO REINFORCE EACH OTHER, THOUGH THIS IS DIFFICULT TO SCIENTIFICALLY VERIFY. I HAVE SEVERAL THEORIES THAT I INTEND TO TEST OUT, AS TIME PERMITS
- USE NEGATIVE PROMPTING WITHOUT RESERVATIONS. YES, I HAVE SEEN STATEMENTS TO THE EFFECT THAT NEGATIVE PROMPTING IS LIKE A PLACEBO. ALL I CAN TELL YOU IS THAT, WHEN I REMOVED ALL OF MY NEGATIVE PROMPTS, MY GENS TURNED INTO A HORRORSHOW.
I COULD NOT PUT THEM BACK ON FAST ENOUGH.
2ND RELEASE, MMD V1-18 MODEL MERGE (TONED DOWN) ALPHA:
- THIS IS THE SAME AS THE FIRST RELEASE, BUT I MERGED BACK IN 25% OF SD 1.5. MORE FORGIVING, LESS EXTREME EDITION.
EVERYTHING ELSE IS THE SAME.
FINAL NOTE: IT HAS BEEN POINTED OUT TO ME THAT I AM USING ALL CAPS, AS IF I WAS SOMEHOW NOT ALREADY AWARE. AS I HAVE PREVIOUSLY POINTED OUT, I AM AN OLD, TIRED, LAZY AND GRUMPY BASTARD. MY PREVIOUS AND CURRENT PROFESSION HAS GOTTEN ME INTO THE HABIT OF USING ALL CAPS WHEN WORKING. DONT ASK ME THE REASONS, THEY ARE STUPID. AS IN, OTHER PEOPLE THAT I DEAL WITH. I AM PERFECTLY CAPABLE OF USING PROPER CAPITALIZATION. I CHOOSE NOT TO WHEN I AM FOCUSED, GRINDING AWAY AT SOMETHING, HASTILY TRYING TO FINISH SOMETHING, OR AM IN A BAD MOOD BECAUSE SOMETHING IS IRRITATING ME.
SO BASICALLY, PRETTY MUCH ALL THE TIME.
I APOLOGIZE, BUT YOU ARE GOING TO HAVE TO DEAL WITH IT.
|
AdapterHub/roberta-base-pf-wic | [
"roberta",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification",
"adapterhub:wordsence/wic"
]
| text-classification | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: unknown
---
# Age estimation in supermarkets
The model analyzed in this card estimates a person's age. This project was carried out for the master's programme Applied Artificial Intelligence and focuses on estimating ages in supermarkets when a person wants to buy alcohol. The model's only goal is to estimate ages from an image; it does not cover ethnicity or gender.
## Model description
**Used dataset:** UTKFace images
- This dataset contains roughly 24K face images.
- The age of a person on the picture is labeled in the filename of that image.
- Since we have no use for baby images, we decided to cut these out of the dataset, leaving roughly 21K images.
**Model input:** Facial images
**Model output:** For a face in a picture, the model will return the estimated age of that person. The model output also gives a confidence score for the estimation.
**Model architecture:** A Convolutional Neural Network. This CNN performs a regression analysis to estimate the ages.
## Performance
To determine the performance of the model, the following metrics have been used:
- MSE, this metric measures how close the regression line is to the data points.
<br>   - *Our model's MSE:* 60.9
- RMSE, this metric measures the mean error that can be made.
<br>   - *Our model's RMSE:* 7.8
- MAE, this is a measure of model accuracy. The MAE is the average absolute error between the model's predictions and the corresponding actual targets.
<br>   - *Our model's MAE:* 5.2
Ideally, the RMSE and the MAE should be close to each other. A large gap between the two indicates high variance in the individual errors.
Our results show that the model's predictions can be around 8 years off the actual age of a person.
We also looked at how the model performs in different age, gender and race classes. It seemed the model predicted the ages of people between 20 and 30 better than the rest. The model could also predict the ages of females better than males. The race that the model can predict the best is East Asian.
## Limitations
- **Lighting**
<br> When the lighting is poor, the age estimation can be poor as well
- **Occlusion**
<br> Partially hidden or obstructed faces might not be detected. (e.g. face masks)
- **UTKFace**
<br> The ages in this dataset are themselves estimates from a previous model. Since we do not know the exact ages of the people in the images, our model will not be the most reliable.
## Training and evaluation data
Train data: 70%
Test data: 30%
Our model was arrived at by trial and error. The following architecture is the outcome (an illustrative code sketch follows the list):
- Hidden layers: 7
- Batch size: 128
- Epochs: 65
- Optimizer: adam
- Activation: ReLu & Linear
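As a rough illustration of this setup, here is a minimal Keras-style sketch; the layer count, widths, kernel sizes and input shape are assumptions for illustration, not the trained network.
```python
from tensorflow.keras import layers, models
# Illustrative sketch only: layer widths, kernel sizes and input shape are assumptions,
# not the trained network. It mirrors the reported setup: ReLU hidden layers, a linear
# output for the age, the adam optimizer, and MSE loss with MAE as a metric.
def build_age_estimator(input_shape=(200, 200, 3)):  # UTKFace crops are commonly 200x200
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="linear"),  # single regression output: the estimated age
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
model = build_age_estimator()
# model.fit(train_images, train_ages, batch_size=128, epochs=65, validation_split=0.3)
```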
|
AdharshJolly/HarryPotterBot-Model | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 174.96 +/- 12.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code. In the meantime, here is a minimal loading sketch; the repo id and filename below are placeholders, not the actual location of this checkpoint.
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
# Placeholder repo_id/filename: substitute the repository that hosts this trained agent.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AdrianGzz/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: unknown
inference: false
tags:
- mlconsole
- tabular-classification
library_name: mlconsole
metrics:
- accuracy
- loss
datasets:
- diabetes_detection
model-index:
- name: diabetes_detection_fixed2
results:
- task:
type: tabular-classification
name: tabular-classification
dataset:
type: diabetes_detection
name: diabetes_detection
metrics:
- type: accuracy
name: Accuracy
value: 0.78125
- type: loss
name: Model loss
value: 0.523585319519043
---
# classification model trained on "diabetes_detection"
🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/diabetes_detection_fixed2) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
Adrianaforididk/Jinx | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: unknown
inference: false
tags:
- mlconsole
- tabular-classification
library_name: mlconsole
metrics:
- accuracy
- loss
datasets:
- diabetes_detection
model-index:
- name: diabetes_detection_fixed3
results:
- task:
type: tabular-classification
name: tabular-classification
dataset:
type: diabetes_detection
name: diabetes_detection
metrics:
- type: accuracy
name: Accuracy
value: 0.78125
- type: loss
name: Model loss
value: 0.523585319519043
---
# classification model trained on "diabetes_detection"
🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/diabetes_detection_fixed3) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
Aftabhussain/Tomato_Leaf_Classifier | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index",
"autotrain_compatible"
]
| image-classification | {
"architectures": [
"ViTForImageClassification"
],
"model_type": "vit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 50 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
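Until the snippet above is filled in, a minimal sketch is shown below; the repo id is assumed from the TensorBoard link at the bottom of this card and may need adjusting.
```python
from diffusers import DDPMPipeline
# Assumed repo id, taken from the TensorBoard link below; adjust if the checkpoint lives elsewhere.
pipeline = DDPMPipeline.from_pretrained("Iwillbeback/ddpm-butterflies-128")
image = pipeline().images[0]  # unconditional sampling of one 128x128 butterfly image
image.save("butterfly.png")
```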
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Iwillbeback/ddpm-butterflies-128/tensorboard?#scalars)
|
Ahda/M | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/aeronautblue/1667684473479/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1515688111526891521/o_3LoG40_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">blue</div>
<div style="text-align: center; font-size: 14px;">@aeronautblue</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from blue.
| Data | blue |
| --- | --- |
| Tweets downloaded | 2373 |
| Retweets | 460 |
| Short tweets | 379 |
| Tweets kept | 1534 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/e1wsp7qa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aeronautblue's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/61928z1e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/61928z1e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aeronautblue')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ahmedahmed/Wewe | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- laion-2b
- imagenet-12k
---
# Model card for vit_base_patch32_clip_224.laion2b_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.2
- GMACs: 4.4
- Activations (M): 4.2
- Image size: 224 x 224
- **Papers:**
- OpenCLIP: https://github.com/mlfoundations/open_clip
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- LAION-2B
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch32_clip_224.laion2b_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch32_clip_224.laion2b_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Ahren09/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- laion-2b
- imagenet-12k
---
# Model card for vit_base_patch32_clip_384.laion2b_ft_in12k_in1k
A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.3
- GMACs: 12.7
- Activations (M): 12.1
- Image size: 384 x 384
- **Papers:**
- OpenCLIP: https://github.com/mlfoundations/open_clip
- Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
- LAION-2B
- ImageNet-12k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_base_patch32_clip_384.laion2b_ft_in12k_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_base_patch32_clip_384.laion2b_ft_in12k_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 145, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@article{cherti2022reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
journal={arXiv preprint arXiv:2212.07143},
year={2022}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
AimB/mT5-en-kr-aihub-netflix | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- F1: 0.8626
## Model description
More information needed
## Intended uses & limitations
More information needed
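Pending a fuller card, a minimal NER sketch with 🤗 Transformers is shown below; the repo id is a placeholder for wherever this fine-tuned checkpoint is hosted.
```python
from transformers import pipeline
# Placeholder repo id: replace with the actual location of this fine-tuned checkpoint.
ner = pipeline("token-classification",
               model="your-username/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")
print(ner("Angela Merkel hat Paris mit Emmanuel Macron besucht."))
```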
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 |
| 0.1472 | 2.0 | 1430 | 0.1633 | 0.8488 |
| 0.0948 | 3.0 | 2145 | 0.1643 | 0.8626 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AimB/mT5-en-kr-opus | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8325761399966348
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2978
- F1: 0.8326
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.574 | 1.0 | 191 | 0.3495 | 0.7889 |
| 0.2649 | 2.0 | 382 | 0.2994 | 0.8242 |
| 0.1716 | 3.0 | 573 | 0.2978 | 0.8326 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Akashpb13/Swahili_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sw",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/ibdwssbm-kodorinssb-tsm_leffen/1667697159635/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1560338805445611521/SwRxF60m_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1499195152639926276/t4_WbYMx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513270656196292608/t2voAbPh_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">TSM FTX Leffen & Panda | iBDW (Cody Schwab) & FLY | KoDoRiN</div>
<div style="text-align: center; font-size: 14px;">@ibdwssbm-kodorinssb-tsm_leffen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from TSM FTX Leffen & Panda | iBDW (Cody Schwab) & FLY | KoDoRiN.
| Data | TSM FTX Leffen | Panda \| iBDW (Cody Schwab) | FLY \| KoDoRiN |
| --- | --- | --- | --- |
| Tweets downloaded | 3244 | 3249 | 3048 |
| Retweets | 301 | 493 | 479 |
| Short tweets | 335 | 235 | 275 |
| Tweets kept | 2608 | 2521 | 2294 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7pksc1xu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ibdwssbm-kodorinssb-tsm_leffen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19lbljqq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19lbljqq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ibdwssbm-kodorinssb-tsm_leffen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Akashpb13/xlsr_hungarian_new | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hu",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
datasets:
- sagawa/pubchem-10m-canonicalized
metrics:
- accuracy
model-index:
- name: PubChem-10m-t5
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: sagawa/pubchem-10m-canonicalized
type: sagawa/pubchem-10m-canonicalized
metrics:
- name: Accuracy
type: accuracy
value: 0.9189779162406921
---
# PubChem-10m-t5
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/pubchem-10m-canonicalized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9190
## Model description
We trained t5 on SMILES from PubChem using the task of masked-language modeling (MLM). Compared to PubChem-10m-t5, PubChem-10m-t5-v2 uses a character-level tokenizer, and it was also trained on PubChem.
## Intended uses & limitations
This model can be used for the prediction of molecules' properties, reactions, or interactions with proteins by changing the way of finetuning.
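For example, the encoder can be used to embed SMILES strings before task-specific fine-tuning. A minimal sketch follows; the repo id is assumed from the card title and may differ from where the checkpoint is actually hosted.
```python
from transformers import AutoTokenizer, T5EncoderModel
# Assumed repo id based on the card title; adjust if the checkpoint lives elsewhere.
tokenizer = AutoTokenizer.from_pretrained("sagawa/PubChem-10m-t5")
model = T5EncoderModel.from_pretrained("sagawa/PubChem-10m-t5")
inputs = tokenizer("CC(=O)OC1=CC=CC=C1C(=O)O", return_tensors="pt")  # aspirin SMILES
embeddings = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size) token features
```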
## Training and evaluation data
We downloaded [PubChem data](https://drive.google.com/file/d/1ygYs8dy1-vxD1Vx6Ux7ftrXwZctFjpV3/view) and canonicalized them using RDKit. Then, we dropped duplicates. The total number of data is 9999960, and they were randomly split into train:validation=10:1.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-03
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Step | Accuracy | Validation Loss |
|:-------------:|:------:|:--------:|:---------------:|
| 0.2592 | 100000 | 0.8997 | 0.2784 |
| 0.2790 | 200000 | 0.9095 | 0.2468 |
| 0.2278 | 300000 | 0.9162 | 0.2256 | |
Akiva/Joke | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
datasets:
- sagawa/ZINC-canonicalized
metrics:
- accuracy
model-index:
- name: ZINC-deberta
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: sagawa/ZINC-canonicalized
type: sagawa/ZINC-canonicalized
metrics:
- name: Accuracy
type: accuracy
value: 0.9475839734077454
---
# ZINC-t5
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the sagawa/ZINC-canonicalized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1228
- Accuracy: 0.9476
## Model description
We trained t5 on SMILES from ZINC using the task of masked-language modeling (MLM). Compared to ZINC-t5, ZINC-t5-v2 uses a character-level tokenizer, and it was also trained on ZINC.
## Intended uses & limitations
This model can be used for the prediction of molecules' properties, reactions, or interactions with proteins by changing the way of finetuning.
As an example, We finetuned this model to predict products. The model is [here](https://huggingface.co/sagawa/ZINC-t5-productpredicition), and you can use the demo [here](https://huggingface.co/spaces/sagawa/predictproduct-t5).
Using its encoder, we trained a regression model to predict a reaction yield. You can use this demo [here](https://huggingface.co/spaces/sagawa/predictyield-t5).
## Training and evaluation data
We downloaded [ZINC data](https://drive.google.com/drive/folders/1lSPCqh31zxTVEhuiPde7W3rZG8kPgp-z) and canonicalized them using RDKit. Then, we dropped duplicates. The total number of data is 22992522, and they were randomly split into train:validation=10:1.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-03
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Step | Accuracy | Validation Loss |
|:-------------:|:------:|:--------:|:---------------:|
| 0.2090 | 100000 | 0.9264 | 0.1860 |
| 0.1628 | 200000 | 0.9349 | 0.1613 |
| 0.1632 | 300000 | 0.9395 | 0.1467 |
| 0.1451 | 400000 | 0.9435 | 0.1345 |
| 0.1311 | 500000 | 0.9465 | 0.1261 | |
Akjder/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
---
### Smurf Style on Stable Diffusion
This is the `<smurfy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










|
AkshatSurolia/DeiT-FaceMask-Finetuned | [
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| image-classification | {
"architectures": [
"DeiTForImageClassification"
],
"model_type": "deit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 46 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Bio_ClinicalBERT-finetuned-20pc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT-finetuned-20pc
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3213
- Accuracy: 0.8580
- F1: 0.4390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 41 | 1.0399 | 0.8642 | 0.45 |
| No log | 2.0 | 82 | 1.1412 | 0.8519 | 0.4 |
| No log | 3.0 | 123 | 1.2759 | 0.8642 | 0.45 |
| No log | 4.0 | 164 | 1.2953 | 0.8519 | 0.5385 |
| No log | 5.0 | 205 | 1.3213 | 0.8580 | 0.4390 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Akuva2001/SocialGraph | [
"has_space"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-mlm-feedback-512
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-mlm-feedback-512
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6086 | 1.0 | 380 | 2.0284 |
| 2.4595 | 2.0 | 760 | 2.1917 |
| 2.41 | 3.0 | 1140 | 2.7014 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AlErysvi/Erys | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- text-generation
- quotes
- quote
- generated_from_trainer
model-index:
- name: jrtec-gpt2-text-generation-quotes-jonathan-vargas
results: []
widget:
- text: "life: "
example_title: "Life quote"
- text: "death: "
example_title: "Death quote"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jrtec-gpt2-text-generation-quotes-jonathan-vargas
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7463 | 1.71 | 500 | 0.7033 |
| 0.4281 | 3.41 | 1000 | 0.7084 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AlanDev/test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: zh
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- zh
- Chinese
inference: false
extra_gated_prompt: |-
One more step before getting this model.
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. rinna Co., Ltd. claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
By clicking on "Access repository" below, you accept that your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
---
# Chinese Stable Diffusion Model Card
<!--

-->
svjack/Stable-Diffusion-FineTuned-zh-v0 is a Chinese-specific latent text-to-image diffusion model capable of generating images given any Chinese text input.
This model was fine-tuned from a powerful pre-trained text-to-image model using the [diffusers](https://github.com/huggingface/diffusers) library.
For more information about our training method, see [train_zh_model.py](https://github.com/svjack/Stable-Diffusion-Chinese-Extend/blob/main/train_zh_model.py).
Training started from a strong baseline model, [Taiyi-Stable-Diffusion-1B-Chinese-v0.1](https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1), from [IDEA-CCNL](https://github.com/IDEA-CCNL/Fengshenbang-LM).
<!--
[](https://colab.research.google.com/github/rinnakk/japanese-stable-diffusion/blob/master/scripts/txt2img.ipynb)
-->
## Model Details
- **Developed by:** Zhipeng Yang
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** Chinese
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model (LDM)](https://arxiv.org/abs/2112.10752) that used [Stable Diffusion](https://github.com/CompVis/stable-diffusion) as a pre-trained model.
- **Resources for more information:** [https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend)
## Examples
First, install the required dependencies as follows. The pipeline builds on [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Chinese Stable Diffusion.
```bash
pip install "diffusers==0.6.0" transformers torch datasets accelerate sentencepiece
```
Run this command to log in with your HF Hub token if you haven't before:
```bash
huggingface-cli login
```
Running the pipeline (a sketch for swapping in the `LMSDiscreteScheduler` follows the example):
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained("svjack/Stable-Diffusion-FineTuned-zh-v0")
pipeline.safety_checker = lambda images, clip_input: (images, False)
pipeline = pipeline.to("cuda")
prompt = '女孩们打开了另一世界的大门'
image = pipeline(prompt, guidance_scale=7.5).images[0]
```
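If you specifically want the `LMSDiscreteScheduler`, one way to wire it in is sketched below; the beta settings are the usual Stable Diffusion v1 values and are an assumption, not taken from this repository.
```python
from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

# Beta values below are the common Stable Diffusion v1 settings -- treat them as an assumption.
lms = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")

pipeline = StableDiffusionPipeline.from_pretrained(
    "svjack/Stable-Diffusion-FineTuned-zh-v0", scheduler=lms
)
pipeline.safety_checker = lambda images, clip_input: (images, False)
pipeline = pipeline.to("cuda")

prompt = '女孩们打开了另一世界的大门'
image = pipeline(prompt, guidance_scale=7.5).images[0]
image.save("output.png")
```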
### Generator Results comparison
[https://github.com/svjack/Stable-Diffusion-Chinese-Extend](https://github.com/svjack/Stable-Diffusion-Chinese-Extend)




<!--
_Note: `JapaneseStableDiffusionPipeline` is almost same as diffusers' `StableDiffusionPipeline` but added some lines to initialize our models properly._
## Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1._
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with Japanese captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a subset of a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Japanese Stable Diffusion was trained on Japanese datasets including [LAION-5B](https://laion.ai/blog/laion-5b/) with Japanese captions,
which consists of images that are primarily limited to Japanese descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model.
Further, the ability of the model to generate content with non-Japanese prompts is significantly worse than with Japanese-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
We used the following dataset for training the model:
- Approximately 100 million images with Japanese captions, including the Japanese subset of [LAION-5B](https://laion.ai/blog/laion-5b/).
**Training Procedure**
Japanese Stable Diffusion has the same architecture as Stable Diffusion and was trained by using Stable Diffusion. Because Stable Diffusion was trained on English dataset and the CLIP tokenizer is basically for English, we had 2 stages to transfer to a language-specific model, inspired by [PITI](https://arxiv.org/abs/2205.12952).
1. Train a Japanese-specific text encoder with our Japanese tokenizer from scratch with the latent diffusion model fixed. This stage is expected to map Japanese captions to Stable Diffusion's latent space.
2. Fine-tune the text encoder and the latent diffusion model jointly. This stage is expected to generate Japanese-style images more.
[//]: # (_Note: Japanese Stable Diffusion is still running and this checkpoint is the current best one. We might update to a better checkpoint via this repository._)
--> |
Alberto15Romero/GptNeo | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/alexabliss_wwe/1667711162135/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1271821102134833153/krgeswcX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lexi (Kaufman) Cabrera</div>
<div style="text-align: center; font-size: 14px;">@alexabliss_wwe</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lexi (Kaufman) Cabrera.
| Data | Lexi (Kaufman) Cabrera |
| --- | --- |
| Tweets downloaded | 3184 |
| Retweets | 1160 |
| Short tweets | 399 |
| Tweets kept | 1625 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hgwztvb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alexabliss_wwe's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vlezdiv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vlezdiv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alexabliss_wwe')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Aleksandra/distilbert-base-uncased-finetuned-squad | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
---
This model is a work in progress. It isn't perfect, but it can generate some nice images. I have trained Xaela (dark horns and scales) and Raen (light horns and scales) as separate concepts, which will allow you to specify which you would like in your image by using the tokens specified below.
I hope you enjoy it as much as I have! I'd love to hear your feedback.
Waifu-Diffusion-v1-3-based Stable Diffusion model with Dreambooth training on images from many different artists. The model is trained to 11,000 steps on 80 different images of Au Ra females, a playable race of people with dragon-like horns and patches of scales from the critically acclaimed MMORPG Final Fantasy XIV (have you heard about the game's free trial, by the way?).
## Usage
This model can be used in Stable Diffusion, including the extremely popular Web UI by AUTOMATIC1111, like any other model: place the .CKPT file in the correct directory. Please consult the documentation for your installation of Stable Diffusion for more specific instructions.
Use ```"m_arxla"``` for Xaela clan Au Ra or ```"m_arrn"``` for Raen clan Au Ra in your prompt to invoke the style of the desired clan.
## Recommended negative prompt
```"poorly drawn, bad quality, colored skin"```
You can also add the following to your negative prompt in order to steer your output towards the WD1.3 default caucasian skintone: ```"blue skin, purple skin"```
If you are generating Raen Au Ra I highly recommend also adding ```"black scales"``` to your negative prompt as the AI will often draw dark scales without it.
I do NOT recommend adding the common negative prompt tags such as ```"bad anatomy, disfigured, deformed, gross, etc..."```
## Example prompt
```"m_arrn, 1girl, light smile, detailed eyes, extremely detailed face, sidelocks, black hair, grey eyes, long hair, hair clip, collarbone, tank top, shorts, looking to the side, highly detailed face, extremely detailed, intricate, best quality, ultra realistic, cowboy shot, holding shopping bags at the mall, highly detailed background"```
Negative Prompt: ```"poorly drawn, bad quality, colored skin, blue skin, purple skin, midriff, black scales"```
Sampler: DPM++ 2S a, Sampling Steps: 22, CFG scale: 11, H: 768, W: 512
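If you prefer scripting to the Web UI, a rough diffusers sketch is below. This card only ships a .CKPT for Web UI installs, so the snippet assumes a diffusers-format copy of the weights exists under some repository id (shown as a placeholder); otherwise the checkpoint must be converted first.
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- a diffusers-format version of the .CKPT is assumed here.
pipe = StableDiffusionPipeline.from_pretrained("<user>/au-ra-dreambooth", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "m_arrn, 1girl, light smile, detailed eyes, black hair, grey eyes, long hair, best quality"
negative = "poorly drawn, bad quality, colored skin, black scales"
image = pipe(prompt, negative_prompt=negative, guidance_scale=11, height=768, width=512).images[0]
image.save("au_ra.png")
```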
## Xaela example images using ```"m_arxla"```
<table>
<tr>
<td><img src=https://i.imgur.com/gvgZeT1.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/wWFDxCS.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/yzWhulJ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/TjTIbIz.png width=100% height=100%/></td>
</tr>
</table>
## Raen example images using ```"m_arrn"```
<table>
<tr>
<td><img src=https://i.imgur.com/jwoWZWE.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/k1XPAZI.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/MvcSlAd.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/3PLuE1V.png width=100% height=100%/></td>
</tr>
</table>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
AlekseyKorshuk/bert | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- pytorch
- diffusers
---
# Multi-instrument Music Synthesis with Spectrogram Diffusion
[Spectrogram Diffusion](https://arxiv.org/abs/2206.05408) by Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, and Jesse Engel.
## Abstract
An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw waveform models that can train on any music but with minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes.
<img src="https://storage.googleapis.com/music-synthesis-with-spectrogram-diffusion/architecture.png" alt="Architecture diagram">
## Example usage
```python
from diffusers import SpectrogramDiffusionPipeline
pipe = SpectrogramDiffusionPipeline.from_pretrained("kashif/music-spectrogram-diffusion")
pipe = pipe.to("cuda")
output = pipe("beethoven_hammerklavier_2.mid")
audio = output.audios[0]
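
# Saving the result (an assumption, not from the original card): `audio` is expected
# to be a float waveform array; a 16 kHz sample rate is assumed for the vocoder output.
import scipy.io.wavfile
scipy.io.wavfile.write("beethoven_hammerklavier_2.wav", rate=16000, data=audio)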
``` |
Alexander-Learn/bert-finetuned-ner | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- Avatar
- Avatar The Way of Water
- film
- James Cameron
license: creativeml-openrail-m
---
<center><img src="https://huggingface.co/riccardogiorato/avatar-diffusion/resolve/main/assets/avatartwow.png" width="512" height="512"/></center>

# Avatar Diffusion
An AI model that generates artwork with Avatar style!
Based on a fine-tuned Stable Diffusion v1.5, trained with Dreambooth on more than 50 images from the latest trailer for Avatar: The Way of Water.
By [riccardogiorato](https://twitter.com/riccardogiorato)
> **Note**: To get the Avatar styles, use the **avatartwow style** keyword in your prompt.
>
> **Don't use** the **avatar** keyword, because it's already used by the original model but full of messy data.
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "riccardogiorato/avatar-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a magical witch with blue hair with avatartwow style"
image = pipe(prompt).images[0]
image.save("./magical_witch.png")
```
# **👇Model👇**
AI Model Weights available at huggingface: https://huggingface.co/riccardogiorato/avatar-diffusion
# Usage
After the model is loaded, use the keyword **avatartwow** in your prompt, or even better **avatartwow style**.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Alexander-Learn/bert-finetuned-squad-accelerate | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi_news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
args: default
metrics:
- name: Rouge1
type: rouge
value: 38.5318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi_news
This model is a fine-tuned version of [mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization](https://huggingface.co/mrm8488/bert-small2bert-small-finetuned-cnn_daily_mail-summarization) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3760
- Rouge1: 38.5318
- Rouge2: 12.7285
- Rougel: 21.4358
- Rougelsum: 33.4565
- Gen Len: 128.985
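For inference, a sketch using the generic encoder–decoder API is below (the repository id is a placeholder, and the generation settings are assumptions rather than values from this card):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Placeholder repo id for this fine-tuned BERT2BERT checkpoint.
model_id = "<user>/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-multi_news"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "Replace this with the concatenated news articles to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids,
                             attention_mask=inputs.attention_mask,
                             max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```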
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 4.6946 | 0.89 | 400 | 4.5393 | 37.164 | 11.5191 | 20.2519 | 32.1568 | 126.415 |
| 4.5128 | 1.78 | 800 | 4.4185 | 38.2345 | 12.2053 | 20.954 | 33.0667 | 128.975 |
| 4.2926 | 2.67 | 1200 | 4.3866 | 38.4475 | 12.6488 | 21.3046 | 33.2768 | 129.0 |
| 4.231 | 3.56 | 1600 | 4.3808 | 38.7008 | 12.6323 | 21.307 | 33.3693 | 128.955 |
| 4.125 | 4.44 | 2000 | 4.3760 | 38.5318 | 12.7285 | 21.4358 | 33.4565 | 128.985 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Alexander-Learn/bert-finetuned-squad | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | https://zencastr.com/VeR-HD-Sin-novedad-en-el-frente-2022-Online-Espanol-Latino-REPELIS
https://zencastr.com/REPELIS-VeR-El-cuarto-pasajero-2022-Online-Pelicula-ompleta-y-HD
https://zencastr.com/REPELIS-VeR-Los-renglones-torcidos-de-Dios-2022-Online-Pelicula-ompleta-y-HD
https://zencastr.com/REPELIS-VeR-Amsterdam-2022-Online-Pelicula-ompleta-y-HD
https://zencastr.com/REPELIS-VeR-One-Piece-Film-Red-2022-Online-Pelicula-ompleta-y-HD
|
AndyJ/prompt_finetune | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch, assuming a DDPM unconditional pipeline (inferred from the model name):
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("yunan/ddpm-butterflies-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/yunan/ddpm-butterflies-128/tensorboard?#scalars)
|
AndyyyCai/bert-base-uncased-finetuned-copa | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
]
| multiple-choice | {
"architectures": [
"BertForMultipleChoice"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: BartConditionalGeneration-bart-large-finetuned-insult
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BartConditionalGeneration-bart-large-finetuned-insult
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7901
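The card does not describe the intended input/output format, so the snippet below is only a loading sketch with a placeholder repository id:
```python
from transformers import AutoTokenizer, BartForConditionalGeneration

# Placeholder repo id -- where this checkpoint is published is not stated here.
model_id = "<user>/BartConditionalGeneration-bart-large-finetuned-insult"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("Example input text.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```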
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.6217 | 1.0 | 568 | 4.5864 |
| 4.7444 | 2.0 | 1136 | nan |
| 4.2308 | 3.0 | 1704 | 3.7590 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AnonymousSub/AR_declutr | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | use laikafy for the model to kick in. I suggest to use [laikafy:10] otherwise it often generates the same model, instead putting the token between brackets followed by :10 will start to use the token after 10 samples. For that reason I usually put 50 samples |
AnonymousSub/AR_rule_based_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: IMDB_DistilBERT_5EE
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB_DistilBERT_5EE
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2023
- Accuracy: 0.94
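A quick way to try the classifier (repository id is a placeholder; note that unless `id2label` was configured, predictions will come back as generic `LABEL_0`/`LABEL_1` names):
```python
from transformers import pipeline

# Placeholder repo id -- the owning namespace is not given in this card.
classifier = pipeline("text-classification", model="<user>/IMDB_DistilBERT_5EE")

print(classifier("This movie was an absolute delight from start to finish."))
```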
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6748 | 0.03 | 50 | 0.5955 | 0.88 |
| 0.4404 | 0.06 | 100 | 0.2853 | 0.9 |
| 0.3065 | 0.1 | 150 | 0.2208 | 0.9 |
| 0.3083 | 0.13 | 200 | 0.2023 | 0.9333 |
| 0.2922 | 0.16 | 250 | 0.1530 | 0.94 |
| 0.2761 | 0.19 | 300 | 0.2035 | 0.9267 |
| 0.2145 | 0.22 | 350 | 0.2450 | 0.9 |
| 0.258 | 0.26 | 400 | 0.1680 | 0.9267 |
| 0.2702 | 0.29 | 450 | 0.1607 | 0.9333 |
| 0.2587 | 0.32 | 500 | 0.1496 | 0.9467 |
| 0.2822 | 0.35 | 550 | 0.1405 | 0.9333 |
| 0.2538 | 0.38 | 600 | 0.1396 | 0.9467 |
| 0.2707 | 0.42 | 650 | 0.1626 | 0.9333 |
| 0.2408 | 0.45 | 700 | 0.1623 | 0.9067 |
| 0.2531 | 0.48 | 750 | 0.1300 | 0.9467 |
| 0.2014 | 0.51 | 800 | 0.1529 | 0.9333 |
| 0.2454 | 0.54 | 850 | 0.1365 | 0.94 |
| 0.2282 | 0.58 | 900 | 0.1447 | 0.9533 |
| 0.2554 | 0.61 | 950 | 0.1321 | 0.9467 |
| 0.24 | 0.64 | 1000 | 0.1256 | 0.9467 |
| 0.2239 | 0.67 | 1050 | 0.1290 | 0.9467 |
| 0.2865 | 0.7 | 1100 | 0.1288 | 0.9667 |
| 0.2456 | 0.74 | 1150 | 0.1299 | 0.9533 |
| 0.2407 | 0.77 | 1200 | 0.1565 | 0.9267 |
| 0.2256 | 0.8 | 1250 | 0.1262 | 0.96 |
| 0.238 | 0.83 | 1300 | 0.1599 | 0.9333 |
| 0.2151 | 0.86 | 1350 | 0.1252 | 0.9333 |
| 0.187 | 0.9 | 1400 | 0.1132 | 0.9467 |
| 0.2218 | 0.93 | 1450 | 0.1030 | 0.9533 |
| 0.2371 | 0.96 | 1500 | 0.1036 | 0.9467 |
| 0.2264 | 0.99 | 1550 | 0.1041 | 0.9467 |
| 0.2159 | 1.02 | 1600 | 0.1338 | 0.9267 |
| 0.1773 | 1.06 | 1650 | 0.1218 | 0.94 |
| 0.1381 | 1.09 | 1700 | 0.1593 | 0.94 |
| 0.1582 | 1.12 | 1750 | 0.1445 | 0.9533 |
| 0.1921 | 1.15 | 1800 | 0.1355 | 0.94 |
| 0.206 | 1.18 | 1850 | 0.1511 | 0.9467 |
| 0.1679 | 1.22 | 1900 | 0.1394 | 0.94 |
| 0.1691 | 1.25 | 1950 | 0.1403 | 0.9333 |
| 0.2301 | 1.28 | 2000 | 0.1169 | 0.9467 |
| 0.1764 | 1.31 | 2050 | 0.1507 | 0.9333 |
| 0.1772 | 1.34 | 2100 | 0.1148 | 0.96 |
| 0.1749 | 1.38 | 2150 | 0.1203 | 0.94 |
| 0.1912 | 1.41 | 2200 | 0.1037 | 0.94 |
| 0.1614 | 1.44 | 2250 | 0.1006 | 0.9533 |
| 0.1975 | 1.47 | 2300 | 0.0985 | 0.9533 |
| 0.1843 | 1.5 | 2350 | 0.0922 | 0.9533 |
| 0.1764 | 1.54 | 2400 | 0.1259 | 0.9467 |
| 0.1855 | 1.57 | 2450 | 0.1243 | 0.96 |
| 0.1272 | 1.6 | 2500 | 0.2107 | 0.9267 |
| 0.241 | 1.63 | 2550 | 0.1142 | 0.9533 |
| 0.1584 | 1.66 | 2600 | 0.1194 | 0.9467 |
| 0.1568 | 1.7 | 2650 | 0.1196 | 0.9533 |
| 0.1896 | 1.73 | 2700 | 0.1311 | 0.9533 |
| 0.143 | 1.76 | 2750 | 0.1140 | 0.9533 |
| 0.227 | 1.79 | 2800 | 0.1482 | 0.9333 |
| 0.1404 | 1.82 | 2850 | 0.1366 | 0.94 |
| 0.1865 | 1.86 | 2900 | 0.1174 | 0.94 |
| 0.1659 | 1.89 | 2950 | 0.1189 | 0.94 |
| 0.1882 | 1.92 | 3000 | 0.1144 | 0.9467 |
| 0.1403 | 1.95 | 3050 | 0.1358 | 0.94 |
| 0.2193 | 1.98 | 3100 | 0.1092 | 0.9533 |
| 0.1392 | 2.02 | 3150 | 0.1278 | 0.9267 |
| 0.1292 | 2.05 | 3200 | 0.1186 | 0.96 |
| 0.0939 | 2.08 | 3250 | 0.1183 | 0.94 |
| 0.1356 | 2.11 | 3300 | 0.1939 | 0.94 |
| 0.1175 | 2.14 | 3350 | 0.1499 | 0.94 |
| 0.1285 | 2.18 | 3400 | 0.1538 | 0.94 |
| 0.1018 | 2.21 | 3450 | 0.1796 | 0.9333 |
| 0.1342 | 2.24 | 3500 | 0.1540 | 0.94 |
| 0.17 | 2.27 | 3550 | 0.1261 | 0.94 |
| 0.1548 | 2.3 | 3600 | 0.1375 | 0.9267 |
| 0.1415 | 2.34 | 3650 | 0.1264 | 0.9333 |
| 0.1096 | 2.37 | 3700 | 0.1252 | 0.9333 |
| 0.1001 | 2.4 | 3750 | 0.1546 | 0.94 |
| 0.0934 | 2.43 | 3800 | 0.1534 | 0.94 |
| 0.1287 | 2.46 | 3850 | 0.1735 | 0.9333 |
| 0.0872 | 2.5 | 3900 | 0.1475 | 0.9467 |
| 0.0994 | 2.53 | 3950 | 0.1735 | 0.9467 |
| 0.1558 | 2.56 | 4000 | 0.1585 | 0.9467 |
| 0.1517 | 2.59 | 4050 | 0.2021 | 0.9333 |
| 0.1246 | 2.62 | 4100 | 0.1594 | 0.9267 |
| 0.1228 | 2.66 | 4150 | 0.1338 | 0.9533 |
| 0.1064 | 2.69 | 4200 | 0.1421 | 0.9467 |
| 0.1466 | 2.72 | 4250 | 0.1383 | 0.9467 |
| 0.1243 | 2.75 | 4300 | 0.1604 | 0.9533 |
| 0.1434 | 2.78 | 4350 | 0.1736 | 0.9333 |
| 0.1127 | 2.82 | 4400 | 0.1909 | 0.9267 |
| 0.0908 | 2.85 | 4450 | 0.1958 | 0.9333 |
| 0.1134 | 2.88 | 4500 | 0.1596 | 0.94 |
| 0.1345 | 2.91 | 4550 | 0.1604 | 0.9533 |
| 0.1913 | 2.94 | 4600 | 0.1852 | 0.9267 |
| 0.1382 | 2.98 | 4650 | 0.1852 | 0.9333 |
| 0.1109 | 3.01 | 4700 | 0.1905 | 0.9333 |
| 0.1144 | 3.04 | 4750 | 0.1655 | 0.94 |
| 0.074 | 3.07 | 4800 | 0.2034 | 0.9333 |
| 0.0926 | 3.1 | 4850 | 0.1929 | 0.94 |
| 0.0911 | 3.13 | 4900 | 0.1703 | 0.9333 |
| 0.0933 | 3.17 | 4950 | 0.1826 | 0.9333 |
| 0.1003 | 3.2 | 5000 | 0.1716 | 0.94 |
| 0.0889 | 3.23 | 5050 | 0.1843 | 0.9267 |
| 0.0841 | 3.26 | 5100 | 0.1670 | 0.94 |
| 0.0918 | 3.29 | 5150 | 0.1595 | 0.9467 |
| 0.0795 | 3.33 | 5200 | 0.1504 | 0.96 |
| 0.0978 | 3.36 | 5250 | 0.1317 | 0.96 |
| 0.1202 | 3.39 | 5300 | 0.1641 | 0.9533 |
| 0.0935 | 3.42 | 5350 | 0.1473 | 0.96 |
| 0.0673 | 3.45 | 5400 | 0.1684 | 0.9533 |
| 0.0729 | 3.49 | 5450 | 0.1414 | 0.9533 |
| 0.077 | 3.52 | 5500 | 0.1669 | 0.9533 |
| 0.1264 | 3.55 | 5550 | 0.1364 | 0.96 |
| 0.1282 | 3.58 | 5600 | 0.1575 | 0.9467 |
| 0.0553 | 3.61 | 5650 | 0.1440 | 0.9467 |
| 0.0953 | 3.65 | 5700 | 0.1526 | 0.9533 |
| 0.0886 | 3.68 | 5750 | 0.1633 | 0.94 |
| 0.0901 | 3.71 | 5800 | 0.1704 | 0.9467 |
| 0.0986 | 3.74 | 5850 | 0.1674 | 0.94 |
| 0.0849 | 3.77 | 5900 | 0.1989 | 0.9333 |
| 0.0815 | 3.81 | 5950 | 0.1942 | 0.94 |
| 0.0973 | 3.84 | 6000 | 0.1611 | 0.94 |
| 0.0599 | 3.87 | 6050 | 0.1807 | 0.9267 |
| 0.1068 | 3.9 | 6100 | 0.1966 | 0.94 |
| 0.0889 | 3.93 | 6150 | 0.1979 | 0.9333 |
| 0.0854 | 3.97 | 6200 | 0.2012 | 0.9333 |
| 0.1207 | 4.0 | 6250 | 0.1983 | 0.9333 |
| 0.0735 | 4.03 | 6300 | 0.1795 | 0.94 |
| 0.1148 | 4.06 | 6350 | 0.1966 | 0.94 |
| 0.0725 | 4.09 | 6400 | 0.2290 | 0.94 |
| 0.0576 | 4.13 | 6450 | 0.1936 | 0.9333 |
| 0.0477 | 4.16 | 6500 | 0.2090 | 0.9333 |
| 0.0722 | 4.19 | 6550 | 0.1878 | 0.9333 |
| 0.0936 | 4.22 | 6600 | 0.2087 | 0.94 |
| 0.0715 | 4.25 | 6650 | 0.2040 | 0.94 |
| 0.0586 | 4.29 | 6700 | 0.1862 | 0.9333 |
| 0.0548 | 4.32 | 6750 | 0.1801 | 0.9267 |
| 0.0527 | 4.35 | 6800 | 0.1912 | 0.9333 |
| 0.0813 | 4.38 | 6850 | 0.1941 | 0.9333 |
| 0.0531 | 4.41 | 6900 | 0.1932 | 0.9267 |
| 0.0606 | 4.45 | 6950 | 0.2195 | 0.94 |
| 0.1213 | 4.48 | 7000 | 0.1975 | 0.9333 |
| 0.0807 | 4.51 | 7050 | 0.1915 | 0.9333 |
| 0.076 | 4.54 | 7100 | 0.1987 | 0.9333 |
| 0.0595 | 4.57 | 7150 | 0.2052 | 0.9333 |
| 0.0832 | 4.61 | 7200 | 0.2039 | 0.9333 |
| 0.0657 | 4.64 | 7250 | 0.2186 | 0.94 |
| 0.0684 | 4.67 | 7300 | 0.2063 | 0.94 |
| 0.0429 | 4.7 | 7350 | 0.2056 | 0.94 |
| 0.0531 | 4.73 | 7400 | 0.2139 | 0.94 |
| 0.0556 | 4.77 | 7450 | 0.2153 | 0.94 |
| 0.0824 | 4.8 | 7500 | 0.2010 | 0.9333 |
| 0.039 | 4.83 | 7550 | 0.2079 | 0.94 |
| 0.068 | 4.86 | 7600 | 0.2140 | 0.94 |
| 0.065 | 4.89 | 7650 | 0.2108 | 0.94 |
| 0.0359 | 4.93 | 7700 | 0.2058 | 0.94 |
| 0.0592 | 4.96 | 7750 | 0.2029 | 0.94 |
| 0.0793 | 4.99 | 7800 | 0.2023 | 0.94 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AnonymousSub/AR_rule_based_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: borges-gpt-collab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# borges-gpt-collab
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.3468
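A generation sketch (placeholder repository id; the base model is Spanish GPT-2, so Spanish prompts are the natural fit):
```python
from transformers import pipeline

# Placeholder repo id for this checkpoint.
generator = pipeline("text-generation", model="<user>/borges-gpt-collab")

print(generator("El jardín de senderos que se bifurcan", max_new_tokens=60,
                do_sample=True, top_p=0.95))
```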
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 70
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.2135 | 0.96 | 7 | 10.2022 |
| 10.3195 | 1.96 | 14 | 9.6343 |
| 9.9127 | 2.96 | 21 | 9.4637 |
| 9.7295 | 3.96 | 28 | 9.2993 |
| 9.527 | 4.96 | 35 | 9.0962 |
| 9.2648 | 5.96 | 42 | 8.8294 |
| 8.9309 | 6.96 | 49 | 8.5103 |
| 8.5639 | 7.96 | 56 | 8.1858 |
| 8.2034 | 8.96 | 63 | 7.8816 |
| 7.8665 | 9.96 | 70 | 7.6303 |
| 7.5715 | 10.96 | 77 | 7.4307 |
| 7.3259 | 11.96 | 84 | 7.2632 |
| 7.136 | 12.96 | 91 | 7.1494 |
| 6.9558 | 13.96 | 98 | 7.0957 |
| 6.8068 | 14.96 | 105 | 7.0199 |
| 6.6656 | 15.96 | 112 | 6.9554 |
| 6.5264 | 16.96 | 119 | 6.9324 |
| 6.3843 | 17.96 | 126 | 6.8940 |
| 6.2204 | 18.96 | 133 | 6.8799 |
| 6.0915 | 19.96 | 140 | 6.8788 |
| 5.9532 | 20.96 | 147 | 6.8719 |
| 5.8169 | 21.96 | 154 | 6.8647 |
| 5.6531 | 22.96 | 161 | 6.8865 |
| 5.5125 | 23.96 | 168 | 6.8940 |
| 5.3666 | 24.96 | 175 | 6.9248 |
| 5.2377 | 25.96 | 182 | 6.9421 |
| 5.1115 | 26.96 | 189 | 6.9631 |
| 4.9639 | 27.96 | 196 | 7.0135 |
| 4.824 | 28.96 | 203 | 7.0352 |
| 4.6886 | 29.96 | 210 | 7.0729 |
| 4.5538 | 30.96 | 217 | 7.1385 |
| 4.4126 | 31.96 | 224 | 7.1561 |
| 4.2486 | 32.96 | 231 | 7.1792 |
| 4.0955 | 33.96 | 238 | 7.2767 |
| 3.9333 | 34.96 | 245 | 7.2815 |
| 3.7914 | 35.96 | 252 | 7.3463 |
| 3.618 | 36.96 | 259 | 7.3864 |
| 3.4453 | 37.96 | 266 | 7.4394 |
| 3.2795 | 38.96 | 273 | 7.4730 |
| 3.0994 | 39.96 | 280 | 7.4880 |
| 2.9143 | 40.96 | 287 | 7.5567 |
| 2.741 | 41.96 | 294 | 7.5451 |
| 2.5698 | 42.96 | 301 | 7.5966 |
| 2.3855 | 43.96 | 308 | 7.6898 |
| 2.2059 | 44.96 | 315 | 7.6957 |
| 2.0634 | 45.96 | 322 | 7.7503 |
| 1.8719 | 46.96 | 329 | 7.8369 |
| 1.7059 | 47.96 | 336 | 7.8411 |
| 1.54 | 48.96 | 343 | 7.8316 |
| 1.3768 | 49.96 | 350 | 7.8630 |
| 1.2177 | 50.96 | 357 | 7.9360 |
| 1.0663 | 51.96 | 364 | 7.9886 |
| 0.9569 | 52.96 | 371 | 8.0187 |
| 0.8281 | 53.96 | 378 | 8.0274 |
| 0.7074 | 54.96 | 385 | 8.1010 |
| 0.6095 | 55.96 | 392 | 8.1594 |
| 0.5262 | 56.96 | 399 | 8.1010 |
| 0.4678 | 57.96 | 406 | 8.1440 |
| 0.4105 | 58.96 | 413 | 8.1638 |
| 0.3766 | 59.96 | 420 | 8.1534 |
| 0.3425 | 60.96 | 427 | 8.1980 |
| 0.321 | 61.96 | 434 | 8.2184 |
| 0.3061 | 62.96 | 441 | 8.2499 |
| 0.2852 | 63.96 | 448 | 8.1690 |
| 0.2698 | 64.96 | 455 | 8.2160 |
| 0.2628 | 65.96 | 462 | 8.2616 |
| 0.2619 | 66.96 | 469 | 8.2948 |
| 0.2544 | 67.96 | 476 | 8.3553 |
| 0.2414 | 68.96 | 483 | 8.3712 |
| 0.2177 | 69.96 | 490 | 8.3468 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+rocm5.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AnonymousSub/AR_rule_based_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: amazonPolarity_DistilBERT_5EE
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: train
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazonPolarity_DistilBERT_5EE
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2899
- Accuracy: 0.94
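To try it on review text (placeholder repository id; labels may appear as `LABEL_0`/`LABEL_1` unless label names were set):
```python
from transformers import pipeline

# Placeholder repo id -- not stated in this card.
classifier = pipeline("text-classification", model="<user>/amazonPolarity_DistilBERT_5EE")

print(classifier("Arrived quickly and works exactly as described."))
```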
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6581 | 0.03 | 50 | 0.5315 | 0.84 |
| 0.4321 | 0.05 | 100 | 0.2897 | 0.8933 |
| 0.298 | 0.08 | 150 | 0.3165 | 0.8667 |
| 0.2902 | 0.11 | 200 | 0.2552 | 0.9067 |
| 0.2824 | 0.13 | 250 | 0.2277 | 0.9133 |
| 0.2522 | 0.16 | 300 | 0.1998 | 0.94 |
| 0.2781 | 0.19 | 350 | 0.1933 | 0.94 |
| 0.2668 | 0.21 | 400 | 0.2316 | 0.92 |
| 0.2619 | 0.24 | 450 | 0.1968 | 0.9333 |
| 0.2446 | 0.27 | 500 | 0.1846 | 0.9467 |
| 0.2677 | 0.29 | 550 | 0.1818 | 0.94 |
| 0.2026 | 0.32 | 600 | 0.2348 | 0.9133 |
| 0.2351 | 0.35 | 650 | 0.2127 | 0.92 |
| 0.2685 | 0.37 | 700 | 0.1792 | 0.94 |
| 0.2141 | 0.4 | 750 | 0.2252 | 0.9133 |
| 0.2193 | 0.43 | 800 | 0.2131 | 0.9267 |
| 0.2456 | 0.45 | 850 | 0.2205 | 0.9133 |
| 0.2548 | 0.48 | 900 | 0.1788 | 0.94 |
| 0.2353 | 0.51 | 950 | 0.1954 | 0.9267 |
| 0.2546 | 0.53 | 1000 | 0.1815 | 0.9333 |
| 0.2583 | 0.56 | 1050 | 0.1654 | 0.9333 |
| 0.219 | 0.59 | 1100 | 0.1760 | 0.9467 |
| 0.2241 | 0.61 | 1150 | 0.2107 | 0.92 |
| 0.2201 | 0.64 | 1200 | 0.2381 | 0.8933 |
| 0.1745 | 0.67 | 1250 | 0.1944 | 0.92 |
| 0.2698 | 0.69 | 1300 | 0.1971 | 0.9267 |
| 0.214 | 0.72 | 1350 | 0.1944 | 0.9333 |
| 0.2436 | 0.75 | 1400 | 0.2079 | 0.92 |
| 0.2318 | 0.77 | 1450 | 0.2088 | 0.9333 |
| 0.2206 | 0.8 | 1500 | 0.1875 | 0.94 |
| 0.2593 | 0.83 | 1550 | 0.1797 | 0.9267 |
| 0.1908 | 0.85 | 1600 | 0.1924 | 0.9333 |
| 0.2378 | 0.88 | 1650 | 0.1649 | 0.9267 |
| 0.2332 | 0.91 | 1700 | 0.1768 | 0.94 |
| 0.2125 | 0.93 | 1750 | 0.2276 | 0.92 |
| 0.2174 | 0.96 | 1800 | 0.2035 | 0.9333 |
| 0.19 | 0.99 | 1850 | 0.1805 | 0.94 |
| 0.1515 | 1.01 | 1900 | 0.1832 | 0.94 |
| 0.1671 | 1.04 | 1950 | 0.1902 | 0.94 |
| 0.171 | 1.07 | 2000 | 0.2468 | 0.9267 |
| 0.1495 | 1.09 | 2050 | 0.2276 | 0.9267 |
| 0.1535 | 1.12 | 2100 | 0.1926 | 0.94 |
| 0.2085 | 1.15 | 2150 | 0.1878 | 0.94 |
| 0.1395 | 1.17 | 2200 | 0.1795 | 0.9467 |
| 0.1556 | 1.2 | 2250 | 0.1554 | 0.9467 |
| 0.1273 | 1.23 | 2300 | 0.1707 | 0.94 |
| 0.1873 | 1.25 | 2350 | 0.1867 | 0.9467 |
| 0.1589 | 1.28 | 2400 | 0.2089 | 0.9333 |
| 0.1426 | 1.31 | 2450 | 0.1797 | 0.9467 |
| 0.149 | 1.33 | 2500 | 0.1991 | 0.9333 |
| 0.1535 | 1.36 | 2550 | 0.2116 | 0.94 |
| 0.1671 | 1.39 | 2600 | 0.1704 | 0.9467 |
| 0.1582 | 1.41 | 2650 | 0.1843 | 0.94 |
| 0.1393 | 1.44 | 2700 | 0.1831 | 0.94 |
| 0.1474 | 1.47 | 2750 | 0.1895 | 0.94 |
| 0.203 | 1.49 | 2800 | 0.1843 | 0.9467 |
| 0.1562 | 1.52 | 2850 | 0.2060 | 0.9467 |
| 0.1886 | 1.55 | 2900 | 0.1837 | 0.94 |
| 0.1332 | 1.57 | 2950 | 0.1920 | 0.9467 |
| 0.1519 | 1.6 | 3000 | 0.1789 | 0.9533 |
| 0.1354 | 1.63 | 3050 | 0.1974 | 0.9467 |
| 0.125 | 1.65 | 3100 | 0.1890 | 0.9533 |
| 0.2044 | 1.68 | 3150 | 0.1755 | 0.9533 |
| 0.1746 | 1.71 | 3200 | 0.1607 | 0.9467 |
| 0.1981 | 1.73 | 3250 | 0.1613 | 0.9533 |
| 0.1276 | 1.76 | 3300 | 0.1825 | 0.96 |
| 0.1935 | 1.79 | 3350 | 0.1707 | 0.9533 |
| 0.1848 | 1.81 | 3400 | 0.1697 | 0.96 |
| 0.1596 | 1.84 | 3450 | 0.1581 | 0.9667 |
| 0.1797 | 1.87 | 3500 | 0.1634 | 0.96 |
| 0.1493 | 1.89 | 3550 | 0.1614 | 0.9533 |
| 0.1703 | 1.92 | 3600 | 0.1673 | 0.9467 |
| 0.1951 | 1.95 | 3650 | 0.1589 | 0.9533 |
| 0.1582 | 1.97 | 3700 | 0.1761 | 0.9467 |
| 0.1974 | 2.0 | 3750 | 0.1918 | 0.94 |
| 0.1056 | 2.03 | 3800 | 0.2063 | 0.94 |
| 0.1109 | 2.05 | 3850 | 0.2031 | 0.9467 |
| 0.113 | 2.08 | 3900 | 0.2118 | 0.9467 |
| 0.0834 | 2.11 | 3950 | 0.1974 | 0.9533 |
| 0.1434 | 2.13 | 4000 | 0.2075 | 0.9533 |
| 0.0691 | 2.16 | 4050 | 0.2178 | 0.9533 |
| 0.1144 | 2.19 | 4100 | 0.2383 | 0.9467 |
| 0.1446 | 2.21 | 4150 | 0.2207 | 0.9533 |
| 0.172 | 2.24 | 4200 | 0.2034 | 0.9467 |
| 0.1026 | 2.27 | 4250 | 0.2048 | 0.9467 |
| 0.1131 | 2.29 | 4300 | 0.2334 | 0.9467 |
| 0.121 | 2.32 | 4350 | 0.2367 | 0.9333 |
| 0.1144 | 2.35 | 4400 | 0.2313 | 0.9467 |
| 0.1089 | 2.37 | 4450 | 0.2352 | 0.9533 |
| 0.1193 | 2.4 | 4500 | 0.2440 | 0.94 |
| 0.0689 | 2.43 | 4550 | 0.2379 | 0.9333 |
| 0.1799 | 2.45 | 4600 | 0.2354 | 0.9467 |
| 0.1068 | 2.48 | 4650 | 0.2158 | 0.9533 |
| 0.0974 | 2.51 | 4700 | 0.2456 | 0.94 |
| 0.0637 | 2.53 | 4750 | 0.2191 | 0.9333 |
| 0.1125 | 2.56 | 4800 | 0.2390 | 0.9467 |
| 0.1706 | 2.59 | 4850 | 0.2407 | 0.94 |
| 0.1533 | 2.61 | 4900 | 0.2242 | 0.9533 |
| 0.1357 | 2.64 | 4950 | 0.2119 | 0.9533 |
| 0.1342 | 2.67 | 5000 | 0.2268 | 0.9467 |
| 0.0796 | 2.69 | 5050 | 0.2450 | 0.9467 |
| 0.1351 | 2.72 | 5100 | 0.2499 | 0.94 |
| 0.1285 | 2.75 | 5150 | 0.2252 | 0.94 |
| 0.1563 | 2.77 | 5200 | 0.2191 | 0.94 |
| 0.1022 | 2.8 | 5250 | 0.2256 | 0.9533 |
| 0.11 | 2.83 | 5300 | 0.2365 | 0.9467 |
| 0.0926 | 2.85 | 5350 | 0.2206 | 0.9467 |
| 0.1043 | 2.88 | 5400 | 0.2018 | 0.9533 |
| 0.1041 | 2.91 | 5450 | 0.2268 | 0.9467 |
| 0.1232 | 2.93 | 5500 | 0.2164 | 0.9467 |
| 0.1537 | 2.96 | 5550 | 0.1956 | 0.9533 |
| 0.1188 | 2.99 | 5600 | 0.2126 | 0.9467 |
| 0.0749 | 3.01 | 5650 | 0.2249 | 0.9467 |
| 0.062 | 3.04 | 5700 | 0.2254 | 0.9467 |
| 0.0755 | 3.07 | 5750 | 0.2472 | 0.94 |
| 0.0866 | 3.09 | 5800 | 0.2569 | 0.94 |
| 0.0502 | 3.12 | 5850 | 0.2481 | 0.9467 |
| 0.1158 | 3.15 | 5900 | 0.2457 | 0.94 |
| 0.0413 | 3.17 | 5950 | 0.2500 | 0.94 |
| 0.0966 | 3.2 | 6000 | 0.2851 | 0.9333 |
| 0.0613 | 3.23 | 6050 | 0.2717 | 0.9467 |
| 0.1029 | 3.25 | 6100 | 0.2714 | 0.94 |
| 0.0833 | 3.28 | 6150 | 0.2683 | 0.94 |
| 0.0928 | 3.31 | 6200 | 0.2490 | 0.9467 |
| 0.0571 | 3.33 | 6250 | 0.2575 | 0.9533 |
| 0.1252 | 3.36 | 6300 | 0.2599 | 0.9467 |
| 0.0788 | 3.39 | 6350 | 0.2522 | 0.9467 |
| 0.0862 | 3.41 | 6400 | 0.2489 | 0.9533 |
| 0.112 | 3.44 | 6450 | 0.2452 | 0.9533 |
| 0.0868 | 3.47 | 6500 | 0.2438 | 0.9533 |
| 0.0979 | 3.49 | 6550 | 0.2474 | 0.94 |
| 0.0739 | 3.52 | 6600 | 0.2508 | 0.94 |
| 0.0786 | 3.55 | 6650 | 0.2621 | 0.94 |
| 0.0872 | 3.57 | 6700 | 0.2543 | 0.9333 |
| 0.0962 | 3.6 | 6750 | 0.2347 | 0.9467 |
| 0.124 | 3.63 | 6800 | 0.2319 | 0.9533 |
| 0.0747 | 3.65 | 6850 | 0.2448 | 0.9533 |
| 0.0591 | 3.68 | 6900 | 0.2379 | 0.94 |
| 0.1049 | 3.71 | 6950 | 0.2493 | 0.9333 |
| 0.0772 | 3.73 | 7000 | 0.2429 | 0.94 |
| 0.071 | 3.76 | 7050 | 0.2558 | 0.94 |
| 0.1116 | 3.79 | 7100 | 0.2600 | 0.94 |
| 0.1199 | 3.81 | 7150 | 0.2480 | 0.94 |
| 0.0819 | 3.84 | 7200 | 0.2506 | 0.94 |
| 0.1054 | 3.87 | 7250 | 0.2431 | 0.94 |
| 0.09 | 3.89 | 7300 | 0.2582 | 0.9333 |
| 0.0936 | 3.92 | 7350 | 0.2460 | 0.94 |
| 0.0469 | 3.95 | 7400 | 0.2509 | 0.94 |
| 0.1101 | 3.97 | 7450 | 0.2545 | 0.9467 |
| 0.1077 | 4.0 | 7500 | 0.2640 | 0.9467 |
| 0.0777 | 4.03 | 7550 | 0.2709 | 0.94 |
| 0.0777 | 4.05 | 7600 | 0.2842 | 0.94 |
| 0.0847 | 4.08 | 7650 | 0.2649 | 0.94 |
| 0.0462 | 4.11 | 7700 | 0.2702 | 0.9467 |
| 0.0572 | 4.13 | 7750 | 0.2628 | 0.94 |
| 0.0435 | 4.16 | 7800 | 0.2689 | 0.9467 |
| 0.0566 | 4.19 | 7850 | 0.2727 | 0.9467 |
| 0.1149 | 4.21 | 7900 | 0.2635 | 0.9467 |
| 0.0557 | 4.24 | 7950 | 0.2665 | 0.9467 |
| 0.061 | 4.27 | 8000 | 0.2680 | 0.9467 |
| 0.0664 | 4.29 | 8050 | 0.2767 | 0.9467 |
| 0.0481 | 4.32 | 8100 | 0.2662 | 0.9467 |
| 0.0893 | 4.35 | 8150 | 0.2677 | 0.9467 |
| 0.0855 | 4.37 | 8200 | 0.2733 | 0.9467 |
| 0.0552 | 4.4 | 8250 | 0.2589 | 0.94 |
| 0.0469 | 4.43 | 8300 | 0.2733 | 0.94 |
| 0.0633 | 4.45 | 8350 | 0.2799 | 0.94 |
| 0.0629 | 4.48 | 8400 | 0.2838 | 0.94 |
| 0.0854 | 4.51 | 8450 | 0.2837 | 0.94 |
| 0.0596 | 4.53 | 8500 | 0.2808 | 0.94 |
| 0.0579 | 4.56 | 8550 | 0.2839 | 0.94 |
| 0.0508 | 4.59 | 8600 | 0.2844 | 0.94 |
| 0.0557 | 4.61 | 8650 | 0.2833 | 0.94 |
| 0.0383 | 4.64 | 8700 | 0.2878 | 0.94 |
| 0.0554 | 4.67 | 8750 | 0.2924 | 0.94 |
| 0.0681 | 4.69 | 8800 | 0.2868 | 0.94 |
| 0.065 | 4.72 | 8850 | 0.2888 | 0.94 |
| 0.0731 | 4.75 | 8900 | 0.2946 | 0.94 |
| 0.0638 | 4.77 | 8950 | 0.2886 | 0.94 |
| 0.043 | 4.8 | 9000 | 0.2867 | 0.94 |
| 0.0658 | 4.83 | 9050 | 0.2872 | 0.94 |
| 0.0249 | 4.85 | 9100 | 0.2882 | 0.94 |
| 0.0612 | 4.88 | 9150 | 0.2902 | 0.94 |
| 0.0271 | 4.91 | 9200 | 0.2890 | 0.94 |
| 0.0308 | 4.93 | 9250 | 0.2897 | 0.94 |
| 0.0896 | 4.96 | 9300 | 0.2898 | 0.94 |
| 0.1172 | 4.99 | 9350 | 0.2899 | 0.94 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: uk
model-index:
- name: flair-uk-pos
results:
- task:
name: POS
type: token-classification
metrics:
- name: POS F Score
type: f_score
value: 0.9793
widget:
- text: "Президент Володимир Зеленський пояснив, що наразі діалог із режимом Володимира путіна неможливий, адже агресор обрав курс на знищення українського народу. За словами Зеленського цей режим РФ виявляє неповагу до суверенітету і територіальної цілісності України."
license: mit
---
# flair-uk-pos
## Model description
**flair-uk-pos** is a Flair model that is ready to use for part-of-speech (upos) tagging. It is based on flair embeddings that I've trained for the Ukrainian language (available [here](https://huggingface.co/dchaplinsky/flair-uk-backward) and [here](https://huggingface.co/dchaplinsky/flair-uk-forward)), and it offers superior performance at a very **small size** (just 72 MB!).
Results:
- F-score (micro) **0.9793**
- F-score (macro) **0.9275**
- Accuracy **0.9793**
| Class | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| NOUN | 0.9857 | 0.9851 | 0.9854 | 4549 |
| PUNCT | 0.9984 | 1.0000 | 0.9992 | 3097 |
| ADJ | 0.9772 | 0.9852 | 0.9812 | 1959 |
| ADP | 0.9956 | 0.9968 | 0.9962 | 1584 |
| VERB | 0.9891 | 0.9910 | 0.9900 | 1552 |
| ADV | 0.9630 | 0.9118 | 0.9367 | 714 |
| CCONJ | 0.9685 | 0.9746 | 0.9715 | 630 |
| PROPN | 0.9279 | 0.9472 | 0.9375 | 625 |
| DET | 0.9729 | 0.9698 | 0.9713 | 629 |
| PRON | 0.9706 | 0.9631 | 0.9669 | 515 |
| PART | 0.9235 | 0.8693 | 0.8956 | 375 |
| NUM | 0.9722 | 0.9804 | 0.9763 | 357 |
| SCONJ | 0.8768 | 0.9577 | 0.9154 | 260 |
| AUX | 0.8906 | 0.9500 | 0.9194 | 120 |
| X | 0.9833 | 0.9593 | 0.9712 | 123 |
| SYM | 1.0000 | 0.7059 | 0.8276 | 17 |
| INTJ | 0.5556 | 0.5000 | 0.5263 | 10 |
| accuracy | | | 0.9793 | 17116 |
| macro avg | 0.9383 | 0.9204 | 0.9275 | 17116 |
| weighted avg | 0.9794 | 0.9793 | 0.9792 | 17116 |
The model was fine-tuned on the [Ukrainian (UD) corpus](https://universaldependencies.org/treebanks/uk_iu/index.html), released by the [non-profit organization Institute for Ukrainian](https://mova.institute).
Training code is also available [here](https://github.com/lang-uk/flair-pos).
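For orientation, fine-tuning a Flair POS tagger on a UD-style corpus generally follows the pattern sketched below. This is only an illustrative sketch, not the linked training script: the data folder, the embedding file paths and the training hyperparameters are placeholders/assumptions.
```python
from flair.datasets import UniversalDependenciesCorpus
from flair.embeddings import FlairEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Placeholder path to a local copy of the UD Ukrainian-IU treebank (CoNLL-U files).
corpus = UniversalDependenciesCorpus(data_folder="data/uk_iu")

# Placeholder paths to the forward/backward Ukrainian flair language models.
embeddings = StackedEmbeddings([
    FlairEmbeddings("flair-uk-forward/best-lm.pt"),
    FlairEmbeddings("flair-uk-backward/best-lm.pt"),
])

# Build the upos tag dictionary from the corpus and create the tagger.
tag_dictionary = corpus.make_label_dictionary(label_type="upos")
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type="upos",
)

# Illustrative training settings, not the values used for the released model.
trainer = ModelTrainer(tagger, corpus)
trainer.train("resources/taggers/flair-uk-pos", learning_rate=0.1, mini_batch_size=32, max_epochs=20)
```
Relying on the compact flair character language models, rather than a transformer backbone, is presumably what keeps the final tagger this small.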
## [Usage demo](https://github.com/egorsmkv/flair-nlp-uk/blob/main/part_of_speech.py)
```python
from pprint import pprint

from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger from the Hugging Face Hub and tag an example sentence.
tagger = SequenceTagger.load("dchaplinsky/flair-uk-pos")
sentence = Sentence("Я люблю Україну. Моє імʼя Марія Шевченко, я навчаюся в Київській політехніці.")
tagger.predict(sentence)

print(sentence)
print('---')
print('The following POS tags are found:')

# For each predicted label, collect all labels attached to the same token.
pos_items = []
for label in sentence.get_labels():
    all_labels = []
    keys = label.data_point.annotation_layers.keys()
    for key in keys:
        all_labels.extend(
            [
                {'label': tag.value, 'score': round(tag.score, 4)}
                for tag in label.data_point.get_labels(key)
                if tag.data_point == label.data_point
            ]
        )
    pos_items.append({
        'text': label.data_point.text,
        'all_labels': all_labels,
    })
pprint(pos_items)
# Result:
"""
Sentence: "Я люблю Україну . Моє імʼя Марія Шевченко , я навчаюся в Київській політехніці ." → ["Я"/PRON, "люблю"/VERB, "Україну"/PROPN, "."/PUNCT, "Моє"/DET, "імʼя"/NOUN, "Марія"/PROPN, "Шевченко"/PROPN, ","/PUNCT, "я"/PRON, "навчаюся"/VERB, "в"/ADP, "Київській"/ADJ, "політехніці"/NOUN, "."/PUNCT]
---
The following POS tags are found:
[{'all_labels': [{'label': 'PRON', 'score': 1.0}], 'text': 'Я'},
{'all_labels': [{'label': 'VERB', 'score': 1.0}], 'text': 'люблю'},
{'all_labels': [{'label': 'PROPN', 'score': 1.0}], 'text': 'Україну'},
{'all_labels': [{'label': 'PUNCT', 'score': 1.0}], 'text': '.'},
{'all_labels': [{'label': 'DET', 'score': 0.9999}], 'text': 'Моє'},
{'all_labels': [{'label': 'NOUN', 'score': 1.0}], 'text': 'імʼя'},
{'all_labels': [{'label': 'PROPN', 'score': 1.0}], 'text': 'Марія'},
{'all_labels': [{'label': 'PROPN', 'score': 1.0}], 'text': 'Шевченко'},
{'all_labels': [{'label': 'PUNCT', 'score': 1.0}], 'text': ','},
{'all_labels': [{'label': 'PRON', 'score': 1.0}], 'text': 'я'},
{'all_labels': [{'label': 'VERB', 'score': 1.0}], 'text': 'навчаюся'},
{'all_labels': [{'label': 'ADP', 'score': 1.0}], 'text': 'в'},
{'all_labels': [{'label': 'ADJ', 'score': 1.0}], 'text': 'Київській'},
{'all_labels': [{'label': 'NOUN', 'score': 1.0}], 'text': 'політехніці'},
{'all_labels': [{'label': 'PUNCT', 'score': 1.0}], 'text': '.'}]
"""
```
Copyright: [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2022 |
AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: unknown
inference: false
tags:
- mlconsole
- tabular-regression
library_name: mlconsole
metrics:
- mae
- loss
datasets:
- house_price_prediction
model-index:
- name: house_price_prediction_dev
results:
- task:
type: tabular-regression
name: tabular-regression
dataset:
type: house_price_prediction
name: house_price_prediction
metrics:
- type: mae
name: Mean absolute error
value: 7.064809322357178
- type: loss
name: Model loss
value: 98.9962387084961
---
# regression model trained on "house_price_prediction"
🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/house_price_prediction_dev) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: unknown
inference: false
tags:
- mlconsole
- tabular-regression
library_name: mlconsole
metrics:
- mae
- loss
datasets:
- house_price_prediction
model-index:
- name: house_price_prediction_ser2
results:
- task:
type: tabular-regression
name: tabular-regression
dataset:
type: house_price_prediction
name: house_price_prediction
metrics:
- type: mae
name: Mean absolute error
value: 5.011783599853516
- type: loss
name: Model loss
value: 43.01755905151367
---
# regression model trained on "house_price_prediction"
🤖 [Load and use this model](https://mlconsole.com/model/hf/halflings/house_price_prediction_ser2) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
AnonymousSub/AR_rule_based_roberta_hier_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/Guizmus/BloodborneDiffusion/resolve/main/bloodbornestyle_showcase.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
inference: true
---
# Bloodborne Diffusion
<p>
<img src="https://huggingface.co/Guizmus/BloodborneDiffusion/resolve/main/bloodbornestyle_showcase.jpg"/><br/>
This is a Dreambooth-trained Stable Diffusion model fine-tuned on the style of the Bloodborne series.<br/>
The dataset consists of 100 pictures, and training was run on runwayml 1.5 with the new VAE for 12k steps (polynomial LR schedule, 1e-6).<br/>
The token "Bloodborne Style" invokes the new concept.<br/>
The recommended sampling is k_Euler_a or DPM++ 2M Karras at 20 steps, CFG scale 7.
</p>
[CKPT download link](https://huggingface.co/Guizmus/Bloodborne/resolve/main/BloodborneStyle-v1-1.ckpt)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Guizmus/BloodborneDiffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a red moon, Bloodborne Style"
image = pipe(prompt).images[0]
image.save("./BloodborneStyle.png")
```
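To follow the recommended sampler settings above, the scheduler can be swapped before generation. Below is a minimal sketch using the k_Euler_a equivalent in diffusers (`EulerAncestralDiscreteScheduler`); the DPM++ 2M Karras option corresponds, to the best of my knowledge, to `DPMSolverMultistepScheduler` with Karras sigmas in recent diffusers versions.
```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch

pipe = StableDiffusionPipeline.from_pretrained("Guizmus/BloodborneDiffusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Swap in the ancestral Euler sampler (k_Euler_a) and use the recommended settings.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = "a red moon, Bloodborne Style"
image = pipe(prompt, num_inference_steps=20, guidance_scale=7).images[0]
image.save("./BloodborneStyle_euler_a.png")
```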
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/dallinmackay/Cats-Musical-diffusion/resolve/main/cats_preview1.jpg"
tags:
- stable-diffusion
- text-to-image
---
### Cats the Musical Diffusion
This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from the film **_Cats (2019)_**. Use the token **_ctsmscl_** at the BEGINNING of your prompts to use the style (e.g., "ctsmscl, thanos"). This model works best with the Euler sampler (NOT Euler_a). It will take some experimenting to get good results; my sample images are heavily cherry-picked this time (~10% success rate for likeness of real people). Use (prompt) [weighting] to balance the style with the character.
[CKPT download link](https://huggingface.co/dallinmackay/Cats-Musical-diffusion/resolve/main/Cats-Musical-Style-ctsmscl.ckpt)
The model you didn't ask for, and didn't know you needed. Trained on the horribly uncanny musical, Cats, which was based on an even more horrible previous version, which was based on a horrible live stage musical. It has endured all opposition, and who am I to stand in its way? This model is inevitable.
--
**Cat people rendered with this model:**
_prompt and settings used: **(ctsmscl), [person]** | **Steps: 45, Sampler: Euler, CFG scale: 5**_

--
**Roughly the entire main cast of Game of Thrones as cats:**

--
This model was trained with Dreambooth, using TheLastBen's Colab notebook
--
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
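A minimal usage sketch with 🧨 Diffusers, mirroring the settings recommended above (this assumes diffusers-format weights are available in this repository; the prompt subject is illustrative):
```python
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
import torch

model_id = "dallinmackay/Cats-Musical-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The card recommends the plain Euler sampler (not Euler ancestral).
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# The style token goes at the beginning of the prompt.
prompt = "ctsmscl, portrait of a medieval knight"
image = pipe(prompt, num_inference_steps=45, guidance_scale=5).images[0]
image.save("./ctsmscl_knight.png")
```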
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
--
[](https://www.patreon.com/dallinmackay) |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language:
- pt
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small PT with Common Voice 11
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: pt, split: test'
metrics:
- name: Wer
type: wer
value: 14.380154024398555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small PT with Common Voice 11
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3487
- Wer: 14.3802
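Given the results above, a minimal transcription sketch with the 🤗 Transformers pipeline might look like the following; the repository id is a placeholder for wherever this fine-tuned checkpoint is hosted, and the audio file name is illustrative:
```python
from transformers import pipeline

# Placeholder repository id; replace with the actual id of this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-pt-cv11",
)

# Transcribe a Portuguese audio file (illustrative file name).
result = asr("exemplo_pt.wav")
print(result["text"])
```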
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
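The sketch below only mirrors the values listed above; the output directory is a placeholder, and flags not reported in this card (evaluation strategy, precision, generation settings) are deliberately omitted:
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the listed hyperparameters.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-pt",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # effective train batch size: 16
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=10000,
    seed=42,
)
```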
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1202 | 0.88 | 1000 | 0.2225 | 15.5847 |
| 0.1024 | 1.76 | 2000 | 0.2160 | 15.0651 |
| 0.0832 | 2.64 | 3000 | 0.2259 | 15.0923 |
| 0.0081 | 3.51 | 4000 | 0.2519 | 14.7345 |
| 0.0387 | 4.39 | 5000 | 0.2718 | 14.7311 |
| 0.0039 | 5.27 | 6000 | 0.3031 | 14.5914 |
| 0.001 | 6.15 | 7000 | 0.3238 | 14.5710 |
| 0.0007 | 7.03 | 8000 | 0.3285 | 14.5113 |
| 0.0009 | 7.91 | 9000 | 0.3467 | 14.3580 |
| 0.0008 | 8.79 | 10000 | 0.3487 | 14.3802 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: artistic-2.0
---
Room
Hats
Sport suits |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | Access to model skg/na-models is restricted and you are not in the authorized list. Visit https://huggingface.co/skg/na-models to ask for access. |
AnonymousSub/AR_rule_based_twostagequadruplet_hier_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- conversational
---
# Sheldon DialoGPT Model |
AnonymousSub/AR_rule_based_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | Room with a big wardrobe which contains lots of sport suits, make up and complements |
AnonymousSub/AR_rule_based_twostagetriplet_hier_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language:
- pt
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny PT with Common Voice 11
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: pt, split: test'
metrics:
- name: Wer
type: wer
value: 33.24473522796974
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny PT with Common Voice 11
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5205
- Wer: 33.2447
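The WER reported above can be computed for any set of transcriptions with the `evaluate` library; here is a minimal sketch with illustrative predictions and references (WER is expressed as a percentage, as in the tables below):
```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative data: model transcriptions vs. reference transcripts.
predictions = ["o gato está no telhado", "ela foi ao mercado ontem"]
references = ["o gato está no telhado", "ela foi ao mercado hoje"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```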
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 16000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.3154 | 0.44 | 1000 | 0.4987 | 36.2196 |
| 0.3252 | 0.88 | 2000 | 0.4586 | 33.6213 |
| 0.1989 | 1.32 | 3000 | 0.4457 | 32.7455 |
| 0.3112 | 1.76 | 4000 | 0.4356 | 31.4097 |
| 0.1329 | 2.2 | 5000 | 0.4348 | 31.1559 |
| 0.1193 | 2.64 | 6000 | 0.4343 | 31.4046 |
| 0.0723 | 3.07 | 7000 | 0.4424 | 31.5869 |
| 0.0698 | 3.51 | 8000 | 0.4497 | 32.0827 |
| 0.0865 | 3.95 | 9000 | 0.4497 | 31.0945 |
| 0.0522 | 4.39 | 10000 | 0.4716 | 32.2190 |
| 0.0542 | 4.83 | 11000 | 0.4761 | 32.6944 |
| 0.061 | 5.27 | 12000 | 0.4983 | 32.0691 |
| 0.0459 | 5.71 | 13000 | 0.4985 | 32.4968 |
| 0.0338 | 6.15 | 14000 | 0.5123 | 33.3129 |
| 0.0492 | 6.59 | 15000 | 0.5217 | 33.2686 |
| 0.0194 | 7.03 | 16000 | 0.5205 | 33.2447 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AnonymousSub/EManuals_BERT_copy | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | Access to model ThrinathMphasis/mickey-mouse is restricted and you are not in the authorized list. Visit https://huggingface.co/ThrinathMphasis/mickey-mouse to ask for access. |