modelId (stringlengths 4–81) | tags (sequence) | pipeline_tag (stringclasses, 17 values) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (stringlengths 51–438k) |
---|---|---|---|---|---|---|
Appolo/TestModel | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- nl
- en
datasets:
- yhavinga/mc4_nl_cleaned
- yhavinga/ccmatrix
tags:
- t5
- translation
- seq2seq
pipeline_tag: translation
widget:
- text: "It is a painful and tragic spectacle that rises before me: I have drawn back the curtain from the rottenness of man. This word, in my mouth, is at least free from one suspicion: that it involves a moral accusation against humanity."
- text: "Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible. His camouflage was perfect, since the waiting room had a disorderly and demoralized air, too. Chairs and ashtrays had been moved away from the walls. The floor was paved with spattered dropcloths."
license: apache-2.0
---
# t5-small-24L-ccmatrix-multi
A [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) model fine-tuned for Dutch to English and English to Dutch translation on the CCMatrix dataset.
Evaluation metrics of this model are listed in the **Translation models** section below.
You can use this model directly with a pipeline for text translation:
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

model_name = "yhavinga/t5-small-24L-ccmatrix-multi"

# Use the first GPU if available, otherwise fall back to the CPU.
device_num = 0 if torch.cuda.is_available() else -1
device = "cpu" if device_num < 0 else f"cuda:{device_num}"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)

# Generation settings shared by both translation directions.
params = {"max_length": 128, "num_beams": 4, "early_stopping": True}

en_to_nl = pipeline("translation_en_to_nl", tokenizer=tokenizer, model=model, device=device_num)
print(en_to_nl("""Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible.""",
               **params)[0]["translation_text"])

nl_to_en = pipeline("translation_nl_to_en", tokenizer=tokenizer, model=model, device=device_num)
print(nl_to_en("""De jonge Wehling zat gebogen in zijn stoel, zijn hoofd in zijn hand. Hij was zo stoffig, zo stil en kleurloos dat hij vrijwel onzichtbaar was.""",
               **params)[0]["translation_text"])
```
This **t5 eff** model has **249M** parameters.
It was pre-trained with a masked language modeling (denoising token span corruption) objective on the dataset
`mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **4d10h**,
with a sequence length of **512**, batch size **128** and **851852** total steps (**56B** tokens).
Pre-training evaluation loss and accuracy are **1,18** and **0,74**.
Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.
## Tokenizer
The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers
and has 32003 tokens.
It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
See [./raw/main/tokenizer.json](tokenizer.json) for details.
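For illustration, a small sketch (not part of the original card) that loads the tokenizer and inspects it; the printed vocabulary size should correspond to the 32003 tokens mentioned above:
```python
# Hedged sketch: load the tokenizer and check its size and normalization behaviour.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yhavinga/t5-small-24L-ccmatrix-multi")
print(len(tokenizer))  # the card reports 32003 tokens
# Multiple spaces should be collapsed by the "Replace multi-space to single-space" normalizer.
print(tokenizer.tokenize("Dit is   een zin met   veel spaties."))
```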
## Dataset(s)
All models listed below are pre-trained on
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4.
The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix).
## Dutch T5 Models
Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models).
`t5-base-dutch` is the only model with an original T5 config.
The other model types t5-v1.1 and t5-eff have `gated-gelu` instead of `relu` as activation function,
and were trained with a dropout of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`).
The t5-eff models differ in their number of layers. The table below lists
the dimensions of these models. Not all t5-eff models are efficient; the best example is the inefficient
`t5-xl-4L-dutch-english-cased`.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 |
| *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 |
## Evaluation
Most models from the list above have been fine-tuned for summarization and translation.
The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better)
and the y-axis the summarization Rouge1 score (higher is better).
Point size is proportional to the model size. Models with faster inference speed are plotted in green, models with
slower inference speed in blue.

Evaluation was run on fine-tuned models trained with the following settings:
| | Summarization | Translation |
|---------------:|------------------|-------------------|
| Dataset | CNN Dailymail NL | CCMatrix en -> nl |
| #train samples | 50K | 50K |
| Optimizer | Adam | Adam |
| learning rate | 0.001 | 0.0005 |
| source length | 1024 | 128 |
| target length | 142 | 128 |
|label smoothing | 0.05 | 0.1 |
| #eval samples | 1000 | 1000 |
Note that the amount of training data is limited to a fraction of the total dataset sizes; therefore the scores
below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation
are not saved, since they were trained for comparison of pre-trained models only.
The numbers for summarization are the Rouge scores on 1000 documents from the test split.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 |
| *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 |
| *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 |
| *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 |
| *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 |
The models below have been evaluated for English to Dutch translation.
Note that the first four models are pre-trained on Dutch only. That they still perform adequately is probably because
the translation direction is English to Dutch.
The numbers reported are the Bleu scores on 1000 documents from the test split.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 |
| *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 |
| *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 |
| *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 |
| *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 |
| *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
## Translation models
The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language
directions on the first 25M samples from CCMatrix, giving a total of 50M training samples.
Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books.
The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the Bleu score
averaged over all three evaluation datasets. The best scores are displayed in bold for each translation direction;
a usage sketch with the task prefixes follows the table.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) |
|:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------|
| *source_lang* | en | nl | en | nl |
| *target_lang* | nl | en | nl | en |
| *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: |
| *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** |
| *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 |
| *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 |
| *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 |
| *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 |
| *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 |
| *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 |
| *max_source_length* | 128 | 128 | 128 | 128 |
| *max_target_length* | 128 | 128 | 128 | 128 |
| *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 |
| *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 |
| *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 |
| *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 |
| *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 |
| *train_batch_size* | 128 | 128 | 128 | 128 |
| *warmup_steps* | 2000 | 2000 | 2000 | 2000 |
| *total steps* | 390625 | 390625 | 390625 | 390625 |
| *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h |
| *num parameters* | 729M | 729M | 250M | 250M |
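For illustration, a minimal sketch (an assumption-based example, not part of the original card) that applies the source prefixes from the table directly with `generate()`, using the same beam-search settings as the pipeline example above:
```python
# Hedged sketch: translate with the task prefixes listed in the table above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "yhavinga/t5-small-24L-ccmatrix-multi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The example sentence is illustrative; the prefix comes from the *source_prefix* row.
text = "translate Dutch to English: Het regende de hele dag."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```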
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts
of the training. Weights & Biases made it possible to keep track of many training sessions
and orchestrate hyper-parameter sweeps with insightful visualizations.
The following repositories were helpful in setting up the TPU-VM,
and in getting an idea of sensible hyper-parameters for training these models from scratch:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
ArBert/albert-base-v2-finetuned-ner-gmm-twitter | [
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- all
license: apache-2.0
tags:
- fleurs-asr
- google/xtreme_s
- generated_from_trainer
datasets:
- google/xtreme_s
model-index:
- name: xtreme_s_xlsr_300m_fleurs_asr_western_european
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_fleurs_asr_western_european
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - FLEURS.ALL dataset.
It achieves the following results on the evaluation set:
- Cer: 0.2484
- Cer Ast Es: 0.1598
- Cer Bs Ba: 0.1749
- Cer Ca Es: 0.1655
- Cer Cy Gb: 0.2280
- Cer Da Dk: 0.3616
- Cer De De: 0.1287
- Cer El Gr: 0.6020
- Cer En Us: 0.1938
- Cer Es 419: 0.1288
- Cer Fi Fi: 0.2050
- Cer Fr Fr: 0.1811
- Cer Ga Ie: 0.4474
- Cer Gl Es: 0.1324
- Cer Hr Hr: 0.1555
- Cer Hu Hu: 0.3911
- Cer Is Is: 0.4646
- Cer It It: 0.1283
- Cer Kea Cv: 0.1818
- Cer Lb Lu: 0.2594
- Cer Mt Mt: 0.3628
- Cer Nb No: 0.2254
- Cer Nl Nl: 0.1790
- Cer Oci Fr: 0.2159
- Cer Pt Br: 0.2275
- Cer Sv Se: 0.3092
- Loss: 1.3089
- Loss Ast Es: 0.7715
- Loss Bs Ba: 0.7378
- Loss Ca Es: 0.7868
- Loss Cy Gb: 1.1441
- Loss Da Dk: 1.9130
- Loss De De: 0.5391
- Loss El Gr: 3.4904
- Loss En Us: 0.9632
- Loss Es 419: 0.6186
- Loss Fi Fi: 0.8953
- Loss Fr Fr: 0.9076
- Loss Ga Ie: 3.0217
- Loss Gl Es: 0.5788
- Loss Hr Hr: 0.6462
- Loss Hu Hu: 1.9029
- Loss Is Is: 2.6551
- Loss It It: 0.6052
- Loss Kea Cv: 0.9107
- Loss Lb Lu: 1.3705
- Loss Mt Mt: 2.3651
- Loss Nb No: 1.1518
- Loss Nl Nl: 0.8490
- Loss Oci Fr: 1.1421
- Loss Pt Br: 1.1641
- Loss Sv Se: 1.5910
- Wer: 0.6451
- Wer Ast Es: 0.4654
- Wer Bs Ba: 0.5443
- Wer Ca Es: 0.4979
- Wer Cy Gb: 0.5962
- Wer Da Dk: 0.8455
- Wer De De: 0.4221
- Wer El Gr: 0.9805
- Wer En Us: 0.4556
- Wer Es 419: 0.3928
- Wer Fi Fi: 0.8116
- Wer Fr Fr: 0.4690
- Wer Ga Ie: 0.8519
- Wer Gl Es: 0.4245
- Wer Hr Hr: 0.4895
- Wer Hu Hu: 0.9099
- Wer Is Is: 0.9960
- Wer It It: 0.4415
- Wer Kea Cv: 0.5202
- Wer Lb Lu: 0.7225
- Wer Mt Mt: 1.0096
- Wer Nb No: 0.6541
- Wer Nl Nl: 0.5257
- Wer Oci Fr: 0.5770
- Wer Pt Br: 0.6685
- Wer Sv Se: 0.8546
- Predict Samples: 20043
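For illustration, a minimal inference sketch (not part of the auto-generated card); the full hub id and the audio file are assumptions:
```python
# Hedged sketch: transcribe a 16 kHz audio file with the fine-tuned CTC checkpoint.
# "xtreme_s_xlsr_300m_fleurs_asr_western_european" stands in for the full hub id,
# and "sample_16khz.wav" is an assumed local audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="xtreme_s_xlsr_300m_fleurs_asr_western_european")
print(asr("sample_16khz.wav")["text"])
```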
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.1411 | 0.49 | 500 | 3.1673 | 1.0 | 1.0 |
| 0.6397 | 0.97 | 1000 | 0.9039 | 0.7171 | 0.2862 |
| 0.4033 | 1.46 | 1500 | 0.8914 | 0.6862 | 0.2763 |
| 0.3473 | 1.94 | 2000 | 0.8017 | 0.6505 | 0.2536 |
| 0.3143 | 2.43 | 2500 | 0.8568 | 0.6566 | 0.2627 |
| 0.3004 | 2.91 | 3000 | 0.8898 | 0.6640 | 0.2686 |
| 0.282 | 3.4 | 3500 | 0.8489 | 0.6637 | 0.2571 |
| 0.2489 | 3.88 | 4000 | 0.8955 | 0.6744 | 0.2691 |
| 0.1706 | 4.37 | 4500 | 0.9190 | 0.6788 | 0.2688 |
| 0.3336 | 4.85 | 5000 | 0.8915 | 0.6594 | 0.2572 |
| 0.1426 | 5.34 | 5500 | 0.9501 | 0.6784 | 0.2686 |
| 0.2301 | 5.83 | 6000 | 1.0217 | 0.6719 | 0.2735 |
| 0.1325 | 6.31 | 6500 | 0.9578 | 0.6691 | 0.2655 |
| 0.1145 | 6.8 | 7000 | 0.9129 | 0.6680 | 0.2593 |
| 0.1202 | 7.28 | 7500 | 0.9646 | 0.6749 | 0.2619 |
| 0.143 | 7.77 | 8000 | 0.9200 | 0.6554 | 0.2554 |
| 0.1012 | 8.25 | 8500 | 0.9553 | 0.6787 | 0.2628 |
| 0.1018 | 8.74 | 9000 | 0.9455 | 0.6445 | 0.2511 |
| 0.1148 | 9.22 | 9500 | 1.0206 | 0.6725 | 0.2629 |
| 0.0794 | 9.71 | 10000 | 0.9305 | 0.6547 | 0.2526 |
| 0.2891 | 10.19 | 10500 | 1.0424 | 0.6709 | 0.2570 |
| 0.1665 | 10.68 | 11000 | 0.9760 | 0.6596 | 0.2507 |
| 0.1956 | 11.17 | 11500 | 0.9549 | 0.6340 | 0.2440 |
| 0.0828 | 11.65 | 12000 | 0.9598 | 0.6403 | 0.2460 |
| 0.059 | 12.14 | 12500 | 0.9972 | 0.6574 | 0.2531 |
| 0.0505 | 12.62 | 13000 | 0.9836 | 0.6534 | 0.2525 |
| 0.0336 | 13.11 | 13500 | 1.0619 | 0.6564 | 0.2519 |
| 0.0435 | 13.59 | 14000 | 1.0844 | 0.6480 | 0.2543 |
| 0.0216 | 14.08 | 14500 | 1.1084 | 0.6512 | 0.2521 |
| 0.0265 | 14.56 | 15000 | 1.1152 | 0.6607 | 0.2563 |
| 0.0975 | 15.05 | 15500 | 1.1060 | 0.6456 | 0.2471 |
| 0.1396 | 15.53 | 16000 | 1.1100 | 0.6337 | 0.2418 |
| 0.0701 | 16.02 | 16500 | 1.1731 | 0.6309 | 0.2415 |
| 0.1171 | 16.5 | 17000 | 1.1302 | 0.6315 | 0.2396 |
| 0.0778 | 16.99 | 17500 | 1.1485 | 0.6379 | 0.2447 |
| 0.0642 | 17.48 | 18000 | 1.2009 | 0.6400 | 0.2464 |
| 0.0322 | 17.96 | 18500 | 1.2028 | 0.6357 | 0.2425 |
| 0.031 | 18.45 | 19000 | 1.2381 | 0.6285 | 0.2416 |
| 0.0579 | 18.93 | 19500 | 1.2299 | 0.6265 | 0.2409 |
| 0.0628 | 19.42 | 20000 | 1.2582 | 0.6277 | 0.2395 |
| 0.074 | 19.9 | 20500 | 1.2572 | 0.6278 | 0.2394 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
ArBert/roberta-base-finetuned-ner-agglo-twitter | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5827
- Wer: 0.4147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
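For illustration, a hedged `TrainingArguments` sketch that mirrors the listed settings (an assumption about how the run was configured; the model, dataset and data collator are omitted):
```python
# Sketch of TrainingArguments matching the hyperparameters above (assumed configuration).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```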
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4314 | 7.04 | 500 | 0.5453 | 0.4922 |
| 0.2357 | 14.08 | 1000 | 0.5573 | 0.4376 |
| 0.1283 | 21.13 | 1500 | 0.5827 | 0.4147 |
| 0.1169 | 28.17 | 2000 | 0.5827 | 0.4147 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ArBert/roberta-base-finetuned-ner-agglo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-qmsum-meeting-summarization
results: []
datasets:
- yawnick/QMSum
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-qmsum-meeting-summarization
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the QMSum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3354
- Rouge1: 39.5539
- Rouge2: 12.1134
- Rougel: 23.9163
- Rougelsum: 36.0299
- Gen Len: 117.225
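For illustration, a minimal summarization sketch (not part of the auto-generated card); the hub id and the transcript snippet are assumptions:
```python
# Hedged sketch: summarize a meeting-transcript snippet with the fine-tuned model.
from transformers import pipeline

summarizer = pipeline("summarization", model="bart-qmsum-meeting-summarization")  # assumed hub id
transcript = "Project manager: let's walk through the remote control design. Industrial designer: the buttons need to be larger and the casing should be rubber."
print(summarizer(transcript, max_length=120, min_length=30)[0]["summary_text"])
```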
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 5.5573 | 2.17 | 100 | 5.4074 | 23.6282 | 4.1122 | 14.584 | 21.2263 | 84.75 |
| 5.4721 | 4.35 | 200 | 5.2899 | 24.61 | 4.272 | 15.2096 | 22.2997 | 87.2 |
| 5.3407 | 6.52 | 300 | 5.1360 | 25.8272 | 4.3314 | 15.9926 | 23.3416 | 87.95 |
| 5.1527 | 8.7 | 400 | 4.9751 | 27.7207 | 5.31 | 16.7055 | 24.8357 | 88.35 |
| 5.0058 | 10.87 | 500 | 4.8372 | 30.1847 | 6.8615 | 18.934 | 27.2424 | 89.95 |
| 4.8807 | 13.04 | 600 | 4.7488 | 33.1208 | 9.1784 | 20.655 | 30.1198 | 101.3 |
| 4.7931 | 15.22 | 700 | 4.6891 | 33.2266 | 8.4253 | 20.0334 | 30.4093 | 108.925 |
| 4.7272 | 17.39 | 800 | 4.6467 | 35.0475 | 9.326 | 21.0655 | 31.8413 | 111.7 |
| 4.6904 | 19.57 | 900 | 4.6102 | 34.869 | 9.6046 | 21.395 | 32.4346 | 115.05 |
| 4.6547 | 21.74 | 1000 | 4.5829 | 36.3392 | 10.9936 | 22.1524 | 33.6863 | 119.875 |
| 4.594 | 23.91 | 1100 | 4.5602 | 35.9717 | 10.3827 | 21.6118 | 32.8302 | 119.5 |
| 4.5714 | 26.09 | 1200 | 4.5424 | 36.3656 | 10.6282 | 22.2187 | 33.6494 | 118.0 |
| 4.542 | 28.26 | 1300 | 4.5256 | 36.7386 | 10.615 | 22.2487 | 34.1927 | 115.675 |
| 4.5092 | 30.43 | 1400 | 4.5116 | 37.1597 | 10.7751 | 22.6747 | 34.396 | 118.55 |
| 4.5031 | 32.61 | 1500 | 4.4981 | 37.6108 | 10.9732 | 22.8342 | 34.6833 | 117.125 |
| 4.4682 | 34.78 | 1600 | 4.4875 | 37.5057 | 11.1328 | 22.8973 | 34.7114 | 117.65 |
| 4.4387 | 36.96 | 1700 | 4.4775 | 38.1278 | 11.3597 | 23.1307 | 35.1869 | 115.65 |
| 4.4085 | 39.13 | 1800 | 4.4682 | 37.9578 | 11.4355 | 23.1149 | 35.4961 | 119.6 |
| 4.4166 | 41.3 | 1900 | 4.4592 | 38.1467 | 11.3208 | 23.045 | 35.0824 | 120.05 |
| 4.3971 | 43.48 | 2000 | 4.4517 | 37.9922 | 11.5071 | 23.3983 | 34.6918 | 114.425 |
| 4.3638 | 45.65 | 2100 | 4.4438 | 38.1666 | 11.4985 | 23.5518 | 35.1484 | 117.2 |
| 4.3522 | 47.83 | 2200 | 4.4377 | 37.7572 | 11.3984 | 23.4437 | 35.0453 | 113.725 |
| 4.3398 | 50.0 | 2300 | 4.4320 | 38.5833 | 11.4575 | 23.6411 | 35.3437 | 116.125 |
| 4.3341 | 52.17 | 2400 | 4.4247 | 38.2705 | 12.0374 | 23.5807 | 34.9985 | 110.8 |
| 4.3024 | 54.35 | 2500 | 4.4201 | 39.0206 | 12.2041 | 23.4394 | 35.6291 | 114.5 |
| 4.3117 | 56.52 | 2600 | 4.4147 | 38.6555 | 12.1079 | 23.5655 | 35.5287 | 111.325 |
| 4.2659 | 58.7 | 2700 | 4.4107 | 39.2235 | 12.025 | 23.934 | 36.2243 | 113.3 |
| 4.2946 | 60.87 | 2800 | 4.4055 | 39.0301 | 12.1833 | 23.8999 | 36.0487 | 110.325 |
| 4.2431 | 63.04 | 2900 | 4.4009 | 39.0498 | 12.3215 | 23.9686 | 36.0277 | 112.775 |
| 4.2439 | 65.22 | 3000 | 4.3968 | 38.8786 | 12.0985 | 23.8308 | 35.8575 | 115.175 |
| 4.2244 | 67.39 | 3100 | 4.3922 | 38.7614 | 12.1721 | 23.7736 | 35.6744 | 113.55 |
| 4.235 | 69.57 | 3200 | 4.3895 | 38.6858 | 11.3994 | 23.6392 | 35.3456 | 114.125 |
| 4.2064 | 71.74 | 3300 | 4.3859 | 39.0258 | 12.0435 | 24.2528 | 35.8378 | 113.5 |
| 4.1934 | 73.91 | 3400 | 4.3835 | 39.0467 | 11.5556 | 23.6704 | 35.5643 | 111.5 |
| 4.1859 | 76.09 | 3500 | 4.3800 | 38.776 | 11.729 | 24.1254 | 35.3894 | 112.9 |
| 4.1762 | 78.26 | 3600 | 4.3775 | 38.9465 | 11.9112 | 23.8123 | 35.5453 | 114.125 |
| 4.1848 | 80.43 | 3700 | 4.3744 | 39.2783 | 11.6539 | 23.8236 | 35.8465 | 110.225 |
| 4.1386 | 82.61 | 3800 | 4.3730 | 38.8894 | 11.4784 | 23.7534 | 35.5464 | 113.15 |
| 4.1483 | 84.78 | 3900 | 4.3710 | 39.2734 | 12.0285 | 23.8171 | 35.6884 | 115.95 |
| 4.1428 | 86.96 | 4000 | 4.3688 | 39.6134 | 12.0616 | 23.7454 | 36.0363 | 113.375 |
| 4.133 | 89.13 | 4100 | 4.3663 | 38.935 | 11.4781 | 23.8766 | 35.4061 | 114.15 |
| 4.1211 | 91.3 | 4200 | 4.3648 | 39.1488 | 11.8399 | 23.9935 | 35.3107 | 113.975 |
| 4.1076 | 93.48 | 4300 | 4.3650 | 38.9764 | 11.9963 | 23.4994 | 35.7214 | 116.25 |
| 4.121 | 95.65 | 4400 | 4.3597 | 38.9418 | 11.8416 | 24.0272 | 35.6597 | 111.325 |
| 4.0936 | 97.83 | 4500 | 4.3602 | 39.266 | 12.5616 | 24.2046 | 36.1883 | 114.275 |
| 4.0841 | 100.0 | 4600 | 4.3588 | 39.4659 | 12.2132 | 24.0521 | 36.249 | 115.475 |
| 4.0768 | 102.17 | 4700 | 4.3578 | 39.4167 | 12.0587 | 24.025 | 35.9668 | 114.375 |
| 4.0711 | 104.35 | 4800 | 4.3541 | 39.6943 | 12.1095 | 24.0925 | 36.3496 | 115.65 |
| 4.072 | 106.52 | 4900 | 4.3539 | 40.2024 | 12.4618 | 24.2863 | 36.8844 | 113.475 |
| 4.0646 | 108.7 | 5000 | 4.3540 | 39.4299 | 11.8085 | 23.686 | 36.0454 | 113.975 |
| 4.0508 | 110.87 | 5100 | 4.3517 | 39.9217 | 11.9379 | 24.2299 | 36.6362 | 115.5 |
| 4.0549 | 113.04 | 5200 | 4.3498 | 40.3496 | 12.2558 | 24.0271 | 36.9715 | 112.5 |
| 4.0428 | 115.22 | 5300 | 4.3497 | 40.1349 | 12.0628 | 24.0622 | 36.9169 | 113.95 |
| 4.0391 | 117.39 | 5400 | 4.3480 | 40.1209 | 12.3587 | 24.3456 | 36.8411 | 116.025 |
| 4.0195 | 119.57 | 5500 | 4.3474 | 39.5209 | 12.1325 | 24.2622 | 36.4357 | 111.975 |
| 4.0054 | 121.74 | 5600 | 4.3468 | 40.2885 | 12.4453 | 24.2373 | 36.932 | 117.375 |
| 4.0286 | 123.91 | 5700 | 4.3465 | 39.3943 | 11.8399 | 23.9786 | 35.991 | 116.475 |
| 4.005 | 126.09 | 5800 | 4.3459 | 38.7442 | 11.7408 | 23.8948 | 35.3673 | 117.625 |
| 3.991 | 128.26 | 5900 | 4.3444 | 39.6276 | 12.1549 | 23.9542 | 36.3832 | 115.675 |
| 4.0137 | 130.43 | 6000 | 4.3427 | 39.8331 | 12.2687 | 24.187 | 36.6144 | 115.475 |
| 3.9755 | 132.61 | 6100 | 4.3438 | 39.1907 | 12.1033 | 24.2339 | 35.9126 | 114.525 |
| 4.0134 | 134.78 | 6200 | 4.3422 | 39.4298 | 11.862 | 24.0847 | 35.5744 | 115.025 |
| 3.9935 | 136.96 | 6300 | 4.3416 | 39.4158 | 11.6968 | 23.9636 | 35.8155 | 114.35 |
| 3.9606 | 139.13 | 6400 | 4.3409 | 39.1239 | 11.7046 | 23.6846 | 36.0431 | 114.775 |
| 3.9834 | 141.3 | 6500 | 4.3404 | 39.6375 | 12.2746 | 24.2636 | 36.1425 | 116.175 |
| 3.9687 | 143.48 | 6600 | 4.3409 | 39.1494 | 12.1404 | 24.0778 | 35.4932 | 118.05 |
| 3.9861 | 145.65 | 6700 | 4.3394 | 39.6258 | 12.2497 | 23.9662 | 36.4054 | 116.8 |
| 3.9755 | 147.83 | 6800 | 4.3400 | 39.3121 | 11.7831 | 23.6584 | 35.9636 | 118.125 |
| 3.9591 | 150.0 | 6900 | 4.3390 | 39.6957 | 11.9406 | 24.0599 | 36.3021 | 114.9 |
| 3.9599 | 152.17 | 7000 | 4.3389 | 39.4271 | 11.4159 | 24.1437 | 35.9056 | 115.8 |
| 3.9456 | 154.35 | 7100 | 4.3384 | 39.4862 | 11.726 | 23.883 | 35.9839 | 116.375 |
| 3.9341 | 156.52 | 7200 | 4.3386 | 39.6915 | 11.8028 | 24.346 | 36.406 | 116.425 |
| 3.9648 | 158.7 | 7300 | 4.3383 | 39.9311 | 11.7135 | 23.985 | 36.2617 | 118.075 |
| 3.9486 | 160.87 | 7400 | 4.3372 | 39.8375 | 12.0014 | 24.0969 | 36.5902 | 118.8 |
| 3.9533 | 163.04 | 7500 | 4.3371 | 40.2678 | 12.3137 | 24.1916 | 37.1632 | 118.075 |
| 3.9344 | 165.22 | 7600 | 4.3369 | 39.5588 | 11.6805 | 24.1474 | 36.2021 | 114.875 |
| 3.9314 | 167.39 | 7700 | 4.3368 | 39.8649 | 11.9824 | 24.5459 | 36.3921 | 113.65 |
| 3.9558 | 169.57 | 7800 | 4.3363 | 39.8428 | 12.0892 | 24.0175 | 36.67 | 112.7 |
| 3.928 | 171.74 | 7900 | 4.3364 | 39.2281 | 11.8456 | 23.7212 | 36.2005 | 113.95 |
| 3.9351 | 173.91 | 8000 | 4.3363 | 39.9798 | 12.4387 | 23.7687 | 36.6472 | 115.45 |
| 3.9326 | 176.09 | 8100 | 4.3363 | 39.9772 | 12.1193 | 24.1518 | 36.5791 | 117.4 |
| 3.9387 | 178.26 | 8200 | 4.3363 | 39.8629 | 12.1719 | 23.9446 | 36.345 | 115.075 |
| 3.9204 | 180.43 | 8300 | 4.3358 | 39.9738 | 12.3072 | 23.8641 | 36.4802 | 116.3 |
| 3.9418 | 182.61 | 8400 | 4.3357 | 40.1451 | 12.4144 | 24.1553 | 36.4251 | 116.025 |
| 3.9289 | 184.78 | 8500 | 4.3357 | 39.7241 | 12.0543 | 24.0752 | 36.0847 | 115.8 |
| 3.9176 | 186.96 | 8600 | 4.3358 | 39.7969 | 12.0967 | 24.123 | 36.2664 | 118.6 |
| 3.9097 | 189.13 | 8700 | 4.3356 | 39.4096 | 11.9872 | 24.0609 | 35.8662 | 117.2 |
| 3.938 | 191.3 | 8800 | 4.3354 | 39.4695 | 11.9343 | 24.0295 | 35.9372 | 117.025 |
| 3.9239 | 193.48 | 8900 | 4.3352 | 39.3231 | 12.0965 | 23.9131 | 35.9555 | 117.275 |
| 3.91 | 195.65 | 9000 | 4.3354 | 39.5932 | 12.1808 | 23.9233 | 36.0864 | 116.925 |
| 3.9234 | 197.83 | 9100 | 4.3354 | 39.5539 | 12.1134 | 23.9163 | 36.0299 | 117.225 |
| 3.9263 | 200.0 | 9200 | 4.3354 | 39.5539 | 12.1134 | 23.9163 | 36.0299 | 117.225 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AriakimTaiyo/DialoGPT-medium-Kumiko | [
"conversational"
] | conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-27T14:40:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-French123
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-French123
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AriakimTaiyo/DialoGPT-revised-Kumiko | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- EAST/autotrain-data-Rule
co2_eq_emissions: 0.0025078722090032795
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 793324440
- CO2 Emissions (in grams): 0.0025078722090032795
## Validation Metrics
- Loss: 0.31105440855026245
- Accuracy: 0.9473684210526315
- Precision: 0.9
- Recall: 1.0
- AUC: 0.9444444444444445
- F1: 0.9473684210526316
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/EAST/autotrain-Rule-793324440
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("EAST/autotrain-Rule-793324440", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("EAST/autotrain-Rule-793324440", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
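# Added for illustration (not in the original card): turn the raw logits into
# class probabilities and a predicted label.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id], probs.max().item())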
``` |
AriakimTaiyo/DialoGPT-small-Rikka | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-wikitext2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7295
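Since this is a masked-language-modeling fine-tune, the evaluation loss corresponds to a perplexity of roughly exp(1.7295) ≈ 5.64 (a standard conversion, not a number reported by the card):
```python
import math
print(math.exp(1.7295))  # ≈ 5.64, the perplexity implied by the evaluation loss
```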
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9288 | 1.0 | 2319 | 1.7729 |
| 1.8208 | 2.0 | 4638 | 1.7398 |
| 1.7888 | 3.0 | 6957 | 1.7523 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AriakimTaiyo/kumiko | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-27T15:00:41Z | ---
tags:
- generated_from_trainer
model-index:
- name: mbart-large-cc25-finetuned-en-to-ko2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-en-to-ko2
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
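The listed total train batch size follows from the per-device batch size and gradient accumulation (16 × 128 = 2048); a one-line check:
```python
train_batch_size, gradient_accumulation_steps = 16, 128
print(train_batch_size * gradient_accumulation_steps)  # 2048 == total_train_batch_size
```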
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Aries/T5_question_answering | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 5 | 2022-04-27T15:03:43Z | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- faisalahmad2/autotrain-data-nlp-text-summarization-by-faisal
co2_eq_emissions: 27.26671996544415
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 793224456
- CO2 Emissions (in grams): 27.26671996544415
## Validation Metrics
- Loss: 1.5189369916915894
- Rouge1: 38.7852
- Rouge2: 17.0785
- RougeL: 32.1082
- RougeLsum: 32.1103
- Gen Len: 18.7332
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/faisalahmad2/autotrain-nlp-text-summarization-by-faisal-793224456
``` |
Arina/Erine | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1203 | 1.0 | 766 | 2.8510 |
| 2.9255 | 2.0 | 1532 | 2.8106 |
| 2.8669 | 3.0 | 2298 | 2.7515 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ArjunKadya/HuggingFace | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | pytorch version of [jplu/tf-xlm-r-ner-40-lang](https://huggingface.co/jplu/tf-xlm-r-ner-40-lang)
|
Arnold/common_voiceha | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
widget:
- text: "I like you. </s></s> I love you."
---
## text-classification
---
|
ArpanZS/search_model | [
"joblib"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-27T17:12:33Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
Aruden/DialoGPT-medium-harrypotterall | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8244
- Accuracy: 0.6617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8981 | 1.0 | 35702 | 0.8662 | 0.6371 |
| 0.7837 | 2.0 | 71404 | 0.8244 | 0.6617 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ashagi/Ashvx | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
model-index:
- name: ArOCRv4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArOCRv4
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5811
- Cer: 0.1249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 3.103 | 1.18 | 1000 | 8.0852 | 11.5974 |
| 1.2535 | 2.36 | 2000 | 2.0400 | 0.4904 |
| 0.5682 | 3.55 | 3000 | 1.9336 | 0.2145 |
| 0.3038 | 4.73 | 4000 | 1.5811 | 0.1249 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.1
- Datasets 2.1.0
- Tokenizers 0.11.6
|
Ashkanmh/bert-base-parsbert-uncased-finetuned | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.9203318889648883
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2230
- Accuracy: 0.9205
- F1: 0.9203
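For illustration, a minimal inference sketch (not part of the auto-generated card); the hub id is an assumption based on the model name:
```python
# Hedged sketch: classify a sentence with the fine-tuned emotion model.
from transformers import pipeline

classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-emotion")  # assumed hub id
print(classifier("I'm thrilled that my paper got accepted!"))
```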
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they could be written as `TrainingArguments` is shown after the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
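As a rough illustration (not part of the original card), these settings could be expressed with the `transformers` `TrainingArguments` API roughly as follows; the output directory is a placeholder and the actual training script is not included here.
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above; not the author's original script.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```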
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3224 | 0.9055 | 0.9034 |
| No log | 2.0 | 500 | 0.2230 | 0.9205 | 0.9203 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Asuramaru/DialoGPT-small-rintohsaka | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: en
widget:
- text: "translate Bash to SQL: find -name \"feel good story\" -mtime -1"
example_title: Last day
- text: "translate Bash to SQL: find -name \"show me sports stories\" -mtime -1 -team \"Red Sox\""
example_title: Last day with filter
- text: "translate Bash to SQL: find -name \"breaking news\" -summary"
example_title: Summary
- text: "translate Bash to SQL: find -name \"breaking news\" -translate fr"
example_title: Translate to French
inference:
parameters:
max_length: 512
license: apache-2.0
library_name: txtai
---
# T5-small fine-tuned to generate txtai SQL
[T5 small](https://huggingface.co/t5-small) fine-tuned to generate [txtai](https://github.com/neuml/txtai) SQL. This model takes [Bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell))-like commands and builds txtai-compatible SQL statements.
```
find -name "feel good story" -mtime -1
find -name "show me sports stories" -mtime -1 -team \"Red Sox\"
find -name "breaking news" -summary
find -name "breaking news" -translate fr
```
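As a rough usage sketch (not part of the original card), commands like these can be fed to the model through the generic `text2text-generation` pipeline; the repository id below is assumed from the training output name and may differ.
```python
from transformers import pipeline

# Assumed Hub path -- replace with the actual repository id of this checkpoint.
bash2sql = pipeline("text2text-generation", model="NeuML/t5-small-bashsql")
result = bash2sql('translate Bash to SQL: find -name "feel good story" -mtime -1', max_length=512)
print(result[0]["generated_text"])
```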
## Custom query syntax
This model is an example of creating a custom query syntax that can be translated into SQL txtai can understand. Any query syntax can be created. This one supports Bash-like commands but a similar strategy can be deployed to support other languages. Natural language can be translated to functions, query clauses, column selection and more.
See [t5-small-txtsql](https://huggingface.co/NeuML/t5-small-txtsql) for a model that translates natural language statements into txtai SQL.
## Model training
This model was trained using scripts that can be [found here](https://github.com/neuml/txtai/tree/master/models/bashsql).
Steps to train:
```bash
python generate.py bashsql.csv
python train.py bashsql.csv t5-small-bashsql
```
|
At3ee/wav2vec2-base-timit-demo-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Atampy26/GPT-Glacier | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Atchuth/DialoGPT-small-MBOT | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2299 | 1.0 | 5533 | 1.1673 |
| 0.9564 | 2.0 | 11066 | 1.1223 |
| 0.7572 | 3.0 | 16599 | 1.1617 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ateeb/FullEmotionDetector | [
"pytorch",
"funnel",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"FunnelForSequenceClassification"
],
"model_type": "funnel",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-shuffled_take1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 11.9641
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-shuffled_take1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1788
- Rouge1: 11.9641
- Rouge2: 10.5245
- Rougel: 11.5825
- Rougelsum: 11.842
- Gen Len: 18.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2238 | 1.0 | 34008 | 0.1788 | 11.9641 | 10.5245 | 11.5825 | 11.842 | 18.9838 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ateeb/QA | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-finetuned-ADEs_model_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-ADEs_model_2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2580
- Precision: 0.5407
- Recall: 0.6311
- F1: 0.5824
- Accuracy: 0.8897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7461 | 1.0 | 640 | 0.3393 | 0.4247 | 0.5095 | 0.4633 | 0.8648 |
| 0.3632 | 2.0 | 1280 | 0.2822 | 0.4934 | 0.6035 | 0.5429 | 0.8819 |
| 0.3102 | 3.0 | 1920 | 0.2663 | 0.5218 | 0.6112 | 0.5630 | 0.8879 |
| 0.2806 | 4.0 | 2560 | 0.2604 | 0.5337 | 0.6311 | 0.5783 | 0.8890 |
| 0.2772 | 5.0 | 3200 | 0.2580 | 0.5407 | 0.6311 | 0.5824 | 0.8897 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Augustvember/WokkaBot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/afraidofwasps-dril-senn_spud/1654636210975/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1387151448203358209/HKNuKY7L_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1182478458552832000/xqEwluRJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & Will Sennett & Boots, 'with the fur'</div>
<div style="text-align: center; font-size: 14px;">@afraidofwasps-dril-senn_spud</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & Will Sennett & Boots, 'with the fur'.
| Data | wint | Will Sennett | Boots, 'with the fur' |
| --- | --- | --- | --- |
| Tweets downloaded | 3230 | 3228 | 3217 |
| Retweets | 487 | 312 | 504 |
| Short tweets | 297 | 622 | 434 |
| Tweets kept | 2446 | 2294 | 2279 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/156iladp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @afraidofwasps-dril-senn_spud's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/6g2dktc9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/6g2dktc9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/afraidofwasps-dril-senn_spud')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Augustvember/WokkaBot5 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.0191
- eval_runtime: 291.8551
- eval_samples_per_second: 37.032
- eval_steps_per_second: 2.316
- epoch: 3.0
- step: 16620
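As a rough usage illustration (not part of the original card), a SQuAD-style checkpoint like this can be queried through the standard `question-answering` pipeline; the repository id below is a placeholder.
```python
from transformers import pipeline

# Placeholder Hub path -- substitute the actual repository id of this checkpoint.
qa = pipeline("question-answering", model="<user>/albert-base-v2-finetuned-squad")
answer = qa(
    question="Which dataset was used for fine-tuning?",
    context="This ALBERT checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(answer["answer"], answer["score"])
```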
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Augustvember/WokkaBot9 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Clickbait3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clickbait3
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.05 | 50 | 0.0373 |
| No log | 0.1 | 100 | 0.0320 |
| No log | 0.15 | 150 | 0.0295 |
| No log | 0.21 | 200 | 0.0302 |
| No log | 0.26 | 250 | 0.0331 |
| No log | 0.31 | 300 | 0.0280 |
| No log | 0.36 | 350 | 0.0277 |
| No log | 0.41 | 400 | 0.0316 |
| No log | 0.46 | 450 | 0.0277 |
| 0.0343 | 0.51 | 500 | 0.0276 |
| 0.0343 | 0.56 | 550 | 0.0282 |
| 0.0343 | 0.62 | 600 | 0.0280 |
| 0.0343 | 0.67 | 650 | 0.0271 |
| 0.0343 | 0.72 | 700 | 0.0264 |
| 0.0343 | 0.77 | 750 | 0.0265 |
| 0.0343 | 0.82 | 800 | 0.0260 |
| 0.0343 | 0.87 | 850 | 0.0263 |
| 0.0343 | 0.92 | 900 | 0.0259 |
| 0.0343 | 0.97 | 950 | 0.0277 |
| 0.0278 | 1.03 | 1000 | 0.0281 |
| 0.0278 | 1.08 | 1050 | 0.0294 |
| 0.0278 | 1.13 | 1100 | 0.0256 |
| 0.0278 | 1.18 | 1150 | 0.0258 |
| 0.0278 | 1.23 | 1200 | 0.0254 |
| 0.0278 | 1.28 | 1250 | 0.0265 |
| 0.0278 | 1.33 | 1300 | 0.0252 |
| 0.0278 | 1.38 | 1350 | 0.0251 |
| 0.0278 | 1.44 | 1400 | 0.0264 |
| 0.0278 | 1.49 | 1450 | 0.0262 |
| 0.023 | 1.54 | 1500 | 0.0272 |
| 0.023 | 1.59 | 1550 | 0.0278 |
| 0.023 | 1.64 | 1600 | 0.0255 |
| 0.023 | 1.69 | 1650 | 0.0258 |
| 0.023 | 1.74 | 1700 | 0.0262 |
| 0.023 | 1.79 | 1750 | 0.0250 |
| 0.023 | 1.85 | 1800 | 0.0253 |
| 0.023 | 1.9 | 1850 | 0.0271 |
| 0.023 | 1.95 | 1900 | 0.0248 |
| 0.023 | 2.0 | 1950 | 0.0258 |
| 0.0224 | 2.05 | 2000 | 0.0252 |
| 0.0224 | 2.1 | 2050 | 0.0259 |
| 0.0224 | 2.15 | 2100 | 0.0254 |
| 0.0224 | 2.21 | 2150 | 0.0260 |
| 0.0224 | 2.26 | 2200 | 0.0254 |
| 0.0224 | 2.31 | 2250 | 0.0266 |
| 0.0224 | 2.36 | 2300 | 0.0258 |
| 0.0224 | 2.41 | 2350 | 0.0258 |
| 0.0224 | 2.46 | 2400 | 0.0256 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Augustvember/wokka2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-04-28T02:50:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: Clickbait5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clickbait5
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.04 | 50 | 0.0258 |
| No log | 0.08 | 100 | 0.0269 |
| No log | 0.12 | 150 | 0.0259 |
| No log | 0.16 | 200 | 0.0260 |
| No log | 0.21 | 250 | 0.0267 |
| No log | 0.25 | 300 | 0.0276 |
| No log | 0.29 | 350 | 0.0284 |
| No log | 0.33 | 400 | 0.0270 |
| No log | 0.37 | 450 | 0.0269 |
| 0.0195 | 0.41 | 500 | 0.0260 |
| 0.0195 | 0.45 | 550 | 0.0284 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Augustvember/wokkabottest2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language: en
license: mit
pipeline_tag: text-generation
---
# AID-Neo-125M
## Model description
This model was inspired by [KoboldAI's GPT-Neo-125M-AID (Mia) model](https://huggingface.co/KoboldAI/GPT-Neo-125M-AID) and fine-tuned on the same dataset: the AI Dungeon dataset (`text_adventures.txt`). This was done to fix a possible oversight in the original model, which was trained with [an unfortunate bug](https://github.com/EricFillion/happy-transformer/issues/283). You could technically consider it a "retraining" of the same model using different software.
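As a rough usage sketch (not stated in the original card), the checkpoint can be loaded with the standard text-generation pipeline; the repository id below is a placeholder.
```python
from transformers import pipeline

# Placeholder Hub path -- replace with the actual repository id of this checkpoint.
generator = pipeline("text-generation", model="<user>/AID-Neo-125M")
print(generator("You enter the dungeon and", max_length=64)[0]["generated_text"])
```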
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0 |
Ayham/distilbert_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | # What is SamSum Bot?
This is a model fine-tuned on the [SamSum dataset](https://huggingface.co/datasets/samsum).
However, instead of training the system to summarize conversations, the model is trained to predict a conversation given a summary.
The prompt needs to be in the following form
```text
A partial summary of the conversation is:
{summary}
With the dialogue being:
{dialogue}
```
where *{summary}* is a text as in
```text
John went out to buy groceries. He meets Jane on the way and they talk about the weather.
```
and the *{dialogue}* needs to be structured with speaking lines preceded by the speaking character
```text
John: Oh hi Jane.
Jane: Nice to see you?
John: The weather looks nice today
Jane: [PREDICTION]
```
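For convenience, a small helper like the following (not part of the original card) can assemble prompts in the format above; the exact whitespace the model expects is assumed to match the template literally.
```python
def build_samsum_prompt(summary, dialogue_lines):
    # Join the speaking lines and wrap them in the template shown above.
    dialogue = "\n".join(dialogue_lines)
    return (
        "A partial summary of the conversation is:\n"
        f"{summary}\n"
        "With the dialogue being:\n"
        f"{dialogue}\n"
    )

prompt = build_samsum_prompt(
    "John went out to buy groceries. He meets Jane on the way and they talk about the weather.",
    ["John: Oh hi Jane.", "Jane: Nice to see you?", "John: The weather looks nice today", "Jane:"],
)
print(prompt)
```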
The system is based on GPT-J-6B by EleutherAI, [quantized by Hivemind](https://huggingface.co/hivemind/gpt-j-6B-8bit). It has been fine-tuned according to the [LoRA method](https://arxiv.org/abs/2106.09685).
A simple back-end is available in [this repo](https://github.com/fractalego/samsum-bot), where the model is served using Torchserve.
A terminal-like front-end interface is available [here](https://github.com/fractalego/samsumbot_client).
This interface is the one used in my website [http://fractalego.io](http://fractalego.io).
|
Ayham/roberta_roberta_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
---
# OFA-large
## Introduction
This is the **large** version of the OFA pretrained model. OFA is a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) in a simple sequence-to-sequence learning framework.
The directory includes 4 files, namely `config.json`, which contains the model configuration, `vocab.json` and `merge.txt` for the OFA tokenizer, and lastly `pytorch_model.bin`, which contains the model weights. There is no need to worry about a mismatch between Fairseq and transformers, since we have already addressed the issue.
## How to use
To use it in transformers, please refer to https://github.com/OFA-Sys/OFA/tree/feature/add_transformers. Install transformers and download the models as shown below.
```bash
git clone --single-branch --branch feature/add_transformers https://github.com/OFA-Sys/OFA.git
pip install OFA/transformers/
git clone https://huggingface.co/OFA-Sys/OFA-large
```
Afterwards, set `ckpt_dir` to the path of the downloaded OFA-large checkpoint, and prepare an image for the test example below. Also, ensure that you have Pillow and torchvision in your environment.
```python
>>> from PIL import Image
>>> from torchvision import transforms
>>> import torch
>>> from transformers import OFATokenizer, OFAModel
>>> from generate import sequence_generator
>>> mean, std = [0.5, 0.5, 0.5], [0.5, 0.5, 0.5]
>>> resolution = 480
>>> patch_resize_transform = transforms.Compose([
lambda image: image.convert("RGB"),
transforms.Resize((resolution, resolution), interpolation=Image.BICUBIC),
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)
])
>>> tokenizer = OFATokenizer.from_pretrained(ckpt_dir)
>>> txt = " what does the image describe?"
>>> inputs = tokenizer([txt], return_tensors="pt").input_ids
>>> img = Image.open(path_to_image)
>>> patch_img = patch_resize_transform(img).unsqueeze(0)
# using the generator of fairseq version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=True)
>>> generator = sequence_generator.SequenceGenerator(
tokenizer=tokenizer,
beam_size=5,
max_len_b=16,
min_len=0,
no_repeat_ngram_size=3,
)
>>> data = {}
>>> data["net_input"] = {"input_ids": inputs, 'patch_images': patch_img, 'patch_masks':torch.tensor([True])}
>>> gen_output = generator.generate([model], data)
>>> gen = [gen_output[i][0]["tokens"] for i in range(len(gen_output))]
# using the generator of huggingface version
>>> model = OFAModel.from_pretrained(ckpt_dir, use_cache=False)
>>> gen = model.generate(inputs, patch_images=patch_img, num_beams=5, no_repeat_ngram_size=3)
>>> print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```
|
Ayham/xlnet_roberta_new_summarization_cnn_dailymail | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: hr
datasets:
- parlaspeech-hr
tags:
- audio
- automatic-speech-recognition
- parlaspeech
widget:
- example_title: example 1
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/1800.m4a
- example_title: example 2
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020578b.flac.wav
- example_title: example 3
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr/raw/main/00020570a.flac.wav
---
# wav2vec2-large-slavic-parlaspeech-hr
This model for Croatian ASR is based on the [facebook/wav2vec2-large-slavic-voxpopuli-v2 model](https://huggingface.co/facebook/wav2vec2-large-slavic-voxpopuli-v2) and was fine-tuned with 300 hours of recordings and transcripts from the ASR Croatian parliament dataset [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494).
If you use this model, please cite the following paper:
Nikola Ljubešić, Danijel Koržinek, Peter Rupnik, Ivo-Pavao Jazbec. ParlaSpeech-HR -- a freely available ASR dataset for Croatian bootstrapped from the ParlaMint corpus. http://www.lrec-conf.org/proceedings/lrec2022/workshops/ParlaCLARINIII/pdf/2022.parlaclariniii-1.16.pdf
## Metrics
Evaluation is performed on the dev and test portions of the [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494) dataset.
|split|CER|WER|
|---|---|---|
|dev|0.0311|0.0921|
|test|0.0222|0.0679|
There are multiple models available, and in terms of CER and WER, the best-performing model is [wav2vec2-large-slavic-parlaspeech-hr-lm](https://huggingface.co/classla/wav2vec2-large-slavic-parlaspeech-hr-lm).
## Usage in `transformers`
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import soundfile as sf
import torch
import os
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained(
"classla/wav2vec2-large-slavic-parlaspeech-hr")
model = Wav2Vec2ForCTC.from_pretrained("classla/wav2vec2-large-slavic-parlaspeech-hr")
# download the example wav files:
os.system("wget https://huggingface.co/classla/wav2vec2-large-slavic-parlaspeech-hr/raw/main/00020570a.flac.wav")
# read the wav file
speech, sample_rate = sf.read("00020570a.flac.wav")
input_values = processor(speech, sampling_rate=sample_rate, return_tensors="pt").input_values.to(device)
# remove the raw wav file
os.system("rm 00020570a.flac.wav")
# retrieve logits
logits = model.to(device)(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0]).lower()
# transcription: 'veliki broj poslovnih subjekata posluje sa minusom velik dio'
```
## Training hyperparameters
In fine-tuning, the following arguments were used:
| arg | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 16 |
| `gradient_accumulation_steps` | 4 |
| `num_train_epochs` | 8 |
| `learning_rate` | 3e-4 |
| `warmup_steps` | 500 | |
Aymene/opus-mt-en-ro-finetuned-en-to-ro | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.7696078431372549
- name: Recall
type: recall
value: 0.839572192513369
- name: F1
type: f1
value: 0.8030690537084398
- name: Accuracy
type: accuracy
value: 0.8847040737893928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [hfl/chinese-bert-wwm-ext](https://huggingface.co/hfl/chinese-bert-wwm-ext) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7372
- Precision: 0.7696
- Recall: 0.8396
- F1: 0.8031
- Accuracy: 0.8847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 2 | 1.9496 | 0.0 | 0.0 | 0.0 | 0.4889 |
| No log | 2.0 | 4 | 1.6137 | 0.0 | 0.0 | 0.0 | 0.4919 |
| No log | 3.0 | 6 | 1.3906 | 0.0 | 0.0 | 0.0 | 0.5650 |
| No log | 4.0 | 8 | 1.2273 | 0.0652 | 0.0481 | 0.0554 | 0.6856 |
| No log | 5.0 | 10 | 1.0565 | 0.2051 | 0.1711 | 0.1866 | 0.7125 |
| No log | 6.0 | 12 | 0.9150 | 0.5094 | 0.4332 | 0.4682 | 0.7540 |
| No log | 7.0 | 14 | 0.8051 | 0.5988 | 0.5187 | 0.5559 | 0.7679 |
| No log | 8.0 | 16 | 0.7151 | 0.6707 | 0.5989 | 0.6328 | 0.7763 |
| No log | 9.0 | 18 | 0.6334 | 0.6685 | 0.6364 | 0.6521 | 0.8086 |
| No log | 10.0 | 20 | 0.5693 | 0.6957 | 0.6845 | 0.6900 | 0.8201 |
| No log | 11.0 | 22 | 0.5192 | 0.7166 | 0.7166 | 0.7166 | 0.8363 |
| No log | 12.0 | 24 | 0.4736 | 0.7135 | 0.7326 | 0.7230 | 0.8524 |
| No log | 13.0 | 26 | 0.4448 | 0.6938 | 0.7754 | 0.7323 | 0.8555 |
| No log | 14.0 | 28 | 0.4280 | 0.7177 | 0.8021 | 0.7576 | 0.8586 |
| No log | 15.0 | 30 | 0.4179 | 0.7588 | 0.8075 | 0.7824 | 0.8663 |
| No log | 16.0 | 32 | 0.4214 | 0.7356 | 0.8182 | 0.7747 | 0.8593 |
| No log | 17.0 | 34 | 0.4070 | 0.7391 | 0.8182 | 0.7766 | 0.8616 |
| No log | 18.0 | 36 | 0.4112 | 0.7586 | 0.8235 | 0.7897 | 0.8724 |
| No log | 19.0 | 38 | 0.4530 | 0.7330 | 0.8075 | 0.7684 | 0.8693 |
| No log | 20.0 | 40 | 0.4719 | 0.7766 | 0.8182 | 0.7969 | 0.8732 |
| No log | 21.0 | 42 | 0.4886 | 0.7260 | 0.8075 | 0.7646 | 0.8632 |
| No log | 22.0 | 44 | 0.5007 | 0.7217 | 0.8182 | 0.7669 | 0.8701 |
| No log | 23.0 | 46 | 0.5169 | 0.7321 | 0.8182 | 0.7727 | 0.8762 |
| No log | 24.0 | 48 | 0.5531 | 0.7238 | 0.8128 | 0.7657 | 0.8724 |
| No log | 25.0 | 50 | 0.5895 | 0.7311 | 0.8289 | 0.7769 | 0.8655 |
| No log | 26.0 | 52 | 0.5482 | 0.7330 | 0.8075 | 0.7684 | 0.8778 |
| No log | 27.0 | 54 | 0.5361 | 0.7488 | 0.8128 | 0.7795 | 0.8832 |
| No log | 28.0 | 56 | 0.5378 | 0.7427 | 0.8182 | 0.7786 | 0.8847 |
| No log | 29.0 | 58 | 0.5543 | 0.7371 | 0.8396 | 0.7850 | 0.8824 |
| No log | 30.0 | 60 | 0.5564 | 0.7585 | 0.8396 | 0.7970 | 0.8839 |
| No log | 31.0 | 62 | 0.5829 | 0.7235 | 0.8396 | 0.7772 | 0.8724 |
| No log | 32.0 | 64 | 0.5974 | 0.7269 | 0.8396 | 0.7792 | 0.8716 |
| No log | 33.0 | 66 | 0.5750 | 0.7610 | 0.8342 | 0.7959 | 0.8839 |
| No log | 34.0 | 68 | 0.5887 | 0.7723 | 0.8342 | 0.8021 | 0.8878 |
| No log | 35.0 | 70 | 0.6219 | 0.7441 | 0.8396 | 0.7889 | 0.8747 |
| No log | 36.0 | 72 | 0.6676 | 0.7269 | 0.8396 | 0.7792 | 0.8632 |
| No log | 37.0 | 74 | 0.6517 | 0.7452 | 0.8289 | 0.7848 | 0.8693 |
| No log | 38.0 | 76 | 0.6346 | 0.7828 | 0.8289 | 0.8052 | 0.8862 |
| No log | 39.0 | 78 | 0.6239 | 0.7839 | 0.8342 | 0.8083 | 0.8855 |
| No log | 40.0 | 80 | 0.6360 | 0.7277 | 0.8289 | 0.775 | 0.8762 |
| No log | 41.0 | 82 | 0.6645 | 0.7336 | 0.8396 | 0.7830 | 0.8701 |
| No log | 42.0 | 84 | 0.6611 | 0.7406 | 0.8396 | 0.7870 | 0.8747 |
| No log | 43.0 | 86 | 0.6707 | 0.7488 | 0.8289 | 0.7868 | 0.8762 |
| No log | 44.0 | 88 | 0.6901 | 0.7277 | 0.8289 | 0.775 | 0.8709 |
| No log | 45.0 | 90 | 0.6911 | 0.7393 | 0.8342 | 0.7839 | 0.8709 |
| No log | 46.0 | 92 | 0.6540 | 0.7761 | 0.8342 | 0.8041 | 0.8878 |
| No log | 47.0 | 94 | 0.6381 | 0.7761 | 0.8342 | 0.8041 | 0.8916 |
| No log | 48.0 | 96 | 0.6285 | 0.7745 | 0.8449 | 0.8082 | 0.8885 |
| No log | 49.0 | 98 | 0.6449 | 0.7692 | 0.8556 | 0.8101 | 0.8862 |
| No log | 50.0 | 100 | 0.6809 | 0.7442 | 0.8556 | 0.7960 | 0.8732 |
| No log | 51.0 | 102 | 0.6898 | 0.7395 | 0.8503 | 0.7910 | 0.8716 |
| No log | 52.0 | 104 | 0.6897 | 0.75 | 0.8503 | 0.7970 | 0.8762 |
| No log | 53.0 | 106 | 0.6714 | 0.7656 | 0.8556 | 0.8081 | 0.8855 |
| No log | 54.0 | 108 | 0.6612 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 55.0 | 110 | 0.6583 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 56.0 | 112 | 0.6648 | 0.7692 | 0.8556 | 0.8101 | 0.8855 |
| No log | 57.0 | 114 | 0.6757 | 0.7656 | 0.8556 | 0.8081 | 0.8832 |
| No log | 58.0 | 116 | 0.6803 | 0.7656 | 0.8556 | 0.8081 | 0.8839 |
| No log | 59.0 | 118 | 0.6834 | 0.7692 | 0.8556 | 0.8101 | 0.8862 |
| No log | 60.0 | 120 | 0.6889 | 0.7833 | 0.8503 | 0.8154 | 0.8878 |
| No log | 61.0 | 122 | 0.6963 | 0.7772 | 0.8396 | 0.8072 | 0.8862 |
| No log | 62.0 | 124 | 0.7057 | 0.7772 | 0.8396 | 0.8072 | 0.8862 |
| No log | 63.0 | 126 | 0.7212 | 0.7910 | 0.8503 | 0.8196 | 0.8862 |
| No log | 64.0 | 128 | 0.7334 | 0.7833 | 0.8503 | 0.8154 | 0.8824 |
| No log | 65.0 | 130 | 0.7398 | 0.7833 | 0.8503 | 0.8154 | 0.8801 |
| No log | 66.0 | 132 | 0.7400 | 0.7833 | 0.8503 | 0.8154 | 0.8809 |
| No log | 67.0 | 134 | 0.7345 | 0.7783 | 0.8449 | 0.8103 | 0.8855 |
| No log | 68.0 | 136 | 0.7270 | 0.79 | 0.8449 | 0.8165 | 0.8870 |
| No log | 69.0 | 138 | 0.7245 | 0.7839 | 0.8342 | 0.8083 | 0.8862 |
| No log | 70.0 | 140 | 0.7260 | 0.7868 | 0.8289 | 0.8073 | 0.8847 |
| No log | 71.0 | 142 | 0.7275 | 0.7817 | 0.8235 | 0.8021 | 0.8839 |
| No log | 72.0 | 144 | 0.7283 | 0.7778 | 0.8235 | 0.8000 | 0.8832 |
| No log | 73.0 | 146 | 0.7296 | 0.78 | 0.8342 | 0.8062 | 0.8847 |
| No log | 74.0 | 148 | 0.7344 | 0.7734 | 0.8396 | 0.8051 | 0.8832 |
| No log | 75.0 | 150 | 0.7314 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 76.0 | 152 | 0.7299 | 0.7794 | 0.8503 | 0.8133 | 0.8832 |
| No log | 77.0 | 154 | 0.7282 | 0.7794 | 0.8503 | 0.8133 | 0.8839 |
| No log | 78.0 | 156 | 0.7252 | 0.7783 | 0.8449 | 0.8103 | 0.8839 |
| No log | 79.0 | 158 | 0.7216 | 0.7756 | 0.8503 | 0.8112 | 0.8855 |
| No log | 80.0 | 160 | 0.7194 | 0.7756 | 0.8503 | 0.8112 | 0.8870 |
| No log | 81.0 | 162 | 0.7191 | 0.7756 | 0.8503 | 0.8112 | 0.8878 |
| No log | 82.0 | 164 | 0.7201 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 83.0 | 166 | 0.7211 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 84.0 | 168 | 0.7222 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 85.0 | 170 | 0.7220 | 0.7696 | 0.8396 | 0.8031 | 0.8862 |
| No log | 86.0 | 172 | 0.7239 | 0.7734 | 0.8396 | 0.8051 | 0.8870 |
| No log | 87.0 | 174 | 0.7291 | 0.7772 | 0.8396 | 0.8072 | 0.8847 |
| No log | 88.0 | 176 | 0.7344 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 89.0 | 178 | 0.7373 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 90.0 | 180 | 0.7391 | 0.7707 | 0.8449 | 0.8061 | 0.8832 |
| No log | 91.0 | 182 | 0.7403 | 0.7745 | 0.8449 | 0.8082 | 0.8824 |
| No log | 92.0 | 184 | 0.7412 | 0.7745 | 0.8449 | 0.8082 | 0.8832 |
| No log | 93.0 | 186 | 0.7417 | 0.7707 | 0.8449 | 0.8061 | 0.8832 |
| No log | 94.0 | 188 | 0.7402 | 0.7745 | 0.8449 | 0.8082 | 0.8839 |
| No log | 95.0 | 190 | 0.7389 | 0.7745 | 0.8449 | 0.8082 | 0.8847 |
| No log | 96.0 | 192 | 0.7381 | 0.7696 | 0.8396 | 0.8031 | 0.8839 |
| No log | 97.0 | 194 | 0.7377 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 98.0 | 196 | 0.7374 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 99.0 | 198 | 0.7372 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
| No log | 100.0 | 200 | 0.7372 | 0.7696 | 0.8396 | 0.8031 | 0.8847 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ayu/Shiriro | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-28T11:31:35Z | ---
language: hr
datasets:
- parlaspeech-hr
tags:
- audio
- automatic-speech-recognition
- parlaspeech
widget:
- example_title: example 1
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr-lm/raw/main/1800.m4a
- example_title: example 2
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr-lm/raw/main/00020578b.flac.wav
- example_title: example 3
src: https://huggingface.co/classla/wav2vec2-xls-r-parlaspeech-hr-lm/raw/main/00020570a.flac.wav
---
# wav2vec2-xls-r-parlaspeech-hr-lm
This model for Croatian ASR is based on the [facebook/wav2vec2-xls-r-300m model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) and was fine-tuned with 300 hours of recordings and transcripts from the ASR Croatian parliament dataset [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494).
If you use this model, please cite the following paper:
```
@inproceedings{ljubevsic2022parlaspeech,
title={ParlaSpeech-HR-a Freely Available ASR Dataset for Croatian Bootstrapped from the ParlaMint Corpus},
author={Ljube{\v{s}}i{\'c}, Nikola and Kor{\v{z}}inek, Danijel and Rupnik, Peter and Jazbec, Ivo-Pavao},
booktitle={Proceedings of the Workshop ParlaCLARIN III within the 13th Language Resources and Evaluation Conference},
pages={111--116},
year={2022},
url={http://www.lrec-conf.org/proceedings/lrec2022/workshops/ParlaCLARINIII/pdf/2022.parlaclariniii-1.16.pdf}
}
```
## Metrics
Evaluation is performed on the dev and test portions of the [ParlaSpeech-HR v1.0](http://hdl.handle.net/11356/1494) dataset.
|split|CER|WER|
|---|---|---|
|dev|0.0448|0.1129|
|test|0.0363|0.0985|
There are multiple models available, and in terms of CER and WER, the best-performing model is [wav2vec2-large-slavic-parlaspeech-hr-lm](https://huggingface.co/classla/wav2vec2-large-slavic-parlaspeech-hr-lm).
## Usage in `transformers`
Tested with `transformers==4.18.0`, `torch==1.11.0`, and `SoundFile==0.10.3.post1`.
```python
from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC
import soundfile as sf
import torch
import os
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# load model and tokenizer
processor = Wav2Vec2ProcessorWithLM.from_pretrained(
"classla/wav2vec2-xls-r-parlaspeech-hr-lm")
model = Wav2Vec2ForCTC.from_pretrained("classla/wav2vec2-xls-r-parlaspeech-hr-lm")
# download the example wav files:
os.system("wget https://huggingface.co/classla/wav2vec2-large-slavic-parlaspeech-hr/raw/main/00020570a.flac.wav")
# read the wav file
speech, sample_rate = sf.read("00020570a.flac.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.numpy()).text[0]
# remove the raw wav file
os.system("rm 00020570a.flac.wav")
transcription
# transcription: 'velik broj poslovnih subjekata posluje sa minusom velik dio'
```
## Training hyperparameters
In fine-tuning, the following arguments were used:
| arg | value |
|-------------------------------|-------|
| `per_device_train_batch_size` | 16 |
| `gradient_accumulation_steps` | 4 |
| `num_train_epochs` | 8 |
| `learning_rate` | 3e-4 |
| `warmup_steps` | 500 | |
AyushPJ/ai-club-inductions-21-nlp-ALBERT | [
"pytorch",
"albert",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-04-28T11:37:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.414 | 1.0 | 10 | 4.7780 |
| 4.8623 | 2.0 | 20 | 4.7064 |
| 4.6726 | 3.0 | 30 | 4.5646 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
BE/demo-sentiment2021 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- efficient
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "maronn qualcuno mi spieg' CHECCOSA SUCCEDE?!?!"
- text: "wellaaaaaaa, ma fraté sei proprio troppo simpatiko, grazieeee!!"
- text: "nn capisco xke tt i ragazzi lo fanno"
- text: "IT5 è SUPERMEGA BRAVISSIMO a capire tt il vernacolo italiano!!!"
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-informal-to-formal
results:
- task:
type: formality-style-transfer
name: "Informal-to-formal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.430
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.221
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.408
name: "Avg. Test RougeL"
- type: bertscore
value: 0.630
name: "Avg. Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for Informal-to-formal Style Transfer 🧐
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
i2f = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-informal-to-formal')
i2f("nn capisco xke tt i ragazzi lo fanno")
>>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
BOON/electra_qa | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 803.50 +/- 283.74
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **PPO** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
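Until the official snippet is added, the sketch below shows one common way to load and run a stable-baselines3 PPO checkpoint from the Hub. The repository id and checkpoint filename are placeholders and must be replaced with this repo's actual values:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Placeholders: point these at this repository's actual id and checkpoint filename.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="ppo-SpaceInvadersNoFrameskip-v4.zip")
model = PPO.load(checkpoint)

# Standard SB3 Atari preprocessing with 4 stacked frames.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```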
|
BSC-LT/roberta-base-biomedical-clinical-es | [
"pytorch",
"roberta",
"fill-mask",
"es",
"arxiv:2109.03570",
"arxiv:2109.07765",
"transformers",
"biomedical",
"clinical",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- efficient
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."
- text: "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."
- text: "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.354
name: "Test Rouge1"
- type: rouge2
value: 0.172
name: "Test Rouge2"
- type: rougeL
value: 0.278
name: "Test RougeL"
- type: bertscore
value: 0.410
name: "Avg. Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for News Summarization ✂️🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) that improves performance while reducing the parameter count. The Small-EL32 variant replaces the original encoder of the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/it5-efficient-small-el32-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-news-summarization")
```
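Summary length and decoding behaviour can be steered by passing generation keyword arguments through the pipeline call; the values below are illustrative only:
```python
from transformers import pipeline

newsum = pipeline("summarization", model="it5/it5-efficient-small-el32-news-summarization")

article = "Dal 31 maggio è infine partita la piattaforma ITsART..."  # full article text as in the example above

# Generation kwargs are forwarded to model.generate(); the pipeline returns one dict per input.
result = newsum(article, max_length=64, min_length=16, num_beams=4, no_repeat_ngram_size=3)
print(result[0])
```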
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
BSC-LT/roberta-base-biomedical-es | [
"pytorch",
"roberta",
"fill-mask",
"es",
"arxiv:2109.03570",
"arxiv:2109.07765",
"transformers",
"biomedical",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 161 | null | ---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- Italian
- efficient
- sequence-to-sequence
- squad_it
- text2text-question-answering
- text2text-generation
widget:
- text: "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?"
- text: "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?"
- text: "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?"
- text: "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?"
metrics:
- f1
- exact-match
model-index:
- name: it5-efficient-small-el32-question-answering
results:
- task:
type: question-answering
name: "Question Answering"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: f1
value: 0.747
name: "Test F1"
- type: exact-match
value: 0.645
name: "Test Exact Match"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for Question Answering ⁉️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qa = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-question-answering')
qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?")
>>> [{"generated_text": "ultimo massimo glaciale"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-question-answering")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-question-answering")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
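For reference, these settings correspond roughly to the `Seq2SeqTrainingArguments` sketch below. This is not the original fine-tuning script: the output directory is a placeholder, and the data preprocessing and `Seq2SeqTrainer` wiring are omitted:
```python
from transformers import Seq2SeqTrainingArguments

# Mirror of the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="it5-efficient-small-el32-question-answering",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=7.0,
)
```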
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
BSC-LT/roberta-base-bne-capitel-ner-plus | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-04-28T14:12:07Z | ---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- Italian
- efficient
- sequence-to-sequence
- question-generation
- squad_it
- text2text-generation
widget:
- text: "Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia"
- text: "Il 14 aprile 2011, ABC ha annullato le lunghe opere di sapone All My Children e One Life to Live dopo 41 e 43 anni in onda, rispettivamente (in seguito al contraccolpo dei tifosi, ABC ha venduto i diritti ad entrambi gli spettacoli a Prospect Park, che alla fine ha rilanciato i saponi su Hulu per un' ulteriore stagione nel 2013 e con entrambe le società che si citano in giudizio per accuse di interferenza con il processo di rilancio degli spettacoli, mancato pagamento delle tasse di licenza. Il talk/lifestyle show che ha sostituito One Life to Live, The Revolution, non è riuscito a generare giudizi soddisfacenti ed è stato a sua volta annullato dopo soli sette mesi. La stagione 2011-12 ha visto l' ABC cadere al quarto posto nel 18-49 demografico nonostante rinnovando una manciata di nuovi spettacoli (compresi i drammi matricole Scandal, Revenge e Once Upon a Time) per la seconda stagione. Risposta: Hulu"
- text: "L' American Broadcasting Company (ABC) (stlized nel suo logo come abc dal 1957) è una rete televisiva commerciale americana trasmissione televisiva che è di proprietà del Disney-ABC Television Group, una controllata della divisione Disney Media Networks di The Walt Disney Company. La rete fa parte delle grandi reti televisive Big Three. La rete ha sede a Columbus Avenue e West 66th Street a Manhattan, con ulteriori uffici e stabilimenti di produzione a New York City, Los Angeles e Burbank, California. Risposta: Manhattan"
- text: "La disobbedienza civile non rivoluzionaria è una semplice disobbedienza delle leggi sulla base del fatto che sono giudicate \"sbagliate\" da una coscienza individuale, o come parte di uno sforzo per rendere alcune leggi inefficaci, per causarne l' abrogazione, o per esercitare pressioni per ottenere i propri desideri politici su qualche altra questione. La disobbedienza civile rivoluzionaria è più che altro un tentativo attivo di rovesciare un governo (o di cambiare le tradizioni culturali, i costumi sociali, le credenze religiose, ecc. La rivoluzione non deve necessariamente essere politica, cioè \"rivoluzione culturale\", implica semplicemente un cambiamento radicale e diffuso in una sezione del tessuto sociale). Gli atti di Gandhi sono stati descritti come disobbedienza civile rivoluzionaria. È stato affermato che gli ungheresi sotto Ferenc Deák hanno diretto una disobbedienza civile rivoluzionaria contro il governo austriaco. Thoreau ha anche scritto di disobbedienza civile realizzando \"rivoluzione pacifica\". Howard Zinn, Harvey Wheeler e altri hanno identificato il diritto sposato nella Dichiarazione d' Indipendenza di \"alterare o abolire\" un governo ingiusto come principio di disobbedienza civile. Risposta: Ferenc Deák"
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-question-generation
results:
- task:
type: question-generation
name: "Question generation"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: rouge1
value: 0.382
name: "Test Rouge1"
- type: rouge2
value: 0.201
name: "Test Rouge2"
- type: rougeL
value: 0.357
name: "Test RougeL"
- type: bertscore
value: 0.517
name: "Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for Question Generation 💭 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) that improves performance while reducing the parameter count. The Small-EL32 variant replaces the original encoder of the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
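The deep-narrow layout can be inspected directly from the released configuration; a small sketch, assuming the standard `T5Config` attribute names:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("it5/it5-efficient-small-el32-question-generation")
# T5Config exposes the encoder depth as num_layers and the decoder depth as num_decoder_layers.
print(config.num_layers, config.num_decoder_layers, config.d_model)
```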
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-question-generation')
qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una "grande pestilenza nell\' aria". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola "peste" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia")
>>> [{"generated_text": "Per chi è stato redatto il referto medico?"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-question-generation")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
BSC-LT/roberta-base-bne-capitel-ner | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- efficient
- ilgiornale
- repubblica
- style-transfer
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
- headline-headline-consistency-classifier
- headline-article-consistency-classifier
model-index:
- name: it5-efficient-small-el32-ilgiornale-to-repubblica
results:
- task:
type: headline-style-transfer-ilgiornale-to-repubblica
name: "Headline style transfer (Il Giornale to Repubblica)"
dataset:
type: gsarti/change_it
name: "CHANGE-IT"
metrics:
- type: rouge1
value: 0.286
name: "Test Rouge1"
- type: rouge2
value: 0.099
name: "Test Rouge2"
- type: rougeL
value: 0.253
name: "Test RougeL"
- type: bertscore
value: 0.422
name: "Test BERTScore"
- type: headline-headline-consistency-classifier
value: 0.836
name: "Test Headline-Headline Consistency Accuracy"
- type: headline-article-consistency-classifier
value: 0.763
name: "Test Headline-Article Consistency Accuracy"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Il Giornale to Repubblica) 🗞️➡️🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) that improves performance while reducing the parameter count. The Small-EL32 variant replaces the original encoder of the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
g2r = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-ilgiornale-to-repubblica')
g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-ilgiornale-to-repubblica")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-ilgiornale-to-repubblica")
```
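On a machine with a GPU, inference can be sped up by pinning the pipeline to a CUDA device; a minimal sketch (generation settings are illustrative):
```python
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1  # -1 keeps the pipeline on CPU
g2r = pipeline(
    "text2text-generation",
    model="it5/it5-efficient-small-el32-ilgiornale-to-repubblica",
    device=device,
)

article = "Arriva dal Partito nazionalista basco (Pnv) la conferma..."  # full article as in the example above
print(g2r(article, num_beams=4, max_length=64)[0]["generated_text"])
```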
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
BSC-LT/roberta-base-bne-capitel-pos | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"pos",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
language:
- it
license: apache-2.0
datasets:
- wits
tags:
- italian
- sequence-to-sequence
- wikipedia
- summarization
- efficient
- wits
widget:
- text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati."
- text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. 
Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. "
- text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
- text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. "
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-wiki-summarization
results:
- task:
type: wiki-summarization
name: "Wikipedia Summarization"
dataset:
type: wits
name: "WITS"
metrics:
- type: rouge1
value: 0.346
name: "Test Rouge1"
- type: rouge2
value: 0.196
name: "Test Rouge2"
- type: rougeL
value: 0.314
name: "Test RougeL"
- type: bertscore
value: 0.513
name: "Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for Wikipedia Summarization 📑 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) that improves performance while reducing the parameter count. The Small-EL32 variant replaces the original encoder of the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
hg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-wiki-summarization')
hg("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. "
- text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo.")
>>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-wiki-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-wiki-summarization")
```
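For finer control over decoding, the tokenizer and model loaded above can be used directly. A minimal sketch (the beam size and length limits below are illustrative choices, not the settings used in the paper):
```python
article = "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. ..."  # full article body
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```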
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
BSC-LT/roberta-base-bne-sqac | [
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- efficient
- newspaper
- ilgiornale
- repubblica
- style-transfer
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
- headline-headline-consistency-classifier
- headline-article-consistency-classifier
model-index:
- name: it5-efficient-small-el32-repubblica-to-ilgiornale
results:
- task:
type: headline-style-transfer-repubblica-to-ilgiornale
name: "Headline style transfer (Repubblica to Il Giornale)"
dataset:
type: gsarti/change_it
name: "CHANGE-IT"
metrics:
- type: rouge1
value: 0.269
name: "Test Rouge1"
- type: rouge2
value: 0.087
name: "Test Rouge2"
- type: rougeL
value: 0.235
name: "Test RougeL"
- type: bertscore
value: 0.395
name: "Test BERTScore"
- type: headline-headline-consistency-classifier
value: 0.808
name: "Test Headline-Headline Consistency Accuracy"
- type: headline-article-consistency-classifier
value: 0.810
name: "Test Headline-Article Consistency Accuracy"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline style transfer in the Repubblica to Il Giornale direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) that improves performance while reducing parameter count. The Small-EL32 variant replaces the original encoder of the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
The model is trained to generate a headline in the style of Il Giornale from the full body of an article written in the style of Repubblica. Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as follows:
```python
from transformers import pipeline
r2g = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-repubblica-to-ilgiornale')
r2g("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-repubblica-to-ilgiornale")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-repubblica-to-ilgiornale")
```
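The loaded tokenizer and model can also be used outside the pipeline API, e.g. for batched generation on GPU. A minimal sketch (the article strings, batch handling and generation settings are illustrative assumptions):
```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

articles = ["<full text of a Repubblica-style article>", "<another article>"]  # placeholders
batch = tokenizer(articles, return_tensors="pt", padding=True, truncation=True, max_length=512).to(device)
with torch.no_grad():
    headline_ids = model.generate(**batch, num_beams=4, max_length=64)
print(tokenizer.batch_decode(headline_ids, skip_special_tokens=True))
```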
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
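As a rough illustration only, the hyperparameters above map onto Hugging Face `Seq2SeqTrainingArguments` approximately as follows (the output directory is a placeholder; this is a sketch, not the original training script):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./headline-style-transfer",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
)
```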
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
BSC-LT/roberta-base-bne | [
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:bne",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 594 | null | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-ShreeGanesh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-ShreeGanesh
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BW/TEST | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
Babelscape/rebel-large | [
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"en",
"dataset:Babelscape/rebel-dataset",
"transformers",
"seq2seq",
"relation-extraction",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"has_space"
] | text2text-generation | {
"architectures": [
"BartForConditionalGeneration"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9,458 | 2022-04-28T15:28:08Z | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
Babelscape/wikineural-multilingual-ner | [
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"de",
"en",
"es",
"fr",
"it",
"nl",
"pl",
"pt",
"ru",
"multilingual",
"dataset:Babelscape/wikineural",
"transformers",
"named-entity-recognition",
"sequence-tagger-model",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 41,608 | null | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
Babysittingyoda/DialoGPT-small-familyguy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2022-04-28T15:28:27Z | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
Backedman/DialoGPT-small-Anika | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
Badr/model1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git). We use pre layer norm, which is not supported by HuggingFace. To use our model, go to our [github repo](https://github.com/princeton-nlp/DinkyTrain.git), download our code, and import the RoBERTa class from `huggingface/modeling_roberta_prelayernorm.py`. For example,
```python
from huggingface.modeling_roberta_prelayernorm import RobertaForMaskedLM, RobertaForSequenceClassification
``` |
Bagus/SER-LSSED | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
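Pending a fuller description, a minimal usage sketch for extractive question answering (the model id below is a placeholder for wherever this checkpoint is hosted on the Hub):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="<this-roberta-squad2-checkpoint>")  # placeholder id
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-base on the SQuAD v2 dataset.",
)
print(result["answer"], result["score"])
```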
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.88 | 1.0 | 8160 | 0.8129 |
| 0.6643 | 2.0 | 16320 | 0.8567 |
| 0.5096 | 3.0 | 24480 | 0.9325 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Bagus/ser-japanese | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ### Model
**[`albert-xlarge-v2`](https://huggingface.co/albert-xlarge-v2)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4× NVIDIA GeForce RTX 2080 Ti (11 GB) GPUs
```bash
BASE_MODEL=albert-xlarge-v2
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 3 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 814 \
--gradient_accumulation_steps 4 \
--fp16 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| Metric | val |
|-------------------|-------------------|
| exact | 84.41842836688285 |
| f1 | 87.4628460501696 |
| total | 11873.0 |
| HasAns_exact | 80.68488529014844 |
| HasAns_f1 | 86.78245127423482 |
| HasAns_total | 5928.0 |
| NoAns_exact | 88.1412952060555 |
| NoAns_f1 | 88.1412952060555 |
| NoAns_total | 5945.0 |
| best_exact | 84.41842836688285 |
| best_exact_thresh | 0.0 |
| best_f1 | 87.46284605016956 |
| best_f1_thresh | 0.0 |
### Usage
See the [Hugging Face documentation](https://huggingface.co/transformers/model_doc/albert.html#albertforquestionanswering). Training on `SQuAD V2` allows the model to score whether a paragraph contains an answer:
```python
start_scores, end_scores = model(input_ids)  # per-token start/end logits
# Joint log-probability of every candidate (start, end) span
span_scores = start_scores.softmax(dim=1).log()[:, :, None] + end_scores.softmax(dim=1).log()[:, None, :]
ignore_score = span_scores[:, 0, 0]  # "no answer" score at the [CLS] position
```
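For a more end-to-end sketch of comparing the best answer span against the no-answer option (position 0 is the `[CLS]` token), something along these lines could be used — the model id is a placeholder and the span selection is a naive argmax kept short for illustration:
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "<this-albert-xlarge-v2-squad2-checkpoint>"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Who mapped the island?"
context = "The island was mapped during a French Antarctic expedition led by Jean-Baptiste Charcot."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start, end = outputs.start_logits[0], outputs.end_logits[0]
no_answer_score = (start[0] + end[0]).item()   # score of predicting "no answer"
i, j = int(start.argmax()), int(end.argmax())  # naive best span
span_score = (start[i] + end[j]).item()
answer = tokenizer.decode(inputs["input_ids"][0][i : j + 1])
print(answer if span_score > no_answer_score else "<no answer>", span_score, no_answer_score)
```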
|
Barleysack/AERoberta2 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2022-04-28T17:06:44Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-culinary-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-culinary-finetuned
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0657
- F1: 0.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1803 | 0.11 | 500 | 0.1939 | 0.9611 |
| 0.1543 | 0.22 | 1000 | 0.1364 | 0.9669 |
| 0.1213 | 0.32 | 1500 | 0.1487 | 0.9728 |
| 0.1079 | 0.43 | 2000 | 0.0855 | 0.9773 |
| 0.0975 | 0.54 | 2500 | 0.0844 | 0.9831 |
| 0.0855 | 0.65 | 3000 | 0.0785 | 0.9831 |
| 0.0844 | 0.76 | 3500 | 0.0679 | 0.9857 |
| 0.0793 | 0.86 | 4000 | 0.0489 | 0.9890 |
| 0.0864 | 0.97 | 4500 | 0.0399 | 0.9903 |
| 0.049 | 1.08 | 5000 | 0.0528 | 0.9890 |
| 0.0353 | 1.19 | 5500 | 0.0635 | 0.9877 |
| 0.0321 | 1.3 | 6000 | 0.0542 | 0.9903 |
| 0.0311 | 1.41 | 6500 | 0.0559 | 0.9896 |
| 0.0315 | 1.51 | 7000 | 0.0736 | 0.9857 |
| 0.04 | 1.62 | 7500 | 0.0648 | 0.9909 |
| 0.0265 | 1.73 | 8000 | 0.0608 | 0.9909 |
| 0.0443 | 1.84 | 8500 | 0.0617 | 0.9883 |
| 0.0443 | 1.95 | 9000 | 0.0555 | 0.9896 |
| 0.0235 | 2.05 | 9500 | 0.0608 | 0.9903 |
| 0.0139 | 2.16 | 10000 | 0.0613 | 0.9922 |
| 0.0126 | 2.27 | 10500 | 0.0739 | 0.9903 |
| 0.0164 | 2.38 | 11000 | 0.0679 | 0.9903 |
| 0.0172 | 2.49 | 11500 | 0.0606 | 0.9922 |
| 0.0175 | 2.59 | 12000 | 0.0442 | 0.9942 |
| 0.01 | 2.7 | 12500 | 0.0661 | 0.9916 |
| 0.0059 | 2.81 | 13000 | 0.0659 | 0.9929 |
| 0.0216 | 2.92 | 13500 | 0.0504 | 0.9929 |
| 0.0123 | 3.03 | 14000 | 0.0584 | 0.9929 |
| 0.0047 | 3.14 | 14500 | 0.0573 | 0.9929 |
| 0.0123 | 3.24 | 15000 | 0.0511 | 0.9935 |
| 0.0027 | 3.35 | 15500 | 0.0579 | 0.9942 |
| 0.0025 | 3.46 | 16000 | 0.0602 | 0.9935 |
| 0.0051 | 3.57 | 16500 | 0.0598 | 0.9935 |
| 0.0044 | 3.68 | 17000 | 0.0617 | 0.9929 |
| 0.0061 | 3.78 | 17500 | 0.0634 | 0.9935 |
| 0.0048 | 3.89 | 18000 | 0.0672 | 0.9929 |
| 0.0078 | 4.0 | 18500 | 0.0657 | 0.9929 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Barleysack/klue-roberta-LSTM | [
"pytorch",
"roberta",
"transformers"
] | null | {
"architectures": [
"QAWithLSTMModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-04-28T17:15:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: longformer-qmsum-meeting-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-qmsum-meeting-summarization
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2055
- Rouge1: 20.5333
- Rouge2: 7.6756
- Rougel: 16.2531
- Rougelsum: 19.0336
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
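Pending a fuller description, a minimal usage sketch for summarizing a meeting transcript (the model id is a placeholder for this checkpoint and the generation settings are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="<this-led-qmsum-checkpoint>")  # placeholder id
transcript = (
    "Project manager: let's go over the remote control design. "
    "Industrial designer: the prototype now uses a rubber case and a simpler button layout. "
    "Marketing: user tests suggest younger buyers prefer the bright colour scheme."
)
print(summarizer(transcript, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```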
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 5.4071 | 1.09 | 100 | 5.2910 | 6.012 | 0.5556 | 4.936 | 5.6141 | 20.0 |
| 5.269 | 2.17 | 200 | 5.1446 | 6.7419 | 0.9713 | 5.2774 | 6.3003 | 20.0 |
| 5.1153 | 3.26 | 300 | 4.9976 | 8.1369 | 1.2365 | 6.391 | 7.5911 | 20.0 |
| 4.9888 | 4.35 | 400 | 4.8763 | 9.9113 | 1.4239 | 8.0574 | 9.3442 | 20.0 |
| 4.8687 | 5.43 | 500 | 4.7889 | 10.504 | 1.5638 | 8.1191 | 9.817 | 20.0 |
| 4.7936 | 6.52 | 600 | 4.7226 | 12.6475 | 2.4733 | 9.968 | 11.541 | 20.0 |
| 4.713 | 7.61 | 700 | 4.6770 | 15.2998 | 3.6209 | 11.8629 | 14.2323 | 20.0 |
| 4.6843 | 8.7 | 800 | 4.6428 | 15.8299 | 4.4128 | 12.7301 | 14.8795 | 20.0 |
| 4.6453 | 9.78 | 900 | 4.6105 | 16.3702 | 4.7356 | 13.1566 | 15.4497 | 20.0 |
| 4.6212 | 10.87 | 1000 | 4.5849 | 16.9765 | 5.1101 | 13.617 | 15.9401 | 20.0 |
| 4.5761 | 11.96 | 1100 | 4.5649 | 17.3024 | 5.2494 | 13.79 | 16.3173 | 20.0 |
| 4.564 | 13.04 | 1200 | 4.5447 | 18.7699 | 6.2331 | 14.8264 | 17.645 | 20.0 |
| 4.5393 | 14.13 | 1300 | 4.5277 | 19.1495 | 6.6082 | 15.1392 | 18.2546 | 20.0 |
| 4.5069 | 15.22 | 1400 | 4.5132 | 20.3648 | 7.3895 | 16.018 | 19.1503 | 20.0 |
| 4.4985 | 16.3 | 1500 | 4.4973 | 20.165 | 7.3477 | 16.1161 | 18.7585 | 20.0 |
| 4.4476 | 17.39 | 1600 | 4.4859 | 20.4691 | 7.5734 | 16.438 | 19.1045 | 20.0 |
| 4.4421 | 18.48 | 1700 | 4.4758 | 20.4402 | 7.7674 | 16.3998 | 19.1045 | 20.0 |
| 4.4554 | 19.57 | 1800 | 4.4648 | 20.5992 | 7.3522 | 16.185 | 19.2869 | 20.0 |
| 4.4138 | 20.65 | 1900 | 4.4560 | 20.497 | 7.1732 | 16.2177 | 19.0912 | 20.0 |
| 4.4447 | 21.74 | 2000 | 4.4465 | 21.2936 | 7.8856 | 16.8994 | 19.7994 | 20.0 |
| 4.3636 | 22.83 | 2100 | 4.4373 | 21.1015 | 7.6466 | 16.787 | 19.6918 | 20.0 |
| 4.3647 | 23.91 | 2200 | 4.4288 | 21.3408 | 7.8052 | 17.1431 | 20.1456 | 20.0 |
| 4.3707 | 25.0 | 2300 | 4.4217 | 21.523 | 8.017 | 17.1586 | 20.2724 | 20.0 |
| 4.3503 | 26.09 | 2400 | 4.4145 | 21.485 | 8.015 | 17.064 | 20.209 | 20.0 |
| 4.3295 | 27.17 | 2500 | 4.4069 | 21.5167 | 7.6749 | 16.9976 | 20.265 | 20.0 |
| 4.3444 | 28.26 | 2600 | 4.4004 | 21.748 | 7.8808 | 17.1592 | 20.4054 | 20.0 |
| 4.3135 | 29.35 | 2700 | 4.3958 | 21.5523 | 7.5449 | 17.2103 | 20.5405 | 20.0 |
| 4.3028 | 30.43 | 2800 | 4.3880 | 21.3016 | 7.6531 | 17.1515 | 20.3301 | 20.0 |
| 4.3406 | 31.52 | 2900 | 4.3834 | 21.4169 | 7.5647 | 16.9477 | 20.3379 | 20.0 |
| 4.286 | 32.61 | 3000 | 4.3760 | 21.4684 | 7.4776 | 17.1018 | 20.5254 | 20.0 |
| 4.2717 | 33.7 | 3100 | 4.3736 | 21.596 | 7.514 | 17.164 | 20.6272 | 20.0 |
| 4.285 | 34.78 | 3200 | 4.3666 | 21.3495 | 7.676 | 17.0703 | 20.3182 | 20.0 |
| 4.2496 | 35.87 | 3300 | 4.3628 | 21.5539 | 7.6574 | 17.1393 | 20.5116 | 20.0 |
| 4.2618 | 36.96 | 3400 | 4.3591 | 21.08 | 7.6814 | 16.6941 | 20.2386 | 20.0 |
| 4.255 | 38.04 | 3500 | 4.3522 | 21.1979 | 7.7334 | 16.8281 | 20.3095 | 20.0 |
| 4.2353 | 39.13 | 3600 | 4.3502 | 21.1162 | 8.0427 | 16.9948 | 20.3903 | 20.0 |
| 4.2556 | 40.22 | 3700 | 4.3462 | 21.3417 | 7.7851 | 16.6548 | 20.5316 | 20.0 |
| 4.207 | 41.3 | 3800 | 4.3401 | 21.4329 | 7.948 | 16.944 | 20.5075 | 20.0 |
| 4.234 | 42.39 | 3900 | 4.3388 | 21.6109 | 8.033 | 16.9375 | 20.6668 | 20.0 |
| 4.2118 | 43.48 | 4000 | 4.3347 | 21.5051 | 7.9239 | 16.7403 | 20.6123 | 20.0 |
| 4.1898 | 44.57 | 4100 | 4.3319 | 21.2644 | 7.8222 | 16.7109 | 20.3999 | 20.0 |
| 4.1951 | 45.65 | 4200 | 4.3265 | 21.3383 | 7.997 | 16.7605 | 20.4542 | 20.0 |
| 4.1851 | 46.74 | 4300 | 4.3248 | 21.3509 | 7.9038 | 16.9098 | 20.4593 | 20.0 |
| 4.1674 | 47.83 | 4400 | 4.3223 | 21.3516 | 8.0058 | 17.0061 | 20.4199 | 20.0 |
| 4.1785 | 48.91 | 4500 | 4.3182 | 21.4118 | 8.0755 | 16.959 | 20.5154 | 20.0 |
| 4.1599 | 50.0 | 4600 | 4.3175 | 21.2748 | 7.8562 | 16.8107 | 20.3536 | 20.0 |
| 4.1564 | 51.09 | 4700 | 4.3141 | 21.1811 | 7.8563 | 16.7687 | 20.2242 | 20.0 |
| 4.1513 | 52.17 | 4800 | 4.3101 | 21.1557 | 7.6616 | 16.8105 | 19.8191 | 20.0 |
| 4.1234 | 53.26 | 4900 | 4.3083 | 21.0718 | 7.8625 | 16.7849 | 20.0014 | 20.0 |
| 4.1532 | 54.35 | 5000 | 4.3041 | 21.4241 | 7.984 | 16.6561 | 20.3073 | 20.0 |
| 4.1371 | 55.43 | 5100 | 4.3035 | 21.259 | 7.6476 | 16.9931 | 20.3421 | 20.0 |
| 4.1342 | 56.52 | 5200 | 4.3009 | 21.0745 | 7.386 | 16.7976 | 20.1148 | 20.0 |
| 4.1146 | 57.61 | 5300 | 4.2985 | 21.0796 | 7.6743 | 16.5062 | 19.8702 | 20.0 |
| 4.0774 | 58.7 | 5400 | 4.2965 | 21.2129 | 7.2871 | 17.0019 | 20.3176 | 20.0 |
| 4.1726 | 59.78 | 5500 | 4.2930 | 21.159 | 7.4045 | 16.7762 | 19.9886 | 20.0 |
| 4.0931 | 60.87 | 5600 | 4.2900 | 20.957 | 7.2307 | 16.784 | 19.8402 | 20.0 |
| 4.0838 | 61.96 | 5700 | 4.2887 | 21.13 | 7.2664 | 16.7837 | 19.951 | 20.0 |
| 4.0878 | 63.04 | 5800 | 4.2853 | 21.0281 | 7.2664 | 16.6847 | 19.7843 | 20.0 |
| 4.1067 | 64.13 | 5900 | 4.2848 | 20.941 | 7.2307 | 16.74 | 19.8262 | 20.0 |
| 4.0743 | 65.22 | 6000 | 4.2817 | 21.1234 | 7.4612 | 16.755 | 20.027 | 20.0 |
| 4.103 | 66.3 | 6100 | 4.2807 | 21.2852 | 7.4802 | 16.8037 | 20.2316 | 20.0 |
| 4.0434 | 67.39 | 6200 | 4.2777 | 21.236 | 7.3169 | 16.7967 | 20.0534 | 20.0 |
| 4.0829 | 68.48 | 6300 | 4.2793 | 20.947 | 7.3164 | 16.8597 | 19.7938 | 20.0 |
| 4.0619 | 69.57 | 6400 | 4.2736 | 21.4626 | 7.7245 | 16.8395 | 20.2035 | 20.0 |
| 4.079 | 70.65 | 6500 | 4.2729 | 21.163 | 7.6397 | 16.7826 | 20.0295 | 20.0 |
| 4.0411 | 71.74 | 6600 | 4.2721 | 20.8673 | 7.3841 | 16.6784 | 19.6854 | 20.0 |
| 4.046 | 72.83 | 6700 | 4.2697 | 20.9774 | 7.3325 | 16.7779 | 19.761 | 20.0 |
| 4.0384 | 73.91 | 6800 | 4.2684 | 21.0736 | 7.6569 | 16.7631 | 19.992 | 20.0 |
| 4.0401 | 75.0 | 6900 | 4.2670 | 21.2708 | 7.8224 | 16.5649 | 20.2364 | 20.0 |
| 4.0153 | 76.09 | 7000 | 4.2669 | 21.3638 | 7.7586 | 16.765 | 19.9744 | 20.0 |
| 4.0227 | 77.17 | 7100 | 4.2652 | 21.0611 | 7.709 | 16.3201 | 20.0516 | 20.0 |
| 4.0264 | 78.26 | 7200 | 4.2634 | 21.3766 | 7.7666 | 16.7508 | 20.0938 | 20.0 |
| 4.0475 | 79.35 | 7300 | 4.2615 | 21.2356 | 7.5533 | 16.6339 | 19.9254 | 20.0 |
| 4.0145 | 80.43 | 7400 | 4.2580 | 20.7689 | 7.3386 | 16.287 | 19.7335 | 20.0 |
| 4.0087 | 81.52 | 7500 | 4.2580 | 20.9816 | 7.343 | 16.4598 | 19.701 | 20.0 |
| 3.9835 | 82.61 | 7600 | 4.2577 | 21.1001 | 7.5887 | 16.5226 | 19.714 | 20.0 |
| 4.0029 | 83.7 | 7700 | 4.2562 | 21.1875 | 7.7333 | 16.4799 | 19.9907 | 20.0 |
| 3.9912 | 84.78 | 7800 | 4.2549 | 20.8265 | 7.3897 | 16.2191 | 19.4398 | 20.0 |
| 4.008 | 85.87 | 7900 | 4.2541 | 21.4955 | 7.7602 | 16.4989 | 20.1402 | 20.0 |
| 3.9659 | 86.96 | 8000 | 4.2523 | 21.687 | 7.9463 | 16.5832 | 20.1598 | 20.0 |
| 3.9923 | 88.04 | 8100 | 4.2505 | 21.4615 | 7.817 | 16.3628 | 19.9159 | 20.0 |
| 3.9811 | 89.13 | 8200 | 4.2498 | 21.1917 | 7.5813 | 16.3066 | 19.4905 | 20.0 |
| 3.9819 | 90.22 | 8300 | 4.2488 | 21.239 | 7.4585 | 16.4297 | 19.5213 | 20.0 |
| 3.9889 | 91.3 | 8400 | 4.2456 | 21.5052 | 7.7994 | 16.3783 | 19.8739 | 20.0 |
| 3.942 | 92.39 | 8500 | 4.2468 | 21.3482 | 7.7517 | 16.34 | 19.764 | 20.0 |
| 3.9959 | 93.48 | 8600 | 4.2446 | 21.4615 | 7.817 | 16.3628 | 19.9159 | 20.0 |
| 3.987 | 94.57 | 8700 | 4.2438 | 21.1265 | 7.6497 | 16.4132 | 19.5981 | 20.0 |
| 3.9803 | 95.65 | 8800 | 4.2420 | 21.2956 | 7.7796 | 16.3643 | 19.8607 | 20.0 |
| 3.9415 | 96.74 | 8900 | 4.2410 | 20.8332 | 7.5468 | 16.1678 | 19.316 | 20.0 |
| 3.97 | 97.83 | 9000 | 4.2407 | 21.4223 | 7.8688 | 16.533 | 19.8081 | 20.0 |
| 3.9495 | 98.91 | 9100 | 4.2400 | 21.5678 | 7.9698 | 16.5492 | 19.9404 | 20.0 |
| 3.9489 | 100.0 | 9200 | 4.2391 | 21.3928 | 7.8416 | 16.3595 | 19.7579 | 20.0 |
| 3.9194 | 101.09 | 9300 | 4.2394 | 21.2216 | 7.8416 | 16.2499 | 19.5661 | 20.0 |
| 3.966 | 102.17 | 9400 | 4.2372 | 21.2756 | 7.8798 | 16.3124 | 19.6303 | 20.0 |
| 3.934 | 103.26 | 9500 | 4.2367 | 21.3106 | 7.8585 | 16.3937 | 19.7289 | 20.0 |
| 3.9316 | 104.35 | 9600 | 4.2349 | 21.3296 | 7.9392 | 16.3574 | 19.8031 | 20.0 |
| 3.9586 | 105.43 | 9700 | 4.2366 | 21.0662 | 7.771 | 16.2242 | 19.4813 | 20.0 |
| 3.9189 | 106.52 | 9800 | 4.2338 | 21.1348 | 7.8414 | 16.2757 | 19.7301 | 20.0 |
| 3.937 | 107.61 | 9900 | 4.2350 | 21.2434 | 7.7611 | 16.4693 | 19.6923 | 20.0 |
| 3.911 | 108.7 | 10000 | 4.2331 | 21.2697 | 7.8282 | 16.3636 | 19.6627 | 20.0 |
| 3.8956 | 109.78 | 10100 | 4.2312 | 21.2697 | 7.8117 | 16.3636 | 19.6321 | 20.0 |
| 3.9396 | 110.87 | 10200 | 4.2303 | 21.0842 | 7.7105 | 16.221 | 19.4378 | 20.0 |
| 3.9058 | 111.96 | 10300 | 4.2290 | 21.1633 | 7.8117 | 16.3196 | 19.5575 | 20.0 |
| 3.9198 | 113.04 | 10400 | 4.2278 | 21.1633 | 7.8117 | 16.3196 | 19.5311 | 20.0 |
| 3.9104 | 114.13 | 10500 | 4.2276 | 21.0784 | 7.6899 | 16.3248 | 19.5625 | 20.0 |
| 3.915 | 115.22 | 10600 | 4.2282 | 20.9369 | 7.6522 | 16.1615 | 19.4826 | 20.0 |
| 3.8748 | 116.3 | 10700 | 4.2268 | 20.9369 | 7.6522 | 16.1615 | 19.4826 | 20.0 |
| 3.9341 | 117.39 | 10800 | 4.2252 | 21.0067 | 7.7263 | 16.3314 | 19.5589 | 20.0 |
| 3.8713 | 118.48 | 10900 | 4.2253 | 20.7028 | 7.5712 | 16.0398 | 19.2212 | 20.0 |
| 3.8861 | 119.57 | 11000 | 4.2243 | 20.7075 | 7.6844 | 16.0626 | 19.2959 | 20.0 |
| 3.8905 | 120.65 | 11100 | 4.2252 | 20.6546 | 7.5642 | 15.9451 | 19.1838 | 20.0 |
| 3.8682 | 121.74 | 11200 | 4.2238 | 20.8809 | 7.6536 | 16.1667 | 19.4217 | 20.0 |
| 3.904 | 122.83 | 11300 | 4.2241 | 20.6916 | 7.5324 | 15.9692 | 19.1791 | 20.0 |
| 3.8577 | 123.91 | 11400 | 4.2231 | 20.9271 | 7.6536 | 16.2314 | 19.4695 | 20.0 |
| 3.8851 | 125.0 | 11500 | 4.2230 | 20.8097 | 7.6891 | 16.1087 | 19.3872 | 20.0 |
| 3.8725 | 126.09 | 11600 | 4.2219 | 20.8965 | 7.6891 | 16.197 | 19.4319 | 20.0 |
| 3.8918 | 127.17 | 11700 | 4.2210 | 20.8203 | 7.6562 | 16.1283 | 19.388 | 20.0 |
| 3.845 | 128.26 | 11800 | 4.2210 | 20.7633 | 7.6883 | 16.0813 | 19.3537 | 20.0 |
| 3.8812 | 129.35 | 11900 | 4.2197 | 20.6605 | 7.6351 | 15.9703 | 19.2425 | 20.0 |
| 3.8734 | 130.43 | 12000 | 4.2208 | 20.6164 | 7.601 | 15.9703 | 19.1967 | 20.0 |
| 3.8704 | 131.52 | 12100 | 4.2201 | 20.533 | 7.5141 | 15.941 | 19.1898 | 20.0 |
| 3.8302 | 132.61 | 12200 | 4.2194 | 20.6164 | 7.601 | 15.9703 | 19.1967 | 20.0 |
| 3.8793 | 133.7 | 12300 | 4.2178 | 20.5427 | 7.5674 | 15.9591 | 19.2078 | 20.0 |
| 3.8631 | 134.78 | 12400 | 4.2181 | 20.6953 | 7.6549 | 16.0402 | 19.2734 | 20.0 |
| 3.8565 | 135.87 | 12500 | 4.2173 | 20.6168 | 7.5808 | 16.0402 | 19.2734 | 20.0 |
| 3.8842 | 136.96 | 12600 | 4.2163 | 20.6525 | 7.5782 | 16.0402 | 19.3124 | 20.0 |
| 3.8183 | 138.04 | 12700 | 4.2165 | 20.6168 | 7.5808 | 16.0402 | 19.2734 | 20.0 |
| 3.8482 | 139.13 | 12800 | 4.2155 | 20.6953 | 7.6154 | 16.0402 | 19.2734 | 20.0 |
| 3.8689 | 140.22 | 12900 | 4.2158 | 20.8264 | 7.7844 | 16.1396 | 19.4834 | 20.0 |
| 3.8361 | 141.3 | 13000 | 4.2144 | 20.8264 | 7.6986 | 16.2466 | 19.5192 | 20.0 |
| 3.8336 | 142.39 | 13100 | 4.2148 | 20.7613 | 7.7027 | 16.2516 | 19.4307 | 20.0 |
| 3.8532 | 143.48 | 13200 | 4.2155 | 20.6905 | 7.6695 | 16.1708 | 19.3584 | 20.0 |
| 3.8424 | 144.57 | 13300 | 4.2137 | 20.7613 | 7.7027 | 16.2516 | 19.4307 | 20.0 |
| 3.8781 | 145.65 | 13400 | 4.2128 | 20.6905 | 7.6695 | 16.1708 | 19.3584 | 20.0 |
| 3.8693 | 146.74 | 13500 | 4.2128 | 20.5395 | 7.4561 | 16.1388 | 19.1866 | 20.0 |
| 3.8304 | 147.83 | 13600 | 4.2123 | 20.6345 | 7.7324 | 16.1761 | 19.2764 | 20.0 |
| 3.8434 | 148.91 | 13700 | 4.2123 | 20.7145 | 7.6768 | 16.1729 | 19.3787 | 20.0 |
| 3.8348 | 150.0 | 13800 | 4.2123 | 20.7859 | 7.7023 | 16.2986 | 19.4932 | 20.0 |
| 3.8375 | 151.09 | 13900 | 4.2126 | 20.6319 | 7.5676 | 16.325 | 19.2512 | 20.0 |
| 3.8421 | 152.17 | 14000 | 4.2120 | 20.6665 | 7.5619 | 16.3257 | 19.2911 | 20.0 |
| 3.831 | 153.26 | 14100 | 4.2110 | 20.609 | 7.4912 | 16.2881 | 19.2953 | 20.0 |
| 3.8172 | 154.35 | 14200 | 4.2112 | 20.7352 | 7.6588 | 16.2115 | 19.3408 | 20.0 |
| 3.7853 | 155.43 | 14300 | 4.2107 | 20.6635 | 7.5987 | 16.2131 | 19.2667 | 20.0 |
| 3.8274 | 156.52 | 14400 | 4.2109 | 20.7352 | 7.7559 | 16.3035 | 19.3408 | 20.0 |
| 3.8362 | 157.61 | 14500 | 4.2099 | 20.7559 | 7.6865 | 16.325 | 19.4191 | 20.0 |
| 3.8561 | 158.7 | 14600 | 4.2098 | 20.6225 | 7.6943 | 16.3448 | 19.1425 | 20.0 |
| 3.7832 | 159.78 | 14700 | 4.2098 | 20.6307 | 7.6684 | 16.2469 | 19.269 | 20.0 |
| 3.8409 | 160.87 | 14800 | 4.2092 | 20.683 | 7.7924 | 16.2986 | 19.2414 | 20.0 |
| 3.821 | 161.96 | 14900 | 4.2092 | 20.5235 | 7.6721 | 16.2191 | 18.9879 | 20.0 |
| 3.8343 | 163.04 | 15000 | 4.2089 | 20.5235 | 7.6721 | 16.2191 | 18.9879 | 20.0 |
| 3.8279 | 164.13 | 15100 | 4.2087 | 20.5304 | 7.5448 | 16.2106 | 19.0909 | 20.0 |
| 3.7874 | 165.22 | 15200 | 4.2083 | 20.6319 | 7.6145 | 16.3035 | 19.2294 | 20.0 |
| 3.8316 | 166.3 | 15300 | 4.2076 | 20.5759 | 7.6145 | 16.2528 | 19.1508 | 20.0 |
| 3.7817 | 167.39 | 15400 | 4.2084 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.8338 | 168.48 | 15500 | 4.2075 | 20.5375 | 7.614 | 16.2509 | 19.1047 | 20.0 |
| 3.8515 | 169.57 | 15600 | 4.2069 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.7895 | 170.65 | 15700 | 4.2074 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.8129 | 171.74 | 15800 | 4.2076 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.8582 | 172.83 | 15900 | 4.2073 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.7716 | 173.91 | 16000 | 4.2073 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8142 | 175.0 | 16100 | 4.2069 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8186 | 176.09 | 16200 | 4.2068 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8323 | 177.17 | 16300 | 4.2065 | 20.5333 | 7.6281 | 16.2531 | 19.0336 | 20.0 |
| 3.774 | 178.26 | 16400 | 4.2064 | 20.5724 | 7.677 | 16.2545 | 19.0747 | 20.0 |
| 3.8123 | 179.35 | 16500 | 4.2062 | 20.5333 | 7.6281 | 16.2531 | 19.0336 | 20.0 |
| 3.7914 | 180.43 | 16600 | 4.2066 | 20.5333 | 7.6281 | 16.2531 | 19.0336 | 20.0 |
| 3.7988 | 181.52 | 16700 | 4.2063 | 20.5724 | 7.6287 | 16.2545 | 19.0747 | 20.0 |
| 3.8331 | 182.61 | 16800 | 4.2059 | 20.6225 | 7.7265 | 16.3103 | 19.1036 | 20.0 |
| 3.8125 | 183.7 | 16900 | 4.2061 | 20.494 | 7.5897 | 16.2303 | 18.9697 | 20.0 |
| 3.8069 | 184.78 | 17000 | 4.2059 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7933 | 185.87 | 17100 | 4.2058 | 20.5333 | 7.6281 | 16.2531 | 19.0336 | 20.0 |
| 3.807 | 186.96 | 17200 | 4.2058 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8 | 188.04 | 17300 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.776 | 189.13 | 17400 | 4.2057 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7976 | 190.22 | 17500 | 4.2057 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8293 | 191.3 | 17600 | 4.2057 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7807 | 192.39 | 17700 | 4.2057 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8246 | 193.48 | 17800 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7719 | 194.57 | 17900 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8055 | 195.65 | 18000 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7803 | 196.74 | 18100 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8287 | 197.83 | 18200 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8066 | 198.91 | 18300 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8011 | 200.0 | 18400 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BatuhanYilmaz/bert-finetuned-nerxD | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-28T18:05:39Z | ---
pipeline_tag: text-classification
license: apache-2.0
language:
- it
tags:
- cross-encoder
- sentence-similarity
- transformers
---
# Cross-Encoder
The model can be used for Information Retrieval: given a query, encode the query together with each candidate passage, then sort the passages in decreasing order of predicted relevance.
<p align="center">
<img src="https://www.exibart.com/repository/media/2020/07/bridget-riley-cool-edge.jpg" width="400"> </br>
Bridget Riley, COOL EDGE
</p>
## Training Data
This model was trained on a custom biomedical ranking dataset.
## Usage and Performance
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('efederici/cross-encoder-distilbert-it')
scores = model.predict([('Sentence 1', 'Sentence 2'), ('Sentence 3', 'Sentence 4')])
```
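For the Information Retrieval setting described above — scoring one query against several candidate passages and sorting them — a minimal sketch (the query and passages are only illustrative):
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('efederici/cross-encoder-distilbert-it')
query = "Quali sono i sintomi dell'influenza?"
passages = [
    "L'influenza provoca febbre, tosse e dolori muscolari.",
    "La Torre di Pisa è uno dei monumenti più famosi d'Italia.",
]
scores = model.predict([(query, passage) for passage in passages])
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.3f}  {passage}")
```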
In the pairwise example, the model predicts a score for each of the pairs `('Sentence 1', 'Sentence 2')` and `('Sentence 3', 'Sentence 4')`; in the ranking sketch, the passages most relevant to the query are returned first. |
BatuhanYilmaz/dummy-model | [
"tf",
"camembert",
"fill-mask",
"transformers",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-shuffled_take3-small
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 11.883
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-shuffled_take3-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4505
- Rouge1: 11.883
- Rouge2: 9.4784
- Rougel: 10.9978
- Rougelsum: 11.5961
- Gen Len: 18.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
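Pending a fuller description, a minimal usage sketch (T5-style checkpoints are normally prompted with a `summarize:` prefix; the model id below is a placeholder for this checkpoint):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<this-t5-small-xsum-checkpoint>"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarize: The local council approved a new cycling path linking the two districts after years of debate."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```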
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 0.5205 | 1.0 | 34008 | 0.4505 | 11.883 | 9.4784 | 10.9978 | 11.5961 | 18.9834 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BatuhanYilmaz/dummy | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-base-cased-finetuned-log-parser-winlogbeat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-log-parser-winlogbeat
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
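The optimizer dictionary above corresponds to the usual TF fine-tuning setup in `transformers`. A minimal sketch of how an equivalent optimizer could be rebuilt; the step count, decay values and precision policy are taken from the entries above, everything else is an assumption:
```python
import tensorflow as tf
from transformers import create_optimizer

# Matches `training_precision: mixed_float16` above.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay with a linear (power=1.0) polynomial decay from 2e-5 to 0
# over 1635 steps and weight decay 0.01, as listed in the optimizer config.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=1635,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```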
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Baybars/debateGPT | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# Model trained on sonobois convos |
BearThreat/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
tags:
- conversational
---
# DialoGPT-medium Model of Simpsons Episode s1e10 "Homer's Night Out"
|
Bee-Garbs/DialoGPT-real-cartman-small | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language:
- ru
tags:
- PyTorch
- Transformers
thumbnail: "https://github.com/sberbank-ai/model-zoo"
---
# ebanko-base
The model was finetuned by [black_samorez](https://github.com/BlackSamorez).
Based on [sberbank-ai/ruT5-base](https://huggingface.co/sberbank-ai/ruT5-base).
Finetuned on the [Russian Language Toxic Comments](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments) dataset and the [russe_detox_2022](https://github.com/skoltech-nlp/russe_detox_2022) train split to toxify text.
* Task: `text2text generation`
* Type: `encoder-decoder`
* Tokenizer: `bpe`
* Dict size: `32 101 `
* Num Parameters: `737 M`
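A minimal usage sketch for this text2text model, assuming the checkpoint is available on the Hub; the repo id below is a placeholder and the behaviour shown is only illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "<user>/ebanko-base"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# "I did not really like this film." -- the model is trained to toxify such input.
inputs = tokenizer("Мне не очень понравился этот фильм.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```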
---
license: apache-2.0
---
|
Bella4322/Sarah | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
BenDavis71/GPT-2-Finetuning-AIRaid | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/inversebrah/1664000969650/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547362404061052928/WWnVS98w_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">smolting (wassie, verse)</div>
<div style="text-align: center; font-size: 14px;">@inversebrah</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from smolting (wassie, verse).
| Data | smolting (wassie, verse) |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 1592 |
| Short tweets | 865 |
| Tweets kept | 760 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mt8mw7j5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @inversebrah's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/37fqg9kh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/37fqg9kh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/inversebrah')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BenQLange/HF_bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- plo_dunfiltered_config
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: scibert_scivocab_uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: plo_dunfiltered_config
type: plo_dunfiltered_config
args: PLODunfiltered
metrics:
- name: Precision
type: precision
value: 0.964925429790286
- name: Recall
type: recall
value: 0.9612323892385586
- name: F1
type: f1
value: 0.9630753691636831
- name: Accuracy
type: accuracy
value: 0.9593916827485913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased-finetuned-ner
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the plo_dunfiltered_config dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1390
- Precision: 0.9649
- Recall: 0.9612
- F1: 0.9631
- Accuracy: 0.9594
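A minimal inference sketch with the `token-classification` pipeline; the repo id is a placeholder, since the card does not state where the fine-tuned checkpoint is published:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<user>/scibert_scivocab_uncased-finetuned-ner",  # placeholder repo id
    aggregation_strategy="simple",
)
print(ner("Natural language processing (NLP) models are evaluated on PLOD."))
```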
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1176 | 1.4 | 5000 | 0.1243 | 0.9570 | 0.9511 | 0.9540 | 0.9502 |
| 0.0973 | 2.81 | 10000 | 0.1129 | 0.9609 | 0.9572 | 0.9590 | 0.9553 |
| 0.0721 | 4.21 | 15000 | 0.1198 | 0.9645 | 0.9585 | 0.9615 | 0.9578 |
| 0.0634 | 5.62 | 20000 | 0.1259 | 0.9649 | 0.9589 | 0.9619 | 0.9582 |
| 0.0572 | 7.02 | 25000 | 0.1321 | 0.9653 | 0.9609 | 0.9631 | 0.9594 |
| 0.0472 | 8.43 | 30000 | 0.1390 | 0.9649 | 0.9612 | 0.9631 | 0.9594 |
| 0.0434 | 9.83 | 35000 | 0.1442 | 0.9656 | 0.9613 | 0.9634 | 0.9598 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BenWitter/DialoGPT-small-Tyrion | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Sathira/autotrain-data-mbtiNlp
co2_eq_emissions: 121.67185089502216
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 798824628
- CO2 Emissions (in grams): 121.67185089502216
## Validation Metrics
- Loss: 0.5046824812889099
- Accuracy: 0.8472124039775673
- Macro F1: 0.7812978033330673
- Micro F1: 0.8472124039775673
- Weighted F1: 0.8464983956259307
- Macro Precision: 0.812208631055716
- Micro Precision: 0.8472124039775673
- Weighted Precision: 0.8478968364150775
- Macro Recall: 0.7593223884993787
- Micro Recall: 0.8472124039775673
- Weighted Recall: 0.8472124039775673
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Sathira/autotrain-mbtiNlp-798824628
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Sathira/autotrain-mbtiNlp-798824628", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Sathira/autotrain-mbtiNlp-798824628", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
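# Optional post-processing (an assumption, not shown in the original snippet):
# turn the raw logits into class probabilities. Requires PyTorch.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
print(probs)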
``` |
Bharathdamu/wav2vec2-large-xls-r-300m-hindi | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-vorarlbergerisch
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-vorarlbergerisch
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9241
- Wer: 0.4358
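A minimal transcription sketch via the `automatic-speech-recognition` pipeline; the repo id is a placeholder and a local 16 kHz audio file is assumed:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/wav2vec2-vorarlbergerisch",  # placeholder repo id
)
print(asr("sample.wav")["text"])  # path to a local audio file
```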
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 62
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.6837 | 3.83 | 100 | 3.7188 | 1.0 |
| 3.33 | 7.68 | 200 | 3.0620 | 1.0 |
| 2.9508 | 11.53 | 300 | 2.5915 | 1.0101 |
| 1.8954 | 15.38 | 400 | 1.6930 | 0.8243 |
| 1.231 | 19.23 | 500 | 1.7179 | 0.7551 |
| 0.9862 | 23.08 | 600 | 1.5237 | 0.6529 |
| 0.7353 | 26.91 | 700 | 1.5119 | 0.5921 |
| 0.5368 | 30.75 | 800 | 1.5011 | 0.5574 |
| 0.4448 | 34.6 | 900 | 1.5334 | 0.5363 |
| 0.3278 | 38.45 | 1000 | 1.7125 | 0.5144 |
| 0.2575 | 42.3 | 1100 | 1.6529 | 0.4958 |
| 0.1966 | 46.15 | 1200 | 1.7670 | 0.4848 |
| 0.1552 | 49.98 | 1300 | 1.7586 | 0.4620 |
| 0.1118 | 53.83 | 1400 | 1.7912 | 0.4417 |
| 0.0847 | 57.68 | 1500 | 1.8709 | 0.4443 |
| 0.0654 | 61.53 | 1600 | 1.9241 | 0.4358 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3170
- Accuracy: 0.8733
- F1: 0.875
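A minimal usage sketch with the `text-classification` pipeline; the repo id is a placeholder inferred from the model name:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="<user>/finetuning-sentiment-model-3000-samples",  # placeholder repo id
)
print(clf("This movie was surprisingly good!"))
```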
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi3-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- multilingual
- PyTorch
- Transformers
- gpt3
- gpt2
- Deepspeed
- Megatron
datasets:
- mc4
- Wikipedia
widget:
- text: "I know you're tired, but can we go for another walk this evening?\npeter szemraj:\n\n"
example_title: "walk"
- text: "What do you call an alligator who's just had surgery to remove his left arm?\npeter szemraj:\n\n"
example_title: "alligator"
- text: "If you could live anywhere, where would it be?\npeter szemraj:\n\n"
example_title: "dream living place"
- text: "What really makes you angry?\npeter szemraj:\n\n"
example_title: "pet peeve"
- text: "My friend says that she knows every language, but she doesn't speak any of them.. what's wrong with her?\npeter szemraj:\n\n"
example_title: "language"
- text: "What would you change about yourself if you could?\npeter szemraj:\n\n"
example_title: "change"
- text: "My first is in Asia, my second is in Europe, my third is in North America, and my fourth is in South America. What am I?\npeter szemraj:\n\n"
example_title: "continent"
- text: "Can you take me for dinner somewhere nice this time?\npeter szemraj:\n\n"
example_title: "dinner"
- text: "Honey, I have clogged the toilet for the third time this month.. sorry..\npeter szemraj:\n\n"
example_title: "overflow"
- text: "A man pushes his car to a hotel and tells the owner he's bankrupt. Why?\npeter szemraj:\n\n"
example_title: "brain teaser"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.4
no_repeat_ngram_size: 3
do_sample: True
top_p: 0.95
top_k: 30
temperature: 0.65
repetition_penalty: 3.5
---
# mGPT: fine-tune on message data MWE
This model is a fine-tuned version of [sberbank-ai/mGPT](https://huggingface.co/sberbank-ai/mGPT) on 80k messages. It was trained for one epoch and will be updated in a separate model repo later.
## Model description
- testing if the fine-tuned personality data bleeds over to other languages without the model being trained on them explicitly
### Usage in python
Install the transformers library if you don't have it:
```
pip install -U transformers
```
Load the model into a pipeline object:
```
from transformers import pipeline
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
my_chatbot = pipeline('text-generation',
'pszemraj/mGPT-Peter-mwe',
device=0 if device == 'cuda' else -1,
)
```
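Continuing the snippet above, a reply can be generated by calling the pipeline with a prompt in the same format as the widget examples (ending in `peter szemraj:\n\n`); the sampling values below mirror the inference parameters in the front matter and are only illustrative:
```
prompt = "What would you change about yourself if you could?\npeter szemraj:\n\n"
response = my_chatbot(
    prompt,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    top_k=30,
    temperature=0.65,
    no_repeat_ngram_size=3,
    repetition_penalty=3.5,
)
print(response[0]["generated_text"])
```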
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/FormalRobertaa | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ko-en-finetuned-ko-to-en3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ko-en-finetuned-ko-to-en3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt-ko-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1864
- Bleu: 0.7037
- Gen Len: 11.0
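A minimal Korean-to-English usage sketch via the `translation` pipeline; the repo id is a placeholder inferred from the model name:
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="<user>/opus-mt-ko-en-finetuned-ko-to-en3",  # placeholder repo id
)
# "Hello, nice to meet you."
print(translator("안녕하세요, 만나서 반갑습니다.", max_length=64)[0]["translation_text"])
```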
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 0.99 | 119 | 4.4541 | 0.0 | 5.0 |
| No log | 1.99 | 238 | 2.4214 | 0.3414 | 16.0 |
| No log | 2.99 | 357 | 2.2158 | 0.3212 | 15.0 |
| No log | 3.99 | 476 | 2.1737 | 0.3283 | 12.0 |
| 3.2958 | 4.99 | 595 | 2.1864 | 0.7037 | 11.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/FroBurta | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech_asr
- librispeech 960h
license: cc-by-4.0
---
## ESPnet2 model
This model was trained by Chaitanya Narisetty using recipe in [espnet](https://github.com/espnet/espnet/).
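A minimal decoding sketch with ESPnet2's `Speech2Text` interface; the model tag below is a placeholder (the published Hub/Zenodo id is not stated here), and `espnet_model_zoo` plus `soundfile` are assumed to be installed:
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Placeholder model tag; replace with the published id of this checkpoint.
speech2text = Speech2Text.from_pretrained(
    "<user>/librispeech_conformer_rnn_transducer"
)

speech, rate = sf.read("sample.wav")  # 16 kHz mono audio
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```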
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Apr 26 15:33:18 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.8.1+cu111`
- Git hash: `8a76ff24eb513d96561fb47d0320dd39c1c3645a`
- Commit date: `Tue Apr 19 07:32:58 2022 +0000`
## asr_train_conformer-rnn_transducer_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.loss.ave_10best/dev_clean|2703|54402|97.7|2.1|0.2|0.3|2.6|31.5|
|decode_asr_model_valid.loss.ave_10best/dev_other|2864|50948|93.8|5.6|0.6|0.6|6.8|50.8|
|decode_asr_model_valid.loss.ave_10best/test_clean|2620|52576|97.5|2.3|0.2|0.3|2.8|32.7|
|decode_asr_model_valid.loss.ave_10best/test_other|2939|52343|94.1|5.3|0.6|0.7|6.6|51.8|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_clean|2703|54402|98.0|1.8|0.2|0.2|2.2|28.2|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_other|2864|50948|94.8|4.5|0.7|0.5|5.7|45.1|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_clean|2620|52576|97.9|1.9|0.2|0.3|2.4|29.3|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_other|2939|52343|94.9|4.3|0.7|0.5|5.6|47.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.loss.ave_10best/dev_clean|2703|288456|99.4|0.4|0.3|0.2|0.9|31.5|
|decode_asr_model_valid.loss.ave_10best/dev_other|2864|265951|97.7|1.4|0.9|0.8|3.0|50.8|
|decode_asr_model_valid.loss.ave_10best/test_clean|2620|281530|99.4|0.4|0.3|0.3|0.9|32.7|
|decode_asr_model_valid.loss.ave_10best/test_other|2939|272758|97.9|1.2|0.9|0.8|2.8|51.8|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_clean|2703|288456|99.4|0.3|0.3|0.2|0.8|28.2|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_other|2864|265951|97.9|1.1|1.0|0.6|2.7|45.1|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_clean|2620|281530|99.4|0.3|0.3|0.2|0.9|29.3|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_other|2939|272758|98.1|0.9|1.0|0.6|2.5|47.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.loss.ave_10best/dev_clean|2703|68010|97.2|2.1|0.7|0.4|3.3|31.5|
|decode_asr_model_valid.loss.ave_10best/dev_other|2864|63110|92.7|5.6|1.7|1.2|8.6|50.8|
|decode_asr_model_valid.loss.ave_10best/test_clean|2620|65818|97.0|2.2|0.9|0.4|3.4|32.7|
|decode_asr_model_valid.loss.ave_10best/test_other|2939|65101|93.0|5.1|1.9|1.0|8.0|51.8|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_clean|2703|68010|97.5|1.8|0.8|0.4|2.9|28.2|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_other|2864|63110|93.5|4.5|1.9|0.9|7.4|45.1|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_clean|2620|65818|97.3|1.9|0.8|0.4|3.0|29.3|
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_other|2939|65101|93.9|4.1|1.9|0.8|6.9|47.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/transducer/train_conformer-rnn_transducer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_conformer-rnn_transducer_raw_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 46179
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 25
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_960_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0015
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ▁I
- ▁HE
- ▁THAT
- ▁WAS
- ED
- ▁IT
- ''''
- ▁HIS
- ING
- ▁YOU
- ▁WITH
- ▁FOR
- ▁HAD
- T
- ▁AS
- ▁HER
- ▁IS
- ▁BE
- ▁BUT
- ▁NOT
- ▁SHE
- D
- ▁AT
- ▁ON
- LY
- ▁HIM
- ▁THEY
- ▁ALL
- ▁HAVE
- ▁BY
- ▁SO
- ▁THIS
- ▁MY
- ▁WHICH
- ▁ME
- ▁SAID
- ▁FROM
- ▁ONE
- Y
- E
- ▁WERE
- ▁WE
- ▁NO
- N
- ▁THERE
- ▁OR
- ER
- ▁AN
- ▁WHEN
- ▁ARE
- ▁THEIR
- ▁WOULD
- ▁IF
- ▁WHAT
- ▁THEM
- ▁WHO
- ▁OUT
- M
- ▁DO
- ▁WILL
- ▁UP
- ▁BEEN
- P
- R
- ▁MAN
- ▁THEN
- ▁COULD
- ▁MORE
- C
- ▁INTO
- ▁NOW
- ▁VERY
- ▁YOUR
- ▁SOME
- ▁LITTLE
- ES
- ▁TIME
- RE
- ▁CAN
- ▁LIKE
- LL
- ▁ABOUT
- ▁HAS
- ▁THAN
- ▁DID
- ▁UPON
- ▁OVER
- IN
- ▁ANY
- ▁WELL
- ▁ONLY
- B
- ▁SEE
- ▁GOOD
- ▁OTHER
- ▁TWO
- L
- ▁KNOW
- ▁GO
- ▁DOWN
- ▁BEFORE
- A
- AL
- ▁OUR
- ▁OLD
- ▁SHOULD
- ▁MADE
- ▁AFTER
- ▁GREAT
- ▁DAY
- ▁MUST
- ▁COME
- ▁HOW
- ▁SUCH
- ▁CAME
- LE
- ▁WHERE
- ▁US
- ▁NEVER
- ▁THESE
- ▁MUCH
- ▁DE
- ▁MISTER
- ▁WAY
- G
- ▁S
- ▁MAY
- ATION
- ▁LONG
- OR
- ▁AM
- ▁FIRST
- ▁BACK
- ▁OWN
- ▁RE
- ▁AGAIN
- ▁SAY
- ▁MEN
- ▁WENT
- ▁HIMSELF
- ▁HERE
- NESS
- ▁THINK
- V
- IC
- ▁EVEN
- ▁THOUGHT
- ▁HAND
- ▁JUST
- ▁O
- ▁UN
- VE
- ION
- ▁ITS
- 'ON'
- ▁MAKE
- ▁MIGHT
- ▁TOO
- K
- ▁AWAY
- ▁LIFE
- TH
- ▁WITHOUT
- ST
- ▁THROUGH
- ▁MOST
- ▁TAKE
- ▁DON
- ▁EVERY
- F
- O
- ▁SHALL
- ▁THOSE
- ▁EYES
- AR
- ▁STILL
- ▁LAST
- ▁HOUSE
- ▁HEAD
- ABLE
- ▁NOTHING
- ▁NIGHT
- ITY
- ▁LET
- ▁MANY
- ▁OFF
- ▁BEING
- ▁FOUND
- ▁WHILE
- EN
- ▁SAW
- ▁GET
- ▁PEOPLE
- ▁FACE
- ▁YOUNG
- CH
- ▁UNDER
- ▁ONCE
- ▁TELL
- AN
- ▁THREE
- ▁PLACE
- ▁ROOM
- ▁YET
- ▁SAME
- IL
- US
- U
- ▁FATHER
- ▁RIGHT
- EL
- ▁THOUGH
- ▁ANOTHER
- LI
- RI
- ▁HEART
- IT
- ▁PUT
- ▁TOOK
- ▁GIVE
- ▁EVER
- ▁E
- ▁PART
- ▁WORK
- ERS
- ▁LOOK
- ▁NEW
- ▁KING
- ▁MISSUS
- ▁SIR
- ▁LOVE
- ▁MIND
- ▁LOOKED
- W
- RY
- ▁ASKED
- ▁LEFT
- ET
- ▁LIGHT
- CK
- ▁DOOR
- ▁MOMENT
- RO
- ▁WORLD
- ▁THINGS
- ▁HOME
- UL
- ▁THING
- LA
- ▁WHY
- ▁MOTHER
- ▁ALWAYS
- ▁FAR
- FUL
- ▁WATER
- CE
- IVE
- UR
- ▁HEARD
- ▁SOMETHING
- ▁SEEMED
- I
- LO
- ▁BECAUSE
- OL
- ▁END
- ▁TOLD
- ▁CON
- ▁YES
- ▁GOING
- ▁GOT
- RA
- IR
- ▁WOMAN
- ▁GOD
- EST
- TED
- ▁FIND
- ▁KNEW
- ▁SOON
- ▁EACH
- ▁SIDE
- H
- TON
- MENT
- ▁OH
- NE
- Z
- LING
- ▁AGAINST
- TER
- ▁NAME
- ▁MISS
- ▁QUITE
- ▁WANT
- ▁YEARS
- ▁FEW
- ▁BETTER
- ENT
- ▁HALF
- ▁DONE
- ▁ALSO
- ▁BEGAN
- ▁HAVING
- ▁ENOUGH
- IS
- ▁LADY
- ▁WHOLE
- LESS
- ▁BOTH
- ▁SEEN
- ▁SET
- ▁WHITE
- ▁COURSE
- IES
- ▁VOICE
- ▁CALLED
- ▁D
- ▁EX
- ATE
- ▁TURNED
- ▁GAVE
- ▁C
- ▁POOR
- MAN
- UT
- NA
- ▁DEAR
- ISH
- ▁GIRL
- ▁MORNING
- ▁BETWEEN
- LED
- ▁NOR
- IA
- ▁AMONG
- MA
- ▁
- ▁SMALL
- ▁REST
- ▁WHOM
- ▁FELT
- ▁HANDS
- ▁MYSELF
- ▁HIGH
- ▁M
- ▁HOWEVER
- ▁HERSELF
- ▁P
- CO
- ▁STOOD
- ID
- ▁KIND
- ▁HUNDRED
- AS
- ▁ROUND
- ▁ALMOST
- TY
- ▁SINCE
- ▁G
- AM
- ▁LA
- SE
- ▁BOY
- ▁MA
- ▁PERHAPS
- ▁WORDS
- ATED
- ▁HO
- X
- ▁MO
- ▁SAT
- ▁REPLIED
- ▁FOUR
- ▁ANYTHING
- ▁TILL
- ▁UNTIL
- ▁BLACK
- TION
- ▁CRIED
- RU
- TE
- ▁FACT
- ▁HELP
- ▁NEXT
- ▁LOOKING
- ▁DOES
- ▁FRIEND
- ▁LAY
- ANCE
- ▁POWER
- ▁BROUGHT
- VER
- ▁FIRE
- ▁KEEP
- PO
- FF
- ▁COUNTRY
- ▁SEA
- ▁WORD
- ▁CAR
- ▁DAYS
- ▁TOGETHER
- ▁IMP
- ▁REASON
- KE
- ▁INDEED
- TING
- ▁MATTER
- ▁FULL
- ▁TEN
- TIC
- ▁LAND
- ▁RATHER
- ▁AIR
- ▁HOPE
- ▁DA
- ▁OPEN
- ▁FEET
- ▁EN
- ▁FIVE
- ▁POINT
- ▁CO
- OM
- ▁LARGE
- ▁B
- ▁CL
- ME
- ▁GONE
- ▁CHILD
- INE
- GG
- ▁BEST
- ▁DIS
- UM
- ▁HARD
- ▁LORD
- OUS
- ▁WIFE
- ▁SURE
- ▁FORM
- DE
- ▁DEATH
- ANT
- ▁NATURE
- ▁BA
- ▁CARE
- ▁BELIEVE
- PP
- ▁NEAR
- ▁RO
- ▁RED
- ▁WAR
- IE
- ▁SPEAK
- ▁FEAR
- ▁CASE
- ▁TAKEN
- ▁ALONG
- ▁CANNOT
- ▁HEAR
- ▁THEMSELVES
- CI
- ▁PRESENT
- AD
- ▁MASTER
- ▁SON
- ▁THUS
- ▁LI
- ▁LESS
- ▁SUN
- ▁TRUE
- IM
- IOUS
- ▁THOUSAND
- ▁MONEY
- ▁W
- ▁BEHIND
- ▁CHILDREN
- ▁DOCTOR
- AC
- ▁TWENTY
- ▁WISH
- ▁SOUND
- ▁WHOSE
- ▁LEAVE
- ▁ANSWERED
- ▁THOU
- ▁DUR
- ▁HA
- ▁CERTAIN
- ▁PO
- ▁PASSED
- GE
- TO
- ▁ARM
- ▁LO
- ▁STATE
- ▁ALONE
- TA
- ▁SHOW
- ▁NEED
- ▁LIVE
- ND
- ▁DEAD
- ENCE
- ▁STRONG
- ▁PRE
- ▁TI
- ▁GROUND
- SH
- TI
- ▁SHORT
- IAN
- UN
- ▁PRO
- ▁HORSE
- MI
- ▁PRINCE
- ARD
- ▁FELL
- ▁ORDER
- ▁CALL
- AT
- ▁GIVEN
- ▁DARK
- ▁THEREFORE
- ▁CLOSE
- ▁BODY
- ▁OTHERS
- ▁SENT
- ▁SECOND
- ▁OFTEN
- ▁CA
- ▁MANNER
- MO
- NI
- ▁BRING
- ▁QUESTION
- ▁HOUR
- ▁BO
- AGE
- ▁ST
- ▁TURN
- ▁TABLE
- ▁GENERAL
- ▁EARTH
- ▁BED
- ▁REALLY
- ▁SIX
- 'NO'
- IST
- ▁BECOME
- ▁USE
- ▁READ
- ▁SE
- ▁VI
- ▁COMING
- ▁EVERYTHING
- ▁EM
- ▁ABOVE
- ▁EVENING
- ▁BEAUTIFUL
- ▁FEEL
- ▁RAN
- ▁LEAST
- ▁LAW
- ▁ALREADY
- ▁MEAN
- ▁ROSE
- WARD
- ▁ITSELF
- ▁SOUL
- ▁SUDDENLY
- ▁AROUND
- RED
- ▁ANSWER
- ICAL
- ▁RA
- ▁WIND
- ▁FINE
- ▁WON
- ▁WHETHER
- ▁KNOWN
- BER
- NG
- ▁TA
- ▁CAPTAIN
- ▁EYE
- ▁PERSON
- ▁WOMEN
- ▁SORT
- ▁ASK
- ▁BROTHER
- ▁USED
- ▁HELD
- ▁BIG
- ▁RETURNED
- ▁STRANGE
- ▁BU
- ▁PER
- ▁FREE
- ▁EITHER
- ▁WITHIN
- ▁DOUBT
- ▁YEAR
- ▁CLEAR
- ▁SIGHT
- ▁GRA
- ▁LOST
- ▁KEPT
- ▁F
- PE
- ▁BAR
- ▁TOWN
- ▁SLEEP
- ARY
- ▁HAIR
- ▁FRIENDS
- ▁DREAM
- ▁FELLOW
- PER
- ▁DEEP
- QUE
- ▁BECAME
- ▁REAL
- ▁PAST
- ▁MAKING
- RING
- ▁COMP
- ▁ACT
- ▁BAD
- HO
- STER
- ▁YE
- ▁MEANS
- ▁RUN
- MEN
- ▁DAUGHTER
- ▁SENSE
- ▁CITY
- ▁SOMETIMES
- ▁TOWARDS
- ▁ROAD
- ▁SP
- ▁LU
- ▁READY
- ▁FOOT
- ▁COLD
- ▁SA
- ▁LETTER
- ▁ELSE
- ▁MAR
- ▁STA
- BE
- ▁TRUTH
- ▁LE
- BO
- ▁BUSINESS
- CHE
- ▁JOHN
- ▁SUBJECT
- ▁COURT
- ▁IDEA
- ILY
- ▁RIVER
- ATING
- ▁FAMILY
- HE
- ▁DIDN
- ▁GLAD
- ▁SEVERAL
- IAL
- ▁UNDERSTAND
- ▁SC
- ▁POSSIBLE
- ▁DIFFERENT
- ▁RETURN
- ▁ARMS
- ▁LOW
- ▁HOLD
- ▁TALK
- ▁RU
- ▁WINDOW
- ▁INTEREST
- ▁SISTER
- SON
- ▁SH
- ▁BLOOD
- ▁SAYS
- ▁CAP
- ▁DI
- ▁HUMAN
- ▁CAUSE
- NCE
- ▁THANK
- ▁LATE
- GO
- ▁CUT
- ▁ACROSS
- ▁STORY
- NT
- ▁COUNT
- ▁ABLE
- DY
- LEY
- ▁NUMBER
- ▁STAND
- ▁CHURCH
- ▁THY
- ▁SUPPOSE
- LES
- BLE
- OP
- ▁EFFECT
- BY
- ▁K
- ▁NA
- ▁SPOKE
- ▁MET
- ▁GREEN
- ▁HUSBAND
- ▁RESPECT
- ▁PA
- ▁FOLLOWED
- ▁REMEMBER
- ▁LONGER
- ▁AGE
- ▁TAKING
- ▁LINE
- ▁SEEM
- ▁HAPPY
- LAND
- EM
- ▁STAY
- ▁PLAY
- ▁COMMON
- ▁GA
- ▁BOOK
- ▁TIMES
- ▁OBJECT
- ▁SEVEN
- QUI
- DO
- UND
- ▁FL
- ▁PRETTY
- ▁FAIR
- WAY
- ▁WOOD
- ▁REACHED
- ▁APPEARED
- ▁SWEET
- ▁FALL
- BA
- ▁PASS
- ▁SIGN
- ▁TREE
- IONS
- ▁GARDEN
- ▁ILL
- ▁ART
- ▁REMAIN
- ▁OPENED
- ▁BRIGHT
- ▁STREET
- ▁TROUBLE
- ▁PAIN
- ▁CONTINUED
- ▁SCHOOL
- OUR
- ▁CARRIED
- ▁SAYING
- HA
- ▁CHANGE
- ▁FOLLOW
- ▁GOLD
- ▁SW
- ▁FEELING
- ▁COMMAND
- ▁BEAR
- ▁CERTAINLY
- ▁BLUE
- ▁NE
- CA
- ▁WILD
- ▁ACCOUNT
- ▁OUGHT
- UD
- ▁T
- ▁BREATH
- ▁WANTED
- ▁RI
- ▁HEAVEN
- ▁PURPOSE
- ▁CHARACTER
- ▁RICH
- ▁PE
- ▁DRESS
- OS
- FA
- ▁TH
- ▁ENGLISH
- ▁CHANCE
- ▁SHIP
- ▁VIEW
- ▁TOWARD
- AK
- ▁JOY
- ▁JA
- ▁HAR
- ▁NEITHER
- ▁FORCE
- ▁UNCLE
- DER
- ▁PLAN
- ▁PRINCESS
- DI
- ▁CHIEF
- ▁HAT
- ▁LIVED
- ▁AB
- ▁VISIT
- ▁MOR
- TEN
- ▁WALL
- UC
- ▁MINE
- ▁PLEASURE
- ▁SMILE
- ▁FRONT
- ▁HU
- ▁DEAL
- OW
- ▁FURTHER
- GED
- ▁TRIED
- DA
- VA
- ▁NONE
- ▁ENTERED
- ▁QUEEN
- ▁PAY
- ▁EL
- ▁EXCEPT
- ▁SHA
- ▁FORWARD
- ▁EIGHT
- ▁ADDED
- ▁PUBLIC
- ▁EIGHTEEN
- ▁STAR
- ▁HAPPENED
- ▁LED
- ▁WALKED
- ▁ALTHOUGH
- ▁LATER
- ▁SPIRIT
- ▁WALK
- ▁BIT
- ▁MEET
- LIN
- ▁FI
- LT
- ▁MOUTH
- ▁WAIT
- ▁HOURS
- ▁LIVING
- ▁YOURSELF
- ▁FAST
- ▁CHA
- ▁HALL
- ▁BEYOND
- ▁BOAT
- ▁SECRET
- ENS
- ▁CHAIR
- RN
- ▁RECEIVED
- ▁CAT
- RESS
- ▁DESIRE
- ▁GENTLEMAN
- UGH
- ▁LAID
- EVER
- ▁OCCASION
- ▁WONDER
- ▁GU
- ▁PARTY
- DEN
- ▁FISH
- ▁SEND
- ▁NEARLY
- ▁TRY
- CON
- ▁SEEMS
- RS
- ▁BELL
- ▁BRA
- ▁SILENCE
- IG
- ▁GUARD
- ▁DIE
- ▁DOING
- ▁TU
- ▁COR
- ▁EARLY
- ▁BANK
- ▁FIGURE
- IF
- ▁ENGLAND
- ▁MARY
- ▁AFRAID
- LER
- ▁FO
- ▁WATCH
- ▁FA
- ▁VA
- ▁GRE
- ▁AUNT
- PED
- ▁SERVICE
- ▁JE
- ▁PEN
- ▁MINUTES
- ▁PAN
- ▁TREES
- NED
- ▁GLASS
- ▁TONE
- ▁PLEASE
- ▁FORTH
- ▁CROSS
- ▁EXCLAIMED
- ▁DREW
- ▁EAT
- ▁AH
- ▁GRAVE
- ▁CUR
- PA
- URE
- CENT
- ▁MILES
- ▁SOFT
- ▁AGO
- ▁POSITION
- ▁WARM
- ▁LENGTH
- ▁NECESSARY
- ▁THINKING
- ▁PICTURE
- ▁PI
- SHIP
- IBLE
- ▁HEAVY
- ▁ATTENTION
- ▁DOG
- ABLY
- ▁STANDING
- ▁NATURAL
- ▁APPEAR
- OV
- ▁CAUGHT
- VO
- ISM
- ▁SPRING
- ▁EXPERIENCE
- ▁PAT
- OT
- ▁STOPPED
- ▁REGARD
- ▁HARDLY
- ▁SELF
- ▁STRENGTH
- ▁GREW
- ▁KNIGHT
- ▁OPINION
- ▁WIDE
- ▁INSTEAD
- ▁SOUTH
- ▁TRANS
- ▁CORNER
- ▁LEARN
- ▁ISLAND
- ▁MI
- ▁THIRD
- ▁STE
- ▁STRAIGHT
- ▁TEA
- ▁BOUND
- ▁SEEING
- ▁JU
- ▁DINNER
- ▁BEAUTY
- ▁PEACE
- AH
- ▁REP
- ▁SILENT
- ▁CRE
- ALLY
- RIC
- ▁STEP
- ▁VER
- ▁JO
- GER
- ▁SITTING
- ▁THIRTY
- ▁SAVE
- ENED
- ▁GLANCE
- ▁REACH
- ▁ACTION
- ▁SAL
- ▁SAD
- ▁STONE
- ITIES
- ▁FRENCH
- ▁STRUCK
- ▁PAPER
- ▁WHATEVER
- ▁SUB
- ▁DISTANCE
- ▁WRONG
- ▁KNOWLEDGE
- ▁SAFE
- ▁SNOW
- ▁MUSIC
- ▁FIFTY
- RON
- ▁ATTEMPT
- ▁GOVERNMENT
- TU
- ▁CROWD
- ▁BESIDES
- ▁LOVED
- ▁BOX
- ▁DIRECTION
- ▁TRAIN
- ▁NORTH
- ▁THICK
- ▁GETTING
- AV
- ▁FLOOR
- ▁COMPANY
- ▁BLOW
- ▁PLAIN
- TRO
- ▁BESIDE
- ▁ROCK
- ▁IMMEDIATELY
- FI
- ▁SHADOW
- ▁SIT
- ORS
- ILE
- ▁DRINK
- ▁SPOT
- ▁DANGER
- ▁AL
- ▁SAINT
- ▁SLOWLY
- ▁PALACE
- IER
- ▁RESULT
- ▁PETER
- ▁FOREST
- ▁BELONG
- ▁SU
- ▁PAR
- RIS
- ▁TEARS
- ▁APPEARANCE
- ▁GATE
- BU
- ITION
- ▁QUICKLY
- ▁QUIET
- ▁LONDON
- ▁START
- ▁BROWN
- TRA
- KIN
- ▁CONSIDER
- ▁BATTLE
- ▁ANNE
- ▁PIECE
- ▁DIED
- ▁SUCCESS
- ▁LIPS
- ▁FILLED
- ▁FORGET
- ▁POST
- IFIED
- ▁MARGARET
- ▁FOOD
- HAM
- ▁PLEASANT
- ▁FE
- ▁EXPRESSION
- ▁POCKET
- ▁FRESH
- ▁WEAR
- TRI
- ▁BROKEN
- ▁LAUGHED
- GING
- ▁FOLLOWING
- WN
- IP
- ▁TOUCH
- ▁YOUTH
- ATIVE
- ▁LEG
- ▁WEEK
- ▁REMAINED
- ▁EASY
- NER
- RK
- ▁ENTER
- ▁FIGHT
- ▁PLACED
- ▁TRAVEL
- ▁SIMPLE
- ▁GIRLS
- ▁WAITING
- ▁STOP
- ▁WAVE
- AU
- ▁WISE
- ▁CAMP
- TURE
- UB
- ▁VE
- ▁OFFICE
- ▁GRAND
- ▁FIT
- ▁JUDGE
- UP
- MENTS
- ▁QUICK
- HI
- ▁FLO
- RIES
- VAL
- ▁COMFORT
- ▁PARTICULAR
- ▁STARTED
- ▁SUIT
- ▁NI
- ▁PALE
- ▁IMPOSSIBLE
- ▁HOT
- ▁CONVERSATION
- ▁SCENE
- ▁BOYS
- ▁WIN
- ▁BRE
- ▁SOCIETY
- ▁OUTSIDE
- ▁WRITE
- ▁EFFORT
- ▁TALKING
- ▁FORTUNE
- ▁NINE
- ▁WA
- ▁SINGLE
- ▁RULE
- ▁PORT
- ▁WINTER
- ▁CAST
- ▁CRA
- ▁HAPPEN
- ▁CRO
- ▁SHUT
- NING
- ▁GUN
- ▁NOBLE
- ▁BEGIN
- ▁PATH
- ▁SKY
- ▁WONDERFUL
- ▁SUDDEN
- ▁ARMY
- ▁CHE
- ▁WORTH
- ▁MOUNTAIN
- ▁MIN
- AG
- ▁FLU
- ▁GRACE
- ▁CHAPTER
- ▁BELOW
- ▁RING
- ▁TURNING
- ▁IRON
- ▁TOP
- ▁AFTERNOON
- ORY
- ▁EVIL
- ▁TRUST
- ▁BOW
- ▁TRI
- ▁SAIL
- ▁CONTENT
- ▁HORSES
- ITE
- ▁SILVER
- AP
- ▁LAD
- ▁RUNNING
- ▁HILL
- ▁BEGINNING
- ▁MAD
- ▁HABIT
- GRA
- ▁CLOTHES
- ▁MORROW
- ▁CRY
- ▁FASHION
- ▁PRESENCE
- ▁Z
- FE
- ▁ARRIVED
- ▁QUARTER
- ▁PERFECT
- ▁WO
- ▁TRA
- ▁USUAL
- ▁NECK
- ▁MARRIED
- ▁SEAT
- ▁WI
- ▁GAR
- ▁SAND
- ▁SHORE
- ▁GIVING
- NY
- ▁PROBABLY
- ▁MINUTE
- ▁EXPECT
- ▁DU
- ▁SHOT
- ▁INSTANT
- ▁DEGREE
- ▁COLOR
- ▁WEST
- RT
- ▁MARCH
- ▁BIRD
- ▁SHOWED
- ▁GREATER
- ▁SERIOUS
- ▁CARRY
- ▁COVERED
- ▁FORMER
- ▁LOUD
- ▁MOVED
- ▁MASS
- ▁SEEK
- ▁CHO
- GEN
- ▁ROMAN
- IB
- ▁MOON
- ▁BOARD
- ▁STREAM
- ▁EASILY
- ▁WISHED
- ▁SEARCH
- ▁COULDN
- ▁MONTHS
- ▁SICK
- LIE
- ▁DUTY
- ▁TWELVE
- ▁FAINT
- ▁STRANGER
- ▁SURPRISE
- ▁KILL
- ▁LEAVING
- ▁JOURNEY
- ▁SCARCELY
- ▁RAISED
- ▁SPEAKING
- ▁TERRIBLE
- ▁TOM
- ▁FIELD
- ▁GAME
- ▁QUA
- ▁PROMISE
- ▁LIE
- ▁CONDITION
- ▁TRO
- ▁PERSONAL
- ▁TALL
- ▁STICK
- ▁THREW
- ▁MARRY
- ▁VAN
- ▁BURN
- ▁ACCORDING
- ▁RISE
- ▁ATTACK
- ▁SWORD
- ▁GUESS
- ▁THOUGHTS
- ▁THIN
- ▁THROW
- ▁CALM
- SIDE
- ▁VILLAGE
- ▁DEN
- ▁ANXIOUS
- ▁MER
- GI
- ▁EXPECTED
- ▁BALL
- ▁ESPECIALLY
- ▁CHARGE
- ▁MEASURE
- ISE
- ▁NICE
- ▁TRYING
- ▁ALLOW
- ▁SHARP
- ▁BREAD
- ▁HONOUR
- ▁HONOR
- ▁ENTIRELY
- ▁BILL
- ▁BRI
- ▁WRITTEN
- ▁AR
- ▁BROKE
- ▁KILLED
- ▁MARK
- ▁VEN
- ▁LADIES
- ▁LEARNED
- ▁FLOWERS
- PLE
- ▁FORTY
- ▁OFFER
- ▁HAPPINESS
- ▁PRAY
- ▁CLASS
- ▁FER
- ▁PRINCIPLE
- GU
- ▁BOOKS
- ▁SHAPE
- ▁SUMMER
- ▁JACK
- ▁DRAW
- ▁GOLDEN
- ▁DECIDED
- ▁LEAD
- ▁UNLESS
- ▁HARM
- ▁LISTEN
- HER
- ▁SHOOK
- ▁INFLUENCE
- ▁PERFECTLY
- ▁MARRIAGE
- ▁BROAD
- ▁ESCAPE
- ▁STATES
- ▁MIDDLE
- ▁PLANT
- ▁MIL
- ▁MOVEMENT
- ▁NOISE
- ▁ENEMY
- ▁HISTORY
- ▁BREAK
- ROUS
- ▁UNDERSTOOD
- ▁LATTER
- FER
- ▁COMES
- ▁MERELY
- ▁SIMPLY
- WI
- ▁IMAGINE
- ▁LOWER
- ▁CONDUCT
- ▁BORN
- WA
- ▁YARD
- ▁KA
- ▁CLOSED
- ▁NOTE
- GA
- ▁STRA
- RAN
- ▁EXIST
- EV
- ▁SPEECH
- ▁BITTER
- JO
- ▁MAKES
- ▁GRASS
- ▁REPLY
- ▁CHANGED
- ▁MON
- ▁LYING
- ▁DANCE
- ▁FINALLY
- ▁AMERICAN
- ▁ENJOY
- ▁CONTAIN
- ▁MEANT
- USE
- ▁OBSERVED
- THER
- ▁LAUGH
- ▁AFTERWARDS
- ▁BEAT
- ▁RACE
- ▁EQUAL
- ▁RAIN
- PS
- ▁STEPS
- ▁BENEATH
- ▁TAIL
- ▁TASTE
- IO
- EY
- ▁CHAR
- ▁GE
- GN
- TIN
- ▁GROW
- ▁TE
- IANS
- ▁MOVE
- ▁REPEATED
- ▁DRIVE
- TUR
- ▁SI
- CLOCK
- ▁BRAVE
- ▁MADAME
- ▁LOT
- ▁CASTLE
- ▁HI
- AND
- ▁FUTURE
- ▁RELATION
- ▁SORRY
- ▁HEALTH
- ▁DICK
- ▁R
- ▁BUILDING
- ▁EDGE
- ▁BLESS
- ▁SPITE
- WE
- ▁MIS
- ▁PRISONER
- ▁ALLOWED
- ▁PH
- ▁CATCH
- MER
- ETH
- ▁COAT
- ▁COMPLETE
- ▁WOULDN
- ▁CREATURE
- ▁YELLOW
- ▁IMPORTANT
- ▁ADD
- ▁PASSING
- ▁DARKNESS
- ▁CARRIAGE
- ▁MILL
- ▁FIFTEEN
- NCY
- ▁HUNG
- ▁OB
- ▁PLEASED
- ▁SPREAD
- ▁CURIOUS
- ▁WORSE
- ▁CIRCUMSTANCES
- ▁GI
- LAR
- ▁CAL
- ▁HY
- ▁MERE
- ▁JANE
- ▁EAST
- BI
- ▁CUP
- ▁BLIND
- ▁PASSION
- ▁DISCOVERED
- ▁NOTICE
- ▁REPORT
- ▁SPACE
- ▁PRESENTLY
- ▁SORROW
- ▁PACK
- ▁DIN
- CY
- ▁DRY
- ▁ANCIENT
- ▁DRESSED
- ▁COVER
- ▁VO
- ▁EXISTENCE
- ▁EXACTLY
- ▁BEAST
- ▁PROPER
- ▁DROPPED
- ▁CLEAN
- ▁COLOUR
- ▁HOST
- ▁CHAMBER
- ▁FAITH
- LET
- ▁DETERMINED
- ▁PRIEST
- ▁STORM
- ▁SKIN
- ▁DARE
- ▁PERSONS
- ▁PICK
- ▁NARROW
- ▁SUPPORT
- ▁PRIVATE
- ▁SMILED
- ▁COUSIN
- ▁DRAWING
- ▁ATTEND
- ▁COOK
- ▁PREVENT
- ▁VARIOUS
- ▁BLA
- ▁FIXED
- ▁WEAK
- THE
- ▁HOLE
- ▁BOTTOM
- ▁NOBODY
- ADE
- ▁LEGS
- ITCH
- ▁INDIVIDUAL
- ▁EARS
- LIKE
- ▁ADVANTAGE
- ▁FRANCE
- ▁BON
- ▁WINE
- ▁LIVES
- OD
- ▁WALLS
- ▁TIRED
- ▁SHOP
- ▁ANIMAL
- ▁CRU
- ▁WROTE
- ▁ROYAL
- ▁CONSIDERED
- ▁MORAL
- ▁COMPANION
- ▁LOSE
- ▁ISN
- ▁BAG
- ▁LAKE
- ▁INTER
- ▁COM
- ▁LETTERS
- ▁LUCK
- ▁EAR
- ▁GERMAN
- ▁PET
- ▁SAKE
- ▁DROP
- ▁PAID
- ▁BREAKFAST
- ▁LABOR
- ▁DESERT
- ▁DECLARED
- ▁HUM
- ▁STUDY
- ▁INSTANCE
- ONE
- ▁SOMEWHAT
- ▁CLOTH
- ▁SPECIAL
- ▁COLONEL
- ▁SONG
- ▁MAIN
- ▁VALUE
- ▁PROUD
- ▁EXPRESS
- ▁NATION
- ▁HANDSOME
- ▁CONFESS
- ▁PU
- ▁PASSAGE
- ▁PERIOD
- ▁CUSTOM
- ▁HURT
- ▁SHOULDER
- ▁CHRIST
- ZA
- ▁RECEIVE
- ▁DIFFICULT
- ▁DEPEND
- ▁MEETING
- ▁CHI
- ▁GEN
- LIGHT
- ▁BELIEVED
- ▁SOCIAL
- ▁DIFFICULTY
- ▁GREATEST
- ▁DRAWN
- ▁GRANT
- ▁BIRDS
- ▁ANGRY
- ▁HEAT
- UFF
- ▁DUE
- ▁PLACES
- ▁SIN
- ▁COURAGE
- ▁EVIDENTLY
- ▁GENTLE
- ▁CRUEL
- ▁GEORGE
- ▁GRI
- ▁SERVANT
- ▁U
- ▁PURE
- OOK
- ▁KNOWS
- ▁KNOWING
- LF
- ▁WRITING
- ▁REMEMBERED
- ▁CU
- ▁HOLDING
- ▁TENDER
- ▁QUI
- ▁BURST
- ▁SURELY
- IGN
- ▁VALLEY
- ▁FU
- ▁BUTTER
- ▁SPOKEN
- ▁STORE
- ▁DISC
- ▁CHRISTIAN
- ▁PARIS
- ▁HENRY
- ▁FINISHED
- ▁PROVE
- ▁FOOL
- ▁SOLDIERS
- ▁LANGUAGE
- ▁INSIDE
- ▁BAN
- ▁FALLEN
- ROW
- ▁MAL
- ▁BABY
- ▁SITUATION
- ▁WATCHED
- ANS
- ▁RUIN
- ▁GENTLEMEN
- ▁FRO
- ▁FANCY
- ▁ACCEPT
- ▁SEASON
- ▁OURSELVES
- ▁SAN
- ▁SPEED
- IZED
- ▁COOL
- ▁SERVE
- ▁VESSEL
- ▁WILLIAM
- ▁OBLIGED
- ▁GROUP
- FORM
- ▁GOES
- UOUS
- ▁LEAVES
- ▁PECULIAR
- ▁NEWS
- ▁VAIN
- ▁EVERYBODY
- ▁PIN
- UG
- ▁FORGOTTEN
- ▁FRA
- GAN
- ▁CAREFULLY
- ▁FLASH
- UCH
- ▁FUR
- ▁MURDER
- ▁DELIGHT
- ▁WAITED
- ▁RENDER
- ▁PROPERTY
- ▁NOTICED
- ▁ROLL
- ▁KNOCK
- ▁EARNEST
- KI
- ▁HONEST
- ▁PROMISED
- ▁BAL
- AW
- ▁WALKING
- ANG
- ▁SQUARE
- ▁QUIETLY
- ▁CLOUD
- WOOD
- ▁FORMED
- ▁HIGHER
- ▁BUILT
- ▁FATE
- ▁TEACH
- MY
- ▁FALSE
- ▁YORK
- ▁DUST
- ▁CLIMB
- ▁FOND
- ▁GROWN
- ▁DESCEND
- ▁RAG
- ▁FRUIT
- ▁GENERALLY
- ▁OFFERED
- ▁ER
- ▁NURSE
- POSE
- ▁SPENT
- ▁JOIN
- ▁STATION
- ▁MEANING
- ▁SMOKE
- HOOD
- ▁ROUGH
- JU
- ▁LIKELY
- ▁SURFACE
- ▁KE
- ▁MONTH
- ▁POSSESSION
- ▁TONGUE
- ▁DUKE
- ▁NOSE
- ▁LAUGHING
- ▁WEATHER
- ▁WHISPERED
- ▁SYSTEM
- ▁LAWS
- DDLE
- ▁TOUCHED
- ▁TRADE
- LD
- ▁SURPRISED
- RIN
- ▁ARCH
- ▁WEALTH
- FOR
- ▁TEMPER
- ▁FRANK
- ▁GAL
- ▁BARE
- ▁OPPORTUNITY
- ▁CLAIM
- ▁ANIMALS
- ▁REV
- ▁COST
- ▁WASH
- ZE
- ▁CORN
- ▁OPPOSITE
- ▁POLICE
- ▁IDEAS
- LON
- ▁KEY
- ▁READING
- ▁COLLECT
- CHED
- ▁H
- ▁CROWN
- ▁TAR
- ▁SWIFT
- ▁SHOULDERS
- ▁ICE
- ▁GRAY
- ▁SHARE
- ▁PREPARED
- ▁GRO
- ▁UND
- ▁TER
- ▁EMPTY
- CING
- ▁SMILING
- ▁AVOID
- ▁DIFFERENCE
- ▁EXPLAIN
- ▁POUR
- ▁ATTRACT
- ▁OPENING
- ▁WHEEL
- ▁MATERIAL
- ▁BREAST
- ▁SUFFERING
- ▁DISTINCT
- ▁BOOT
- ▁ROW
- ▁FINGERS
- HAN
- ▁ALTOGETHER
- ▁FAT
- ▁PAPA
- ▁BRAIN
- ▁ASLEEP
- ▁GREY
- ▁SUM
- ▁GAS
- ▁WINDOWS
- ▁ALIVE
- ▁PROCEED
- ▁FLOWER
- ▁LEAP
- ▁PUR
- ▁PIECES
- ▁ALTER
- ▁MEMORY
- IENT
- ▁FILL
- ▁CLO
- ▁THROWN
- ▁KINGDOM
- ▁RODE
- IUS
- ▁MAID
- ▁DIM
- ▁BAND
- ▁VIRTUE
- ▁DISH
- ▁GUEST
- ▁LOSS
- ▁CAUSED
- ▁MOTION
- ▁POT
- ▁MILLION
- ▁FAULT
- ▁LOVELY
- ▁HERO
- PPING
- ▁UNITED
- ▁SPI
- SOME
- BRA
- ▁MOUNTAINS
- ▁NU
- ▁SATISFIED
- ▁DOLLARS
- ▁LOVER
- ▁CONCEAL
- ▁VAST
- ▁PULL
- ▁HATH
- ▁RUSH
- ▁J
- ▁DESPAIR
- EX
- ▁HEIGHT
- ▁CE
- ▁BENT
- ▁PITY
- ▁RISING
- ATH
- ▁PRIDE
- ▁HURRY
- KA
- ▁SETTLED
- ▁JUSTICE
- ▁LIFTED
- PEN
- ▁SOLDIER
- ▁FINDING
- ▁REMARK
- ▁REGULAR
- ▁STRUGGLE
- ▁MACHINE
- ▁SING
- ▁HURRIED
- ▁SUFFICIENT
- ▁REPRESENT
- ▁DOUBLE
- ▁ALARM
- ▁SUPPER
- ▁DREADFUL
- ▁FORE
- ATOR
- ▁STOCK
- ▁TIN
- ▁EXAMPLE
- ▁ROOF
- ▁FLOW
- ▁SUPPOSED
- ▁PRESERV
- ▁L
- ▁LISTENED
- OC
- ▁STO
- ▁SECURE
- ▁FRIGHTENED
- ▁DISTURB
- ▁EMOTION
- ▁SERVANTS
- ▁YO
- ▁BUY
- ▁FORCED
- ▁KITCHEN
- ▁TERROR
- ▁STAIRS
- ▁SIXTY
- KER
- ▁ORDINARY
- ▁DIRECTLY
- ▁HEADS
- ▁METHOD
- ▁FORGIVE
- ▁AWFUL
- ▁REFLECT
- ▁GREATLY
- ▁TALKED
- ▁RIDE
- STONE
- ▁FAVOUR
- ▁WELCOME
- ▁SEIZED
- OU
- ▁CONTROL
- ▁ORDERED
- ▁ANGEL
- ▁USUALLY
- ▁POET
- ▁BOLD
- LINE
- ▁ADVENTURE
- ▁WATCHING
- ▁FOLK
- ▁MISTRESS
- IZE
- ▁GROWING
- ▁CAVE
- ▁EVIDENCE
- ▁FINGER
- ▁SEVENTEEN
- ▁MOVING
- EOUS
- ▁DOESN
- ▁COW
- ▁TYPE
- ▁BOIL
- ▁TALE
- ▁DELIVER
- ▁FARM
- ▁MONSIEUR
- ▁GATHERED
- ▁FEELINGS
- ▁RATE
- ▁REMARKED
- ▁PUTTING
- ▁MAT
- ▁CONTRARY
- ▁CRIME
- ▁PLA
- ▁COL
- ▁NEARER
- TES
- ▁CIVIL
- ▁SHAME
- ▁LOOSE
- ▁DISCOVER
- ▁FLAT
- ▁TWICE
- ▁FAIL
- VIS
- ▁UNC
- EA
- ▁EUROPE
- ▁PATIENT
- ▁UNTO
- ▁SUFFER
- ▁PAIR
- ▁TREASURE
- OSE
- ▁EAGER
- ▁FLY
- ▁N
- ▁VAL
- ▁DAN
- ▁SALT
- ▁BORE
- BBE
- ▁ARTHUR
- ▁AFFAIRS
- ▁SLOW
- ▁CONSIST
- ▁DEVIL
- LAN
- ▁AFFECTION
- ▁ENGAGED
- ▁KISS
- ▁YA
- ▁OFFICER
- IFICATION
- ▁LAMP
- ▁PARTS
- HEN
- ▁MILK
- ▁PROCESS
- ▁GIFT
- ▁PULLED
- ▁HID
- ▁RAY
- ▁EXCELLENT
- ▁IMPRESSION
- ▁AUTHORITY
- ▁PROVED
- ▁TELLING
- TTE
- ▁TOWER
- ▁CONSEQUENCE
- ▁FAVOR
- ▁FLEW
- ▁CHARLES
- ISTS
- ▁ADDRESS
- ▁FAMILIAR
- ▁LIMIT
- ▁CONFIDENCE
- ▁RARE
- ▁WEEKS
- ▁WOODS
- ▁INTENTION
- ▁DIRECT
- ▁PERFORM
- ▁SOLEMN
- ▁DISTANT
- ▁IMAGE
- ▁PRESIDENT
- ▁FIRM
- ▁INDIAN
- ▁RANK
- ▁LIKED
- ▁AGREE
- ▁HOUSES
- ▁WIL
- ▁MATTERS
- ▁PRISON
- ▁MODE
- ▁MAJOR
- ▁WORKING
- ▁SLIP
- ▁WEIGHT
- ▁AWARE
- ▁BUSY
- ▁LOOKS
- ▁WOUND
- ▁THOR
- ▁BATH
- ▁EXERCISE
- ▁SIMILAR
- ▁WORE
- ▁AMOUNT
- ▁QUESTIONS
- ▁VIOLENT
- ▁EXCUSE
- ▁ASIDE
- ▁TUR
- ▁DULL
- OF
- ▁EMPEROR
- ▁NEVERTHELESS
- ▁SHOUT
- ▁EXPLAINED
- ▁SIZE
- ▁ACCOMPLISH
- FORD
- CAN
- ▁MISTAKE
- ▁INSTANTLY
- ▁SMOOTH
- ▁STRIKE
- ▁BOB
- ISED
- ▁HORROR
- ▁SCIENCE
- ▁PROTEST
- ▁MANAGE
- ▁OBEY
- ▁NECESSITY
- ▁SPLENDID
- ▁PRESS
- ▁INTERESTING
- ▁RELIGION
- ▁UNKNOWN
- ▁FIERCE
- ▁DISAPPEARED
- ▁HOLY
- ▁HATE
- ▁PLAYED
- ▁LIN
- ▁NATURALLY
- ▁DROVE
- ▁LOUIS
- TIES
- ▁BRAND
- INESS
- RIE
- ▁SHOOT
- ▁CONSENT
- ▁SEATED
- ▁LINES
- GUE
- ▁AGREED
- ▁CIRCLE
- ▁STIR
- ▁STREETS
- ▁TASK
- ▁RID
- ▁PRODUCED
- ▁ACCIDENT
- ▁WITNESS
- ▁LIBERTY
- ▁DETAIL
- ▁MINISTER
- ▁POWERFUL
- ▁SAVAGE
- ▁SIXTEEN
- ▁PRETEND
- ▁COAST
- ▁SQU
- ▁UTTER
- ▁NAMED
- ▁CLEVER
- ▁ADMIT
- ▁COUPLE
- ▁WICKED
- ▁MESSAGE
- ▁TEMPLE
- ▁STONES
- ▁YESTERDAY
- ▁HILLS
- DAY
- ▁SLIGHT
- ▁DIAMOND
- ▁POSSIBLY
- ▁AFFAIR
- ▁ORIGINAL
- ▁HEARING
- ▁WORTHY
- ▁SELL
- NEY
- ICK
- ▁COTTAGE
- ▁SACRIFICE
- ▁PROGRESS
- ▁SHOCK
- ▁DESIGN
- ▁SOUGHT
- ▁PIT
- ▁SUNDAY
- ▁OTHERWISE
- ▁CABIN
- ▁PRAYER
- ▁DWELL
- ▁GAIN
- ▁BRIDGE
- ▁PARTICULARLY
- ▁YIELD
- ▁TREAT
- RIGHT
- ▁OAK
- ▁ROPE
- WIN
- ▁ORDERS
- ▁SUSPECT
- ▁EDWARD
- AB
- ▁ELEVEN
- ▁TEETH
- ▁OCCURRED
- DDING
- ▁AMERICA
- ▁FALLING
- ▁LION
- ▁DEPART
- ▁KEEPING
- ▁DEMAND
- ▁PAUSED
- ▁CEASED
- INA
- ▁FUN
- ▁CHEER
- ▁PARDON
- ▁NATIVE
- LUS
- LOW
- ▁DOGS
- ▁REQUIRED
- ILITY
- ▁ELECT
- ▁ENTERTAIN
- ITUDE
- ▁HUGE
- ▁CARRYING
- ▁BLU
- ▁INSIST
- ▁SATISFACTION
- ▁HUNT
- ▁COUNTENANCE
- ▁UPPER
- ▁MAIDEN
- ▁FAILED
- ▁JAMES
- ▁FOREIGN
- ▁GATHER
- ▁TEST
- BOARD
- ▁TERMS
- ▁SILK
- ▁BEG
- ▁BROTHERS
- ▁PAGE
- ▁KNEES
- ▁SHOWN
- ▁PROFESSOR
- ▁MIGHTY
- ▁DEFI
- ▁CHARM
- ▁REQUIRE
- ▁LOG
- MORE
- ▁PROOF
- ▁POSSESSED
- ▁SOFTLY
- ▁UNFORTUNATE
- ▁PRICE
- ▁SEVERE
- ▁SINGING
- ▁STAGE
- ▁FREEDOM
- ▁SHOUTED
- ▁FARTHER
- ▁MAJESTY
- ▁PREVIOUS
- ▁GUIDE
- ▁MATCH
- ▁CHEST
- ▁INTENDED
- ▁BI
- ▁EXCITEMENT
- ▁OFFICERS
- ▁SUR
- ▁SHAKE
- ▁SENTIMENT
- ▁GENTLY
- ▁SUCCEEDED
- ▁MENTION
- ▁LOCK
- ▁ACQUAINTANCE
- ▁IMAGINATION
- ▁PHYSICAL
- ▁LEADING
- ▁SLAVE
- ▁CART
- ▁POINTED
- ▁STEAM
- ▁SHADE
- ▁PIPE
- ▁BASE
- ▁INVENT
- ▁ALAS
- ▁WORKED
- ▁REGRET
- ▁BUR
- ▁FAITHFUL
- ▁MENTIONED
- ▁RECORD
- ▁COMPLAIN
- ▁SUPERIOR
- ▁BAY
- ▁PAL
- EMENT
- UE
- ▁SEVENTY
- ▁HOTEL
- ▁SHEEP
- ▁MEAL
- ▁ADVICE
- ▁HIDDEN
- ▁DEMANDED
- ▁CONSCIOUS
- ▁BROW
- ▁POSSESS
- ▁FOURTH
- ▁EVENTS
- ▁FRI
- ▁PRAISE
- ▁ADVANCED
- ▁RESOLVED
- ▁STUFF
- ▁CHEERFUL
- ▁BIRTH
- ▁GRIEF
- ▁AFFORD
- ▁FAIRY
- ▁WAKE
- ▁SIDES
- ▁SUBSTANCE
- ▁ARTICLE
- ▁LEVEL
- ▁MIST
- ▁JOINED
- ▁PRACTICAL
- ▁CLEARLY
- ▁TRACE
- ▁AWAKE
- ▁OBSERVE
- ▁BASKET
- ▁LACK
- VILLE
- ▁SPIRITS
- ▁EXCITED
- ▁ABANDON
- ▁SHINING
- ▁FULLY
- ▁CALLING
- ▁CONSIDERABLE
- ▁SPRANG
- ▁MILE
- ▁DOZEN
- ▁PEA
- ▁DANGEROUS
- ▁WIT
- ▁JEW
- ▁POUNDS
- ▁FOX
- ▁INFORMATION
- ▁LIES
- ▁DECK
- NNY
- ▁PAUL
- ▁STARS
- ▁ANGER
- ▁SETTLE
- ▁WILLING
- ▁ADAM
- ▁FACES
- ▁SMITH
- ▁IMPORTANCE
- ▁STRAIN
- WAR
- ▁SAM
- ▁FEATHER
- ▁SERVED
- ▁AUTHOR
- ▁PERCEIVED
- ▁FLAME
- ▁DIVINE
- ▁TRAIL
- ▁ANYBODY
- ▁SIGH
- ▁DELICATE
- KY
- ▁FOLD
- ▁HAVEN
- ▁DESIRED
- ▁CURIOSITY
- ▁PRACTICE
- ▁CONSIDERATION
- ▁ABSOLUTELY
- ▁CITIZEN
- ▁BOTTLE
- ▁INTERESTED
- ▁MEAT
- ▁OCCUPIED
- ▁CHOOSE
- ▁THROAT
- ETTE
- ▁CANDLE
- ▁DAWN
- ▁PROTECT
- ▁SENTENCE
- IED
- ▁ROCKS
- ▁PORTION
- ▁APPARENTLY
- ▁PRESENTED
- ▁TIGHT
- ▁ACTUALLY
- ▁DYING
- ▁HAM
- ▁DAILY
- ▁SUFFERED
- ▁POLITICAL
- ▁BODIES
- ▁MODERN
- ▁COMPLETELY
- ▁SOONER
- TAN
- ▁PROP
- ▁ADVANCE
- ▁REFUSED
- ▁FARMER
- ▁POLITE
- ▁THUNDER
- ▁BRIEF
- ▁ELSIE
- ▁SAILOR
- ▁SUGGESTED
- ▁PLATE
- ▁AID
- ▁FLESH
- ▁WEEP
- ▁BUCK
- ▁ANTI
- ▁OCEAN
- ▁SPEND
- WELL
- ▁ODD
- ▁GOVERNOR
- ▁ENTRANCE
- ▁SUSPICION
- ▁STEPPED
- ▁RAPIDLY
- ▁CHECK
- ▁HIDE
- ▁FLIGHT
- ▁CLUB
- ▁ENTIRE
- ▁INDIANS
- ASH
- ▁CAPITAL
- ▁MAMMA
- HAR
- ▁CORRECT
- ▁CRACK
- ▁SENSATION
- ▁WORST
- ▁PACE
- ▁MIDST
- ▁AUGUST
- ▁PROPORTION
- ▁INNOCENT
- LINESS
- ▁REGARDED
- ▁DRIVEN
- ORD
- ▁HASTE
- ▁EDUCATION
- ▁EMPLOY
- ▁TRULY
- ▁INSTRUMENT
- ▁MAG
- ▁FRAME
- ▁FOOLISH
- ▁TAUGHT
- ▁HANG
- ▁ARGUMENT
- ▁NINETEEN
- ▁ELDER
- ▁NAY
- ▁NEEDED
- ▁NEIGHBOR
- ▁INSTRUCT
- ▁PAPERS
- ▁REWARD
- ▁EQUALLY
- ▁FIELDS
- ▁DIG
- HIN
- ▁CONDITIONS
- JA
- ▁SPAR
- ▁REQUEST
- ▁WORN
- ▁REMARKABLE
- ▁LOAD
- ▁WORSHIP
- ▁PARK
- ▁KI
- ▁INTERRUPTED
- ▁SKILL
- ▁TERM
- LAC
- ▁CRITIC
- ▁DISTRESS
- ▁BELIEF
- ▁STERN
- IGHT
- ▁TRACK
- ▁HUNTING
- ▁JEWEL
- ▁GRADUALLY
- ▁GLOW
- ▁RUSHED
- ▁MENTAL
- ▁VISITOR
- ▁PICKED
- ▁BEHOLD
- ▁EXPRESSED
- ▁RUB
- ▁SKI
- ARTAGNAN
- ▁MOREOVER
- ▁OPERATION
- ▁CAREFUL
- ▁KEEN
- ▁ASSERT
- ▁WANDER
- ▁ENEMIES
- ▁MYSTERIOUS
- ▁DEPTH
- ▁PREFER
- ▁CROSSED
- ▁CHARMING
- ▁DREAD
- ▁FLOUR
- ▁ROBIN
- ▁TRE
- ▁RELIEF
- ▁INQUIRED
- ▁APPLE
- ▁HENCE
- ▁WINGS
- ▁CHOICE
- ▁JUD
- OO
- ▁SPECIES
- ▁DELIGHTED
- IUM
- ▁RAPID
- ▁APPEAL
- ▁FAMOUS
- ▁USEFUL
- ▁HELEN
- ▁NEWSPAPER
- ▁PLENTY
- ▁BEARING
- ▁NERVOUS
- ▁PARA
- ▁URGE
- ▁ROAR
- ▁WOUNDED
- ▁CHAIN
- ▁PRODUCE
- ▁REFLECTION
- ▁MERCHANT
- ▁QUARREL
- ▁GLORY
- ▁BEGUN
- ▁BARON
- CUS
- ▁QUEER
- ▁MIX
- ▁GAZE
- ▁WHISPER
- ▁BURIED
- ▁DIV
- ▁CARD
- ▁FREQUENTLY
- ▁TIP
- ▁KNEE
- ▁REGION
- ▁ROOT
- ▁LEST
- ▁JEALOUS
- CTOR
- ▁SAVED
- ▁ASKING
- ▁TRIP
- QUA
- ▁UNION
- HY
- ▁COMPANIONS
- ▁SHIPS
- ▁HALE
- ▁APPROACHED
- ▁HARRY
- ▁DRUNK
- ▁ARRIVAL
- ▁SLEPT
- ▁FURNISH
- HEAD
- ▁PIG
- ▁ABSENCE
- ▁PHIL
- ▁HEAP
- ▁SHOES
- ▁CONSCIOUSNESS
- ▁KINDLY
- ▁EVIDENT
- ▁SCAR
- ▁DETERMIN
- ▁GRASP
- ▁STEAL
- ▁OWE
- ▁KNIFE
- ▁PRECIOUS
- ▁ELEMENT
- ▁PROCEEDED
- ▁FEVER
- ▁LEADER
- ▁RISK
- ▁EASE
- ▁GRIM
- ▁MOUNT
- ▁MEANWHILE
- ▁CENTURY
- OON
- ▁JUDGMENT
- ▁AROSE
- ▁VISION
- ▁SPARE
- ▁EXTREME
- ▁CONSTANT
- ▁OBSERVATION
- ▁THRUST
- ▁DELAY
- ▁CENT
- ▁INCLUD
- ▁LIFT
- ▁ADMIRE
- ▁ISSUE
- ▁FRIENDSHIP
- ▁LESSON
- ▁PRINCIPAL
- ▁MOURN
- ▁ACCEPTED
- ▁BURNING
- ▁CAPABLE
- ▁EXTRAORDINARY
- ▁SANG
- ▁REMOVED
- ▁HOPED
- ▁HORN
- ▁ALICE
- ▁MUD
- ▁APARTMENT
- ▁FIGHTING
- ▁BLAME
- ▁TREMBLING
- ▁SOMEBODY
- ▁ANYONE
- ▁BRIDE
- ▁READER
- ▁ROB
- ▁EVERYWHERE
- ▁LABOUR
- ▁RECALL
- ▁BULL
- ▁HIT
- ▁COUNCIL
- ▁POPULAR
- ▁CHAP
- ▁TRIAL
- ▁DUN
- ▁WISHES
- ▁BRILLIANT
- ▁ASSURED
- ▁FORGOT
- ▁CONTINUE
- ▁ACKNOWLEDG
- ▁RETREAT
- ▁INCREASED
- ▁CONTEMPT
- ▁GRANDFATHER
- ▁SYMPATHY
- ▁GHOST
- ▁STRETCHED
- ▁CREATURES
- ▁CAB
- ▁HIND
- ▁PLAYING
- ▁MISERABLE
- ▁MEMBERS
- ▁KINDNESS
- ▁HIGHEST
- ▁PRIM
- ▁KISSED
- ▁DESERVE
- ▁HUT
- ▁BEGGED
- ▁EIGHTY
- ▁CLOSELY
- ▁WONDERED
- ▁MILITARY
- ▁REMIND
- ▁ACCORDINGLY
- ▁LARGER
- ▁MAINTAIN
- ▁ENGINE
- ▁MOTIVE
- ▁DESTROY
- ▁STRIP
- ▁HANS
- ▁AHEAD
- ▁INFINITE
- ▁PROMPT
- ▁INFORMED
- TTLE
- ▁PEER
- ▁PRESSED
- ▁TRAP
- ▁SOMEWHERE
- ▁BOUGHT
- ▁VISIBLE
- ▁ASHAMED
- ▁TEAR
- ▁NEIGHBOUR
- ▁CONSTITUTION
- ▁INTELLIGENCE
- ▁PROFESSION
- ▁HUNGRY
- RIDGE
- ▁SMELL
- ▁STORIES
- ▁LISTENING
- ▁APPROACH
- ▁STRING
- ▁EXPLANATION
- ▁IMMENSE
- ▁RELIGIOUS
- ▁THROUGHOUT
- ▁HOLLOW
- ▁AWAIT
- ▁FLYING
- ▁SCREAM
- ▁ACTIVE
- ▁RUM
- ▁PRODUCT
- ▁UNHAPPY
- ▁VAGUE
- ARIES
- ▁ELIZABETH
- ▁STUPID
- ▁DIGNITY
- ▁ISABEL
- GAR
- ▁BRO
- ▁PITCH
- ▁COMRADE
- ▁STIFF
- ▁RECKON
- ▁SOLD
- ▁SPARK
- ▁STRO
- ▁CRYING
- ▁MAGIC
- ▁REPEAT
- PORT
- ▁MARKED
- ▁COMFORTABLE
- ▁PROJECT
- ▁BECOMING
- ▁PARENTS
- ▁SHELTER
- ▁STOLE
- ▁HINT
- ▁NEST
- ▁TRICK
- ▁THOROUGHLY
- ▁HOSPITAL
- ▁WEAPON
- ▁ROME
- ▁STYLE
- ▁ADMITTED
- ▁SAFETY
- FIELD
- ▁UNDERSTANDING
- ▁TREMBLE
- ▁PRINT
- ▁SLAVES
- ▁WEARY
- ▁ARTIST
- ▁CREDIT
- BURG
- ▁CONCLUSION
- ▁SELDOM
- ▁UNUSUAL
- ▁CLOUDS
- ▁UNABLE
- ▁GAY
- ▁HANGING
- ▁SCR
- ▁BOWED
- ▁DAVID
- ▁VOL
- ▁PUSHED
- ▁ESCAPED
- MOND
- ▁WARN
- ▁BETRAY
- ▁EGGS
- ▁PLAINLY
- ▁EXHIBIT
- ▁DISPLAY
- ▁MEMBER
- ▁GRIN
- ▁PROSPECT
- ▁BRUSH
- ▁BID
- ▁SUCCESSFUL
- ▁EXTENT
- ▁PERSUADE
- ▁MID
- ▁MOOD
- ▁ARRANGED
- ▁UNIVERSAL
- ▁JIM
- ▁SIGNAL
- ▁WHILST
- ▁PHILIP
- ▁WOLF
- RATE
- ▁EAGERLY
- ▁BILLY
- ▁RETURNING
- ▁CONSCIENCE
- ▁FORTUNATE
- ▁FEMALE
- ▁GLEAM
- ▁HASTILY
- ▁PROVIDED
- ▁OBTAIN
- ▁INSTINCT
- ▁CONCERNED
- ▁CONCERNING
- ▁SOMEHOW
- ▁PINK
- ▁RAGE
- ▁ACCUSTOMED
- ▁UNCONSCIOUS
- ▁ADVISE
- ▁BRANCHES
- ▁TINY
- ▁REFUSE
- ▁BISHOP
- ▁SUPPLY
- ▁PEASANT
- ▁LAWYER
- ▁WASTE
- ▁CONNECTION
- ▁DEVELOP
- ▁CORRESPOND
- ▁PLUM
- ▁NODDED
- ▁SLIPPED
- ▁EU
- ▁CONSTANTLY
- CUM
- MMED
- ▁FAIRLY
- HOUSE
- ▁KIT
- ▁RANG
- ▁FEATURES
- ▁PAUSE
- ▁PAINFUL
- ▁JOE
- ▁WHENCE
- ▁LAUGHTER
- ▁COACH
- ▁CHRISTMAS
- ▁EATING
- ▁WHOLLY
- ▁APART
- ▁SUPER
- ▁REVOLUTION
- ▁LONELY
- ▁CHEEKS
- ▁THRONE
- ▁CREW
- ▁ATTAIN
- ▁ESTABLISHED
- TIME
- ▁DASH
- ▁FRIENDLY
- ▁OPERA
- ▁EARL
- ▁EXHAUST
- ▁CLIFF
- ▁REVEAL
- ▁ADOPT
- ▁CENTRE
- ▁MERRY
- ▁SYLVIA
- ▁IDEAL
- ▁MISFORTUNE
- ▁FEAST
- ▁ARAB
- ▁NUT
- ▁FETCH
- ▁FOUGHT
- ▁PILE
- ▁SETTING
- ▁SOURCE
- ▁PERSIST
- ▁MERCY
- ▁BARK
- ▁LUC
- ▁DEEPLY
- ▁COMPARE
- ▁ATTITUDE
- ▁ENDURE
- ▁DELIGHTFUL
- ▁BEARD
- ▁PATIENCE
- ▁LOCAL
- ▁UTTERED
- ▁VICTORY
- ▁TREATED
- ▁SEPARATE
- ▁WAG
- ▁DRAGG
- ▁TITLE
- ▁TROOPS
- ▁TRIUMPH
- ▁REAR
- ▁GAINED
- ▁SINK
- ▁DEFEND
- ▁TIED
- ▁FLED
- ▁DARED
- ▁INCREASE
- ▁POND
- ▁CONQUER
- ▁FOREHEAD
- ▁FAN
- ▁ANXIETY
- ▁ENCOUNTER
- ▁SEX
- ▁HALT
- ▁SANK
- ▁CHEEK
- ▁HUMBLE
- ▁WRITER
- ▁EMPLOYED
- ▁DISTINGUISHED
- ▁RAISE
- ▁WHIP
- ▁GIANT
- ▁RANGE
- ▁OBTAINED
- ▁FLAG
- ▁MAC
- ▁JUMPED
- ▁DISCOVERY
- ▁NATIONAL
- ▁COMMISSION
- ▁POSITIVE
- ▁LOVING
- ▁EXACT
- ▁MURMURED
- ▁GAZED
- ▁REFER
- ▁COLLEGE
- ▁ENCOURAGE
- ▁NOVEL
- ▁CLOCK
- ▁MORTAL
- ▁ROLLED
- ▁RAT
- IZING
- ▁GUILTY
- ▁VICTOR
- WORTH
- ▁PRA
- ▁APPROACHING
- ▁RELATIVE
- ▁ESTATE
- ▁UGLY
- ▁METAL
- ▁ROBERT
- ▁TENT
- ▁ADMIRATION
- ▁FOURTEEN
- ▁BARBAR
- ▁WITCH
- ELLA
- ▁CAKE
- ▁SHONE
- ▁MANAGED
- ▁VOLUME
- ▁GREEK
- ▁DANCING
- ▁WRETCHED
- ▁CONDEMN
- ▁MAGNIFICENT
- ▁CONSULT
- J
- ▁ORGAN
- ▁FLEET
- ▁ARRANGEMENT
- ▁INCIDENT
- ▁MISERY
- ▁ARROW
- ▁STROKE
- ▁ASSIST
- ▁BUILD
- ▁SUCCEED
- ▁DESPERATE
- ▁WIDOW
- UDE
- ▁MARKET
- ▁WISDOM
- ▁PRECISE
- ▁CURRENT
- ▁SPOIL
- ▁BADE
- ▁WOODEN
- ▁RESIST
- ▁OBVIOUS
- ▁SENSIBLE
- FALL
- ▁ADDRESSED
- ▁GIL
- ▁COUNSEL
- ▁PURCHASE
- ▁SELECT
- ▁USELESS
- ▁STARED
- ▁ARREST
- ▁POISON
- ▁FIN
- ▁SWALLOW
- ▁BLOCK
- ▁SLID
- ▁NINETY
- ▁SPORT
- ▁PROVIDE
- ▁ANNA
- ▁LAMB
- ▁INTERVAL
- ▁JUMP
- ▁DESCRIBED
- ▁STRIKING
- ▁PROVISION
- ▁PROPOSED
- ▁MELANCHOLY
- ▁WARRIOR
- ▁SUGGEST
- ▁DEPARTURE
- ▁BURDEN
- ▁LIMB
- ▁TROUBLED
- ▁MEADOW
- ▁SACRED
- ▁SOLID
- ▁TRU
- ▁LUCY
- ▁RECOVER
- ▁ENERGY
- ▁POWDER
- ▁RESUMED
- ▁INTENSE
- ▁BRITISH
- ▁STRAW
- ▁AGREEABLE
- ▁EVERYONE
- ▁CONCERN
- ▁VOYAGE
- ▁SOUTHERN
- ▁BOSOM
- ▁UTTERLY
- ▁FEED
- ▁ESSENTIAL
- ▁CONFINE
- ▁HOUSEHOLD
- ▁EXTREMELY
- ▁WONDERING
- ▁LIST
- ▁PINE
- PHA
- ▁EXPERIMENT
- ▁JOSEPH
- ▁MYSTERY
- ▁RESTORE
- ▁BLUSH
- FOLD
- ▁CHOSEN
- ▁INTELLECT
- ▁CURTAIN
- OLOGY
- ▁MOUNTED
- ▁LAP
- ▁EPI
- ▁PUNISH
- ▁WEDDING
- ▁RECOGNIZED
- ▁DRIFT
- ▁PREPARATION
- ▁RESOLUTION
- ▁OPPRESS
- ▁FIX
- ▁VICTIM
- OGRAPH
- ▁SUMMON
- ▁JULIA
- ▁FLOOD
- ▁WAL
- ULATION
- ▁SLIGHTLY
- ▁LODGE
- ▁WIRE
- ▁CONFUSION
- ▁UNEXPECTED
- ▁CONCEIVE
- ▁PRIZE
- ▁JESUS
- ▁ADDITION
- ▁RUDE
- ▁FATAL
- ▁CARELESS
- ▁PATCH
- ▁KO
- ▁CATHERINE
- ▁PARLIAMENT
- ▁PROFOUND
- ▁ALOUD
- ▁RELIEVE
- ▁PUSH
- ABILITY
- ▁ACCOMPANIED
- ▁SOVEREIGN
- ▁SINGULAR
- ▁ECHO
- ▁COMPOSED
- ▁SHAKING
- ATORY
- ▁ASSISTANCE
- ▁TEACHER
- ▁HORRIBLE
- ▁STRICT
- ▁VERSE
- ▁PUNISHMENT
- ▁GOWN
- ▁MISTAKEN
- ▁VARI
- ▁SWEPT
- ▁GESTURE
- ▁BUSH
- ▁STEEL
- ▁AFFECTED
- ▁DIRECTED
- ▁SURROUNDED
- ▁ABSURD
- ▁SUGAR
- ▁SCRAP
- ▁IMMEDIATE
- ▁SADDLE
- ▁TY
- ▁ARISE
- ▁SIGHED
- ▁EXCHANGE
- ▁IMPATIENT
- ▁SNAP
- ▁EMBRACE
- ▁DISEASE
- ▁PROFIT
- ▁RIDING
- ▁RECOVERED
- ▁GOVERN
- ▁STRETCH
- ▁CONVINCED
- ▁LEANING
- ▁DOMESTIC
- ▁COMPLEX
- ▁MANIFEST
- ▁INDULGE
- ▁GENIUS
- ▁AGENT
- ▁VEIL
- ▁DESCRIPTION
- ▁INCLINED
- ▁DECEIVE
- ▁DARLING
- ▁REIGN
- HU
- ▁ENORMOUS
- ▁RESTRAIN
- ▁DUTIES
- BURY
- TTERED
- ▁POLE
- ▁ENABLE
- ▁EXCEPTION
- ▁INTIMATE
- ▁COUNTESS
- ▁TRIBE
- ▁HANDKERCHIEF
- ▁MIDNIGHT
- ▁PROBLEM
- ▁TRAMP
- ▁OIL
- CAST
- ▁CRUSH
- ▁DISCUSS
- ▁RAM
- ▁TROT
- ▁UNRE
- ▁WHIRL
- ▁LOCKED
- ▁HORIZON
- ▁OFFICIAL
- ▁SCHEME
- ▁DROWN
- ▁PIERRE
- ▁PERMITTED
- ▁CONNECTED
- ▁ASSURE
- ▁COCK
- ▁UTMOST
- ▁DEVOTED
- ▁RELI
- ▁SUFFICIENTLY
- ▁INTELLECTUAL
- ▁CARPET
- ▁OBJECTION
- ▁AFTERWARD
- ▁REALITY
- ▁NEGRO
- ▁RETAIN
- ▁ASCEND
- ▁CEASE
- ▁KATE
- ▁MARVEL
- KO
- ▁BOND
- MOST
- ▁COAL
- GATE
- ▁IGNORANT
- ▁BREAKING
- ▁TWIN
- ▁ASTONISHMENT
- ▁COFFEE
- ▁JAR
- ▁CITIES
- ▁ORIGIN
- ▁EXECUT
- ▁FINAL
- ▁INHABITANTS
- ▁STABLE
- ▁CHIN
- ▁PARTIES
- ▁PLUNGE
- ▁GENEROUS
- ▁DESCRIBE
- ▁ANNOUNCED
- ▁MERIT
- ▁REVERE
- ▁ERE
- ACIOUS
- ZI
- ▁DISAPPOINT
- ▁SUGGESTION
- ▁DOUBTLESS
- ▁TRUNK
- ▁STAMP
- ▁JOB
- ▁APPOINTED
- ▁DIVIDED
- ▁ACQUAINTED
- CHI
- ▁ABSOLUTE
- ▁FEARFUL
- ▁PRIVILEGE
- ▁CRAFT
- ▁STEEP
- ▁HUNTER
- ▁FORBID
- ▁MODEST
- ▁ENDEAVOUR
- ▁SWEEP
- ▁BEHELD
- ▁ABSORB
- ▁CONSTRUCT
- ▁EMPIRE
- ▁EXPEDITION
- ▁ERECT
- ▁OFFEND
- ▁INTEND
- ▁PERMIT
- ▁DESTROYED
- ▁CONTRACT
- ▁THIRST
- ▁WAGON
- ▁EVA
- ▁GLOOM
- ▁ATMOSPHERE
- ▁RESERVE
- ▁VOTE
- ▁GER
- ▁NONSENSE
- ▁PREVAIL
- ▁QUALITY
- ▁CLASP
- ▁CONCLUDED
- ▁RAP
- ▁KATY
- ▁ETERNAL
- ▁MUTTERED
- ▁NEGLECT
- ▁SQUIRE
- ▁CREEP
- LOCK
- ▁ELECTRIC
- ▁HAY
- ▁EXPENSE
- ▁SCORN
- ▁RETIRED
- ▁STOUT
- ▁MURMUR
- ▁SHARPLY
- ▁DISTRICT
- ▁LEAF
- ▁FAILURE
- WICK
- ▁JEAN
- ▁NUMEROUS
- ▁INFANT
- ▁REALIZED
- ▁TRAVELLER
- ▁HUNGER
- ▁JUNE
- ▁MUN
- ▁RECOMMEND
- ▁CREP
- ZZLE
- ▁RICHARD
- WORK
- ▁MONTE
- ▁PREACH
- ▁PALM
- AVI
- ▁ANYWHERE
- ▁DISPOSITION
- ▁MIRROR
- ▁VENTURE
- ▁POUND
- ▁CIGAR
- ▁INVITED
- ▁BENCH
- ▁PROTECTION
- ▁BENEFIT
- ▁THOMAS
- ▁CLERK
- ▁REPROACH
- ▁UNIFORM
- ▁GENERATION
- ▁SEAL
- ▁COMPASS
- ▁WARNING
- ▁EXTENDED
- ▁DIFFICULTIES
- ▁MAYBE
- ▁GROAN
- ▁AFFECT
- ▁COMB
- ▁EARN
- ▁WESTERN
- ▁IDLE
- ▁SCORE
- ▁TAP
- ▁ASTONISHED
- ▁INTRODUCED
- ▁LEISURE
- ▁LIEUTENANT
- ▁VIOLENCE
- ▁FIRMLY
- ▁MONSTER
- ▁UR
- ▁PROPERLY
- ▁TWIST
- ▁PIRATE
- ▁ROBBER
- ▁BATTER
- ▁WEPT
- ▁LEANED
- ▁FOG
- ▁ORNAMENT
- ▁ANDREW
- ▁BUSHES
- ▁REPUBLIC
- ▁CONFIDENT
- ▁LEAN
- ▁DART
- ▁STOOP
- ▁CURL
- ▁COUNTER
- ▁NORTHERN
- ▁PEARL
- ▁NEAREST
- ▁FRANCIS
- ▁WANDERING
- ▁FREQUENT
- ▁STARTLED
- ▁STATEMENT
- ▁OCCUR
- ▁BLOOM
- ▁NERVE
- ▁INSPECT
- ▁INDUCE
- ▁FLATTER
- ▁DATE
- ▁AMBITION
- ▁SLOPE
- ▁MALE
- ▁MADAM
- ▁MONK
- ▁RENT
- ▁CONFIRM
- ▁INVESTIGAT
- ▁RABBIT
- ▁REGIMENT
- ▁SUBMIT
- ▁SPELL
- ▁FURIOUS
- ▁RAIL
- ▁BESTOW
- ▁RALPH
- ▁SCATTERED
- ▁COMPELLED
- ▁THREAD
- ▁CHILL
- ▁DENY
- ▁PRONOUNC
- ▁MANKIND
- ▁CATTLE
- ▁EXECUTION
- ▁REBEL
- ▁SUPREME
- ▁VALUABLE
- ▁LIKEWISE
- ▁CONVEY
- ▁TIDE
- ▁GLOOMY
- ▁COIN
- ▁ACTUAL
- ▁TAX
- ▁PROVINCE
- ▁GRATEFUL
- ▁SPIRITUAL
- ▁VANISHED
- ▁DIANA
- ▁HAUNT
- ▁DRAGON
- ▁CRAWL
- ▁CHINA
- ▁GRATITUDE
- ▁NEAT
- ▁FINISH
- ▁INTENT
- ▁FRIGHT
- ▁EMBARRASS
- ▁THIRTEEN
- ▁RUTH
- ▁SLIGHTEST
- ▁DEVELOPMENT
- ▁INTERVIEW
- ▁SPECTACLE
- ▁BROOK
- VIE
- ▁WEAKNESS
- ▁AUDIENCE
- ▁CONSEQUENTLY
- ▁ABROAD
- ▁ASPECT
- ▁PAINTED
- ▁RELEASE
- ▁INSULT
- ▁SOOTH
- ▁DISAPPOINTMENT
- ▁EMERG
- ▁BRIG
- ▁ESTEEM
- ▁INVITATION
- ▁PASSENGER
- ▁PUBLISH
- ▁PIANO
- ▁IRISH
- ▁DESK
- ▁BEATEN
- ▁FIFTH
- ▁IMPULSE
- ▁SWEAR
- ▁EATEN
- ▁PURPLE
- ▁COMMITTED
- ▁COUNTRIES
- ▁PERCEIVE
- ISON
- ▁CELEBRAT
- ▁GRANDMOTHER
- ▁SHUDDER
- ▁SUNSHINE
- ▁SPANISH
- ▁HITHERTO
- ▁MARILLA
- ▁SNAKE
- ▁MOCK
- ▁INTERFERE
- ▁WALTER
- ▁AMID
- ▁MARBLE
- ▁MISSION
- TERIOR
- ▁DRIVING
- ▁FURNITURE
- ▁STEADY
- ▁CIRCUMSTANCE
- ▁INTERPRET
- ▁ENCHANT
- ▁ERROR
- ▁CONVICTION
- ▁HELPLESS
- ▁MEDICINE
- ▁QUALITIES
- ▁ITALIAN
- ▁HASTENED
- ▁OCCASIONALLY
- ▁PURSUED
- ▁HESITATED
- ▁INDEPENDENT
- ▁OLIVER
- ▁LINGER
- UX
- ▁EXAMINED
- ▁REPENT
- ▁PHYSICIAN
- ▁CHASE
- ▁BELOVED
- ▁ATTACHED
- ▁FLORENCE
- ▁HONEY
- ▁MOUSE
- ▁CRIES
- ▁BAKE
- ▁POEM
- ▁DESTRUCTION
- ▁FULFIL
- ▁MESSENGER
- ▁TRISTRAM
- ▁FANCIED
- ▁EXCESS
- ▁CURSE
- ▁CHU
- ▁QUANTITY
- ▁THORNTON
- ▁CREATED
- ▁CONTINUALLY
- ▁LIGHTNING
- ▁BORNE
- ▁TOTAL
- ▁DISPOSED
- ▁RIFLE
- ▁POLLY
- ▁GOAT
- ▁BACKWARD
- ▁VIRGINIA
- ▁KICK
- ▁PERIL
- ▁QUO
- ▁GLORIOUS
- ▁MULTITUDE
- ▁LEATHER
- ▁ABSENT
- ▁DEMON
- ▁DEBT
- ▁TORTURE
- ▁ACCORD
- ▁MATE
- ▁CATHOLIC
- ▁PILL
- ▁LIBRARY
- ▁PURSUIT
- ▁SHIRT
- ▁DEAREST
- ▁COLLAR
- ▁BEACH
- ▁ROBE
- ▁DECLARE
- ▁BRANCH
- ▁TEMPT
- ▁STEADILY
- ▁DISGUST
- ▁SILLY
- ▁ARRIVE
- ▁DRANK
- ▁LEVI
- ▁COMMUNICAT
- ▁RACHEL
- ▁WASHINGTON
- ▁RESIGN
- ▁MEANTIME
- ▁LACE
- ▁ENGAGEMENT
- ▁QUIVER
- ▁SEPARATED
- ▁DISCUSSION
- ▁VENTURED
- ▁SURROUNDING
- ▁POLISH
- ▁NAIL
- ▁SWELL
- ▁JOKE
- ▁LINCOLN
- ▁STUDENT
- ▁GLITTER
- ▁RUSSIAN
- ▁READILY
- ▁CHRIS
- ▁POVERTY
- ▁DISGRACE
- ▁CHEESE
- ▁HEAVILY
- ▁SCALE
- ▁STAFF
- ▁ENTREAT
- ▁FAREWELL
- ▁LUNCH
- ▁PEEP
- ▁MULE
- ▁SOMEONE
- ▁DISAPPEAR
- ▁DECISION
- ▁PISTOL
- ▁PUN
- ▁SPUR
- ▁ASSUMED
- ▁EXTEND
- ▁ENTHUSIASM
- ▁DEFINITE
- ▁UNDERTAKE
- ▁COMMITTEE
- ▁SIMON
- ▁FENCE
- ▁APPLIED
- ▁RELATED
- ▁VICE
- ▁UNPLEASANT
- ▁PROBABLE
- ▁PROCURE
- ▁FROWN
- ▁CLOAK
- ▁HUMANITY
- ▁FAMILIES
- ▁PHILOSOPHER
- ▁DWARF
- ▁OVERCOME
- ▁DEFEAT
- ▁FASTENED
- ▁MARSH
- ▁CLASSES
- ▁TOMB
- ▁GRACIOUS
- ▁REMOTE
- ▁CELL
- ▁SHRIEK
- ▁RESCUE
- ▁POOL
- ▁ORGANIZ
- ▁CHOSE
- ▁CUTTING
- ▁COWARD
- ▁BORDER
- ▁DIRTY
- ▁MONKEY
- ▁HOOK
- ▁CHUCK
- ▁EMILY
- ▁JEST
- ▁PLAC
- ▁WEIGH
- ▁ASSOCIATE
- ▁GLIMPSE
- ▁STUCK
- ▁BOLT
- ▁MURDERER
- ▁PONY
- ▁DISTINGUISH
- ▁INSTITUTION
- ▁CUNNING
- ▁COMPLIMENT
- ▁APPETITE
- ▁REPUTATION
- ▁FEEBLE
- ▁KIN
- ▁SERIES
- ▁GRACEFUL
- ▁PLATFORM
- ▁BREEZE
- ▁PHRASE
- ▁CLAY
- MONT
- ▁RATTL
- ▁OPPOSITION
- ▁LANE
- ▁BOAST
- ▁GROWTH
- ▁INCLINATION
- ▁BEHAVE
- ▁SUSAN
- ▁DISTINCTION
- ▁DISLIKE
- ▁NICHOLAS
- ▁SATISFY
- ▁DRAMA
- ▁ELBOW
- ▁GAZING
- ▁CONSUM
- ▁SPIN
- ▁OATH
- ▁CHANNEL
- ▁CHARACTERISTIC
- ▁SPEAR
- ▁SLAIN
- ▁SAUCE
- ▁FROG
- ▁CONCEPTION
- ▁TIMID
- ▁ZEAL
- ▁APPARENT
- SHIRE
- ▁CENTER
- ▁VARIETY
- ▁DUSK
- ▁APT
- ▁COLUMN
- ▁REVENGE
- ▁RIVAL
- ▁IMITAT
- ▁PASSIONATE
- ▁SELFISH
- ▁NORMAN
- ▁REPAIR
- ▁THRILL
- ▁TREATMENT
- ▁ROSA
- ▁MARTIN
- ▁INDIFFERENT
- ▁THITHER
- ▁GALLANT
- ▁PEPPER
- ▁RECOLLECT
- ▁VINE
- ▁SCARCE
- ▁SHIELD
- ▁MINGLED
- CLOSE
- ▁HARSH
- ▁BRICK
- ▁HUMOR
- ▁MISCHIEF
- ▁TREMENDOUS
- ▁FUNCTION
- ▁SMART
- ▁SULTAN
- ▁DISMISS
- ▁THREATENED
- ▁CHEAP
- ▁FLOCK
- ▁ENDEAVOR
- ▁WHISK
- ▁ITALY
- ▁WAIST
- ▁FLUTTER
- ▁SMOKING
- ▁MONARCH
- ▁AFRICA
- ▁ACCUSE
- ▁HERBERT
- ▁REFRESH
- ▁REJOICE
- ▁PILLOW
- ▁EXPECTATION
- ▁POETRY
- ▁HOPELESS
- ▁PERISH
- ▁PHILOSOPHY
- ▁WHISTLE
- ▁BERNARD
- ▁LAMENT
- ▁IMPROVE
- ▁SUP
- ▁PERPLEX
- ▁FOUNTAIN
- ▁LEAGUE
- ▁DESPISE
- ▁IGNORANCE
- ▁REFERENCE
- ▁DUCK
- ▁GROVE
- ▁PURSE
- ▁PARTNER
- ▁PROPHET
- ▁SHIVER
- ▁NEIGHBOURHOOD
- ▁REPRESENTATIVE
- SAIL
- ▁WIP
- ▁ACQUIRED
- ▁CHIMNEY
- ▁DOCTRINE
- ▁MAXIM
- ▁ANGLE
- ▁MAJORITY
- ▁AUTUMN
- ▁CONFUSED
- ▁CRISTO
- ▁ACHIEVE
- ▁DISGUISE
- ▁REDUCED
- ▁EARLIER
- ▁THEATRE
- ▁DECIDE
- MINATED
- OLOGICAL
- ▁OCCUPATION
- ▁VIGOROUS
- ▁CONTINENT
- ▁DECLINE
- ▁COMMUNITY
- ▁MOTIONLESS
- ▁HATRED
- ▁COMMUNICATION
- ▁BOWL
- ▁COMMENT
- ▁APPROVE
- ▁CEREMONY
- ▁CRIMINAL
- ▁SCIENTIFIC
- ▁DUCHESS
- ▁VIVID
- ▁SHIFT
- ▁AVAIL
- ▁DAMP
- ▁JOHNSON
- ▁SLENDER
- ▁CONTRAST
- ▁AMUSEMENT
- ▁PLOT
- ▁LYN
- ▁ASSOCIATION
- ▁SNATCH
- ▁UNCERTAIN
- ▁PRESSURE
- ▁PERCH
- ▁APPLY
- ▁PLANET
- ▁NOTWITHSTANDING
- ▁SWUNG
- ▁STIRRED
- ▁ATTENDANT
- ▁ENJOYMENT
- ▁WORRY
- ▁ALBERT
- ▁NAKED
- ▁TALENT
- ▁MARIAN
- ▁REFORM
- ▁DELIBERATE
- ▁INTELLIGENT
- ▁SENSITIVE
- ▁YONDER
- ▁PUPIL
- ▁FRIGHTFUL
- ▁DOUBTFUL
- ▁STANDARD
- ▁MAGISTRATE
- ▁SHEPHERD
- ▁STOMACH
- ▁DEPOSIT
- ▁RENEW
- ▁HEDGE
- ▁FRANCS
- ▁POSSIBILITY
- ▁RESEMBLE
- ▁FATIGUE
- ▁PORTRAIT
- ▁FAVORITE
- ▁CREAM
- ▁BURG
- ▁SECRETARY
- ▁DIVERS
- ▁ACTIVITY
- ▁SPECULAT
- ▁HUMOUR
- ▁FITTED
- ▁EXTERNAL
- ▁CETERA
- ▁WRAPPED
- ▁WHIT
- ▁FRED
- ▁EXAMINATION
- ▁LODGING
- ▁OWING
- ▁JAW
- ▁CROW
- ▁BALANCE
- ▁PUFF
- ▁TENDERNESS
- ▁PORTHOS
- ▁ANCHOR
- ▁INTERRUPT
- ▁NECESSARILY
- ▁PERPETUAL
- ▁AGONY
- ▁POPE
- ▁SCHOLAR
- ▁SCOTLAND
- ▁SUPPRESS
- ▁WRATH
- ▁WRECK
- ▁EXCEED
- ▁PERFECTION
- ▁INDIA
- ▁TRADITION
- ▁SECTION
- ▁EASTERN
- ▁DOORWAY
- ▁WIVES
- ▁CONVENTION
- ▁ANNOUNC
- ▁EGYPT
- ▁CONTRADICT
- ▁SCRATCH
- ▁CENTRAL
- ▁GLOVE
- ▁WAX
- ▁PREPARE
- ▁ACCOMPANY
- ▁INCREASING
- ▁LIBERAL
- ▁RAISING
- ▁ORANGE
- ▁SHOE
- ▁ATTRIBUTE
- ▁LITERATURE
- ▁PUZZLED
- ▁WITHDRAW
- ▁WHITHER
- ▁HAWK
- ▁MOONLIGHT
- ▁EXAMINE
- ▁HAPPILY
- ▁PRECEDE
- ▁DETECTIVE
- ▁INCHES
- ▁SOLITARY
- ▁DUTCH
- ▁NAPOLEON
- ▁UNEASY
- ▁CARDINAL
- ▁BLEW
- ▁FOWL
- ▁DECORAT
- ▁CHILDHOOD
- ▁TORMENT
- ▁LOSING
- ▁PERMISSION
- ▁BLANK
- ▁UPSTAIRS
- ▁CAPACITY
- ▁TRIFLE
- ▁FOLLY
- ▁RECOGNIZE
- ▁REMOVE
- ▁VENGEANCE
- ▁ENTERPRISE
- ▁BEDROOM
- ▁ANYHOW
- ▁INQUIRY
- ▁ASHES
- ▁DRAG
- ▁HUSH
- ▁AWKWARD
- ▁SATURDAY
- ▁GENUINE
- ▁SURVIV
- ▁SKIRT
- ▁AFFECTIONATE
- ▁TANG
- ▁MUTUAL
- ▁DISPUTE
- ▁EAGLE
- ▁INCOME
- ▁BIND
- ▁FAME
- ▁IMPROVEMENT
- ROVING
- ▁DIFFER
- ▁AWOKE
- ▁SLEEVE
- ▁SOLITUDE
- ▁FAVOURITE
- JI
- ▁DETECT
- ▁COMPREHEND
- ▁PREPARING
- ▁SERPENT
- ▁SUMMIT
- ▁KNOT
- ▁KNIT
- ▁COPY
- ▁STOPPING
- ▁FADED
- ▁HIDEOUS
- ▁JULIE
- STEAD
- ▁SHINE
- ▁CONFLICT
- ▁PROPOSITION
- ▁REFUGE
- ▁GALLERY
- ▁BUNDLE
- ▁AXE
- ▁SLAVERY
- ▁MASK
- ▁ALYOSHA
- ▁LADDER
- ▁DEPARTMENT
- ▁DISCHARGE
- ▁DEPRESS
- ▁GALLOP
- ▁SCARLET
- ▁KITTY
- ▁RECEIVING
- ▁SURRENDER
- ▁SUSTAIN
- ▁TWILIGHT
- ▁CONGRESS
- ▁IRELAND
- ▁FUNNY
- ▁LEND
- ▁CONSTITUTE
- ▁FUNERAL
- ▁CRYSTAL
- ▁SPAIN
- ▁EXCEEDINGLY
- ▁DAMN
- ▁COMMUN
- ▁CIVILIZATION
- ▁PREJUDICE
- ▁PORCH
- ▁ASSISTANT
- ▁INDUSTRY
- ▁TUMBLE
- ▁DEFENCE
- ▁HITHER
- ▁SMOT
- ▁COLONI
- ▁AMAZEMENT
- ▁MARGUERITE
- ▁MIRACLE
- ▁INHERIT
- ▁BEGGAR
- ▁ENVELOPE
- ▁INDIGNATION
- ▁NATASHA
- ▁PROPOSAL
- ▁FRAGMENT
- ▁ROUSED
- ▁ROAST
- ENCIES
- ▁COMMENCED
- ▁RESOURCE
- ▁POPULATION
- ▁QUOTH
- ▁PURSUE
- ▁EDUCAT
- ▁AFFLICT
- ▁CONTACT
- ▁CRIMSON
- ▁DIVISION
- ▁DISORDER
- ▁COPPER
- ▁SOLICIT
- ▁MODERATE
- ▁DRUM
- ▁SWIM
- ▁SALUTE
- ▁ASSUME
- ▁MUSCLE
- ▁OVERWHELM
- ▁SHAKESPEARE
- ▁STRUGGLING
- ▁TRANQUIL
- ▁CHICKEN
- ▁TREAD
- ▁CLAW
- ▁BIBLE
- ▁RIDGE
- ▁THREAT
- ▁VELVET
- ▁EXPOSED
- ▁IDIOT
- ▁BARREL
- ▁PENNY
- ▁TEMPTATION
- ▁DANGLARS
- ▁CENTURIES
- ▁DISTRIBUT
- ▁REJECT
- ▁RETORTED
- ▁CONCENTRAT
- ▁CORDIAL
- ▁MOTOR
- ▁CANNON
- KEEP
- ▁WRETCH
- ▁ASSURANCE
- ▁THIEF
- ▁SURVEY
- ▁VITAL
- ▁RAILWAY
- ▁JACKSON
- ▁CRASH
- ▁GROWL
- ▁COMBAT
- ▁RECOLLECTION
- ▁SECURITY
- ▁JACOB
- ▁CLUTCH
- ▁BLANKET
- ▁NANCY
- ▁CELLAR
- ▁CONVENIENT
- ▁INDIGNANT
- ▁COARSE
- ▁WORM
- ▁SCREEN
- ▁TRANSPORT
- ▁BULLET
- ▁APPRECIATE
- ▁DEVOTION
- ▁INVISIBLE
- ▁DRIED
- ▁MIXTURE
- ▁CANDID
- ▁PERFORMANCE
- ▁RIPE
- ▁EXQUISITE
- ▁BARGAIN
- ▁TOBACCO
- ▁LOYAL
- ▁MOULD
- ▁ATTENTIVE
- ▁DOROTHY
- ▁BRUTE
- ▁ESTABLISHMENT
- ▁ABILITY
- ▁INHABIT
- ▁OBSCURE
- ▁BORROW
- ▁ESSENCE
- ▁DISMAY
- ▁FLEE
- ▁BLADE
- ▁PLUCK
- ▁COFFIN
- ▁SUNSET
- ▁STEPHEN
- ▁ECONOMIC
- ▁HOLIDAY
- ▁MECHANICAL
- ▁COTTON
- ▁AWAKENED
- ▁SEIZE
- ▁RIDICULOUS
- ▁SANCHO
- ▁HESITATION
- ▁CORPSE
- ▁SAVING
- HOLD
- FOOT
- ▁ELDEST
- ▁DESPITE
- ▁EDITH
- ▁CHERISH
- ▁RESISTANCE
- ▁WILSON
- ▁ARGUE
- ▁INQUIRE
- ▁APPREHENSION
- ▁AVENUE
- ▁DRAKE
- ▁PROPOSE
- HURST
- ▁INFERIOR
- ▁STAIRCASE
- ▁WHEREFORE
- ▁CARLYLE
- ▁COUCH
- ▁ROUTE
- ▁POLITICS
- ▁TOMORROW
- ▁THRONG
- ▁NAUGHT
- ▁SUNLIGHT
- ▁INDIFFERENCE
- ▁OBEDIENCE
- ▁RECEPTION
- ▁VEGETABLE
- ▁IMPERFECT
- ▁RESIDENCE
- ▁TURKEY
- ▁VIOLET
- ▁SARAH
- ▁ALTAR
- ▁GRIEVE
- ▁JERK
- ▁ENSU
- ▁MAGICIAN
- ▁BLOSSOM
- ▁LANTERN
- ▁RESOLUTE
- ▁THOUGHTFULLY
- ▁FORTNIGHT
- ▁TRUMPET
- ▁VALJEAN
- ▁UNWILLING
- ▁LECTURE
- ▁WHEREUPON
- ▁HOLLAND
- ▁CHANGING
- ▁CREEK
- ▁SLICE
- ▁NORMAL
- ▁ANNIE
- ▁ACCENT
- ▁FREDERICK
- ▁DISAGREEABLE
- ▁RUBBED
- ▁DUMB
- ▁ESTABLISH
- ▁IMPORT
- ▁AFFIRM
- ▁MATTHEW
- ▁BRISK
- ▁CONVERT
- ▁BENDING
- ▁IVAN
- ▁MADEMOISELLE
- ▁MICHAEL
- ▁EASIER
- ▁JONES
- ▁FACING
- ▁EXCELLENCY
- ▁LITERARY
- ▁GOSSIP
- ▁DEVOUR
- ▁STAGGER
- ▁PENCIL
- ▁AVERAGE
- ▁HAMMER
- ▁TRIUMPHANT
- ▁PREFERRED
- ▁APPLICATION
- ▁OCCUPY
- ▁AUTHORITIES
- BURN
- ▁ASCERTAIN
- ▁CORRIDOR
- ▁DELICIOUS
- ▁PRACTISE
- ▁UNIVERSE
- ▁SHILLING
- ▁CONTEST
- ▁ASHORE
- ▁COMMIT
- ▁ADMINISTRATION
- ▁STUDIED
- ▁RIGID
- ▁ADORN
- ▁ELSEWHERE
- ▁INNOCENCE
- ▁JOURNAL
- ▁LANDSCAPE
- ▁TELEGRAPH
- ▁ANGRILY
- ▁CAMPAIGN
- ▁UNJUST
- ▁CHALLENGE
- ▁TORRENT
- ▁RELATE
- ▁ASSEMBLED
- ▁IMPRESSED
- ▁CANOE
- ▁CONCLUD
- ▁QUIXOTE
- ▁SATISFACTORY
- ▁NIECE
- ▁DEAF
- ▁RAFT
- ▁JIMMY
- ▁GLID
- ▁REGULAT
- ▁CHATTER
- ▁GLACIER
- ▁ENVY
- ▁STATUE
- ▁BOSTON
- ▁RICHMOND
- ▁DENIED
- ▁FANNY
- ▁SOLOMON
- ▁VULGAR
- ▁STALK
- ▁REPLACE
- ▁SPOON
- ▁BASIN
- ▁FEATURE
- ▁CONVICT
- ▁ARCHITECT
- ▁ADMIRAL
- ▁RIBBON
- ▁PERMANENT
- ▁APRIL
- ▁JOLLY
- ▁NEIGHBORHOOD
- ▁IMPART
- BOROUGH
- CAMP
- ▁HORRID
- ▁IMMORTAL
- ▁PRUDENCE
- ▁SPANIARD
- ▁SUPPOSING
- ▁TELEPHONE
- ▁TEMPERATURE
- ▁PENETRATE
- ▁OYSTER
- ▁APPOINTMENT
- ▁EGYPTIAN
- ▁DWELT
- ▁NEPHEW
- ▁RAILROAD
- ▁SEPTEMBER
- ▁DEVICE
- ▁WHEAT
- ▁GILBERT
- ▁ELEGANT
- ▁ADVERTISE
- ▁RATIONAL
- ▁TURTLE
- ▁BROOD
- ▁ASSEMBLY
- ▁CULTIVATE
- ▁EDITOR
- ▁SPECIMEN
- ▁UNDOUBTEDLY
- ▁WHALE
- ▁DROPPING
- ▁BALLOON
- ▁MEDICAL
- COMB
- ▁COMPOSITION
- ▁FOOTSTEPS
- ▁LAUNCELOT
- ▁DISCOURSE
- ▁ERRAND
- ▁CONVERSE
- ▁ADVANCING
- ▁DOWNSTAIRS
- ▁TUMULT
- ▁CORRUPT
- ▁SUFFICE
- ▁ANGUISH
- ▁SHAGGY
- ▁RETIRE
- ▁TIMBER
- ▁BLAZE
- ▁ABSTRACT
- ▁EMBROIDER
- ▁PHOTOGRAPH
- ▁PROSPERITY
- ▁TERRIBLY
- ▁TERRITORY
- ▁THRESHOLD
- ▁PAVEMENT
- ▁INJURED
- ▁LIMP
- ▁AGITATION
- ▁RASCAL
- ▁PRESUME
- ▁OBSERVING
- ▁OBSTACLE
- ▁SIMPLICITY
- ▁SLUMBER
- ▁SUPPLIED
- ▁COMBINATION
- ▁DRAIN
- ▁WILDERNESS
- ▁BELIEVING
- ▁VILLAIN
- ▁RECKLESS
- ▁INJURY
- ▁CLAPP
- ▁FRIDAY
- ▁HERCULES
- ▁KENNEDY
- ▁SYMPTOM
- ▁SLEDGE
- ▁CEILING
- ▁LEMON
- ▁PLAGUE
- ▁MONDAY
- ▁CANVAS
- ▁IMPATIENCE
- ▁UNCOMFORTABLE
- ▁ACCESS
- ▁FROZEN
- ▁SENATOR
- ▁FRANZ
- ▁SWIMMING
- ▁BARRIER
- ▁ADJUST
- ▁COMPARISON
- ▁PROCLAIM
- ▁WRINKL
- ▁OVERLOOK
- ▁MITYA
- ▁GUILT
- ▁PERCEPTION
- ▁PRECAUTION
- ▁SPECTATOR
- ▁SURPRISING
- ▁DISTRACT
- ▁DISDAIN
- ▁BONNET
- ▁MAGNET
- ▁PROFESS
- ▁CONFOUND
- ▁NARRATIVE
- ▁STRUCTURE
- ▁SKETCH
- ▁ULTIMATE
- ▁GLOBE
- ▁INSECT
- FICIENCY
- ▁ORCHARD
- ▁AMIABLE
- ▁DESCENT
- ▁INDEPENDENCE
- ▁MANUFACTURE
- ▁SPRINKLE
- ▁NIGHTINGALE
- ▁CUSHION
- ▁EMINENT
- ▁SCOTT
- ▁ARRAY
- ▁COSETTE
- ▁WAVING
- ▁EXTRACT
- ▁IRREGULAR
- ▁PERSECUT
- ▁DERIVED
- ▁WITHDREW
- ▁CAUTION
- ▁SUSPICIOUS
- ▁MEMORIES
- ▁NOWHERE
- ▁SUBTLE
- ▁THOROUGH
- Q
- ▁APPROPRIATE
- ▁SLAUGHTER
- ▁YOURSELVES
- ▁THUMB
- ▁TWAS
- ▁ABODE
- ▁BIDDING
- ▁CONSPICUOUS
- ▁REBECCA
- ▁SERGEANT
- ▁APRON
- ▁ANTICIPATE
- ▁DISCIPLINE
- ▁GLANCING
- ▁PILGRIM
- ▁SULLEN
- ▁CONTRIBUTE
- ▁PRAIRIE
- ▁CARVED
- ▁COMMERCE
- ▁EXCLAMATION
- ▁MUSCULAR
- ▁NOVEMBER
- ▁PHENOMENA
- ▁SYMBOL
- ▁UMBRELLA
- ▁DIMINISH
- ▁PARLOUR
- ▁THREATENING
- ▁STUMP
- ▁EXTENSIVE
- ▁PLEASING
- ▁REMEMBRANCE
- ▁COMBINED
- ▁SHERIFF
- ▁SHAFT
- ▁LAURA
- ▁INTERCOURSE
- ▁STRICKEN
- ▁SUPPLIES
- ▁LANDLORD
- ▁SHRINK
- ▁PRICK
- ▁CAESAR
- ▁DRUG
- ▁BEWILDERED
- ▁NAUTILUS
- ▁BRUTAL
- ▁COMMERCIAL
- ▁MAGGIE
- ▁SPHERE
- ▁VIRGIN
- ▁BRETHREN
- ▁DESTINY
- ▁POLICY
- ▁TERRIFIED
- ▁HOUSEKEEPER
- ▁CRAZY
- ▁ARDENT
- ▁DISCERN
- ▁WRAP
- ▁MARQUIS
- ▁RUSSIA
- MOUTH
- ▁BRITAIN
- ▁HARBOUR
- ▁CONCERT
- ▁DONKEY
- ▁DAMAGE
- ▁SLIM
- ABOUT
- ▁LUXURY
- ▁MONSTROUS
- ▁TENDENCY
- ▁PARADISE
- ▁CULTURE
- ▁JULIUS
- ▁RAOUL
- ▁REMEDY
- ▁DECAY
- ▁SCOLD
- ▁SPLIT
- ▁ASSAULT
- ▁DECEMBER
- ▁MOSCOW
- ▁EXPLORE
- ▁TROUSERS
- ▁WRIST
- PIECE
- ▁MUSKET
- ▁VALENTINE
- ▁TYRANT
- ▁ABRAHAM
- ▁MEDIUM
- ▁ARTIFICIAL
- ▁FACULTY
- ▁OBLIGATION
- ▁RESEMBLANCE
- ▁INQUIRIES
- ▁DETAIN
- ▁SWARM
- ▁PLEDGE
- ▁ADMIRABLE
- ▁DEFECT
- ▁SUPERINTEND
- ▁PATRIOT
- ▁CLUNG
- ▁DISMAL
- ▁RECIT
- ▁IGNOR
- ▁AMELIA
- ▁JUSTIFY
- ▁ELEPHANT
- ▁ESTIMATE
- ▁KNELT
- ▁SERVING
- ▁WHIM
- ▁SHRILL
- ▁STUDIO
- ▁TEXT
- ▁ALEXANDER
- ▁WROUGHT
- ▁ABUNDANT
- ▁SITUATED
- ▁REGAIN
- ▁FIERY
- ▁SNEER
- ▁SWEAT
- ▁GLARE
- ▁NIGH
- ▁ESCORT
- ▁INEVITABLE
- ▁PSMITH
- ▁RELUCTANT
- ▁PRECEDING
- ▁RESORT
- ▁OUTRAGE
- ▁AMBASSADOR
- ▁CONSOLATION
- ▁RECOGNITION
- ▁REMORSE
- ▁BEHALF
- ▁FORMIDABLE
- ▁GRAVITY
- ▁DIVIDE
- ▁CONFRONT
- ▁GIGANTIC
- ▁OCTOBER
- ▁FLANK
- ▁SLEW
- ▁CLARA
- ▁FILM
- ▁BULK
- ▁POMP
- ▁ELEANOR
- ▁EMPHASIS
- ▁JAPANESE
- ▁CAVALRY
- ▁EXCLUSIVE
- ▁PERFUME
- ▁BRONZE
- ▁FEDERAL
- ▁LIQUID
- ▁RUBBING
- ▁OVEN
- DOLPH
- ▁CONVULS
- ▁DEPRIVED
- ▁RESPONSIBILITY
- ▁SIGNIFICANT
- ▁WAISTCOAT
- ▁CLUSTER
- ▁MARTHA
- ▁REVERSE
- ▁ATTORNEY
- ▁DROOP
- ▁SKILFUL
- ▁HABITUAL
- ▁PUMP
- ▁INTERVEN
- ▁OWL
- ▁CONJECTURE
- ▁FANTASTIC
- ▁RESPONSIBLE
- ▁DESTINED
- ▁DOCUMENT
- ▁THEREUPON
- ▁GODDESS
- ▁PACIFIC
- ▁WARRANT
- ▁COSTUME
- ▁BRIDLE
- ▁CALIFORNIA
- ▁DEMOCRATIC
- ▁EUSTACE
- ▁SQUIRREL
- ▁UNCOMMON
- ▁MARVELLOUS
- ▁PLOUGH
- ▁TRAGEDY
- ▁VAULT
- ▁HESITATE
- ▁REFRAIN
- ▁ADMIRING
- ▁CORPORAL
- ▁ENTITLED
- ▁SHREWD
- ▁SQUEEZ
- ▁ACCURATE
- ▁TEMPEST
- ▁MONUMENT
- ▁SIEGE
- ▁CHINESE
- ▁RAVEN
- ▁LOUNG
- ▁ASSASSIN
- ▁INFLICT
- ▁AGITATED
- ▁DESIRABLE
- ▁EARLIEST
- ▁LAUNCH
- ▁PILOT
- ▁PULSE
- ▁MUTE
- LEIGH
- ▁LIQUOR
- ▁SCARECROW
- ▁SKULL
- ▁DESOLATE
- ▁SUBLIME
- ▁SERENE
- ▁RECESS
- ▁WAKING
- ▁CHARLOTTE
- ▁CIRCULAR
- ▁INJUSTICE
- ▁PINOCCHIO
- ▁PRISCILLA
- ▁THYSELF
- ▁OCCURRENCE
- ▁CASUAL
- ▁FRANTIC
- ▁LEGEND
- ▁FERTIL
- ▁BACKGROUND
- ▁DELICACY
- ▁ESTRALLA
- ▁MANUSCRIPT
- ▁RESPONSE
- ▁UNIVERSITY
- ▁WOLVES
- ▁SCANDAL
- ▁STUMBLE
- ▁HOARSE
- ▁BODILY
- ▁CONVENT
- ▁EXAMINING
- ▁INCAPABLE
- ▁PERCEIVING
- ▁PHILADELPHIA
- ▁SUBSEQUENT
- ▁THIEVES
- ▁ACCUMULAT
- ▁DAMSEL
- ▁SCOTCH
- ▁UNDERNEATH
- ▁NOBILITY
- ▁SMASH
- ▁REVOLT
- ▁ENGAGE
- ▁CATHEDRAL
- ▁CHAMPION
- ▁DESPATCH
- ▁ETERNITY
- ▁JANUARY
- ▁PLEADED
- ▁PROBABILITY
- ▁JIMMIE
- ▁PARALLEL
- ▁FISHERMAN
- ▁JERRY
- ▁SWORE
- ▁DRAUGHT
- ▁OPPONENT
- ▁PRIMITIVE
- ▁SIGNIFICANCE
- ▁SUBSTANTIAL
- ▁AMAZED
- ▁DUNBAR
- ▁COMMEND
- ▁CONTEMPLATE
- ▁TESTIMONY
- ▁IMPERIAL
- ▁ADAPT
- ▁JUICE
- ▁CALAMIT
- CULAR
- ▁CHATEAU
- ▁PHOENIX
- ▁PRUDENT
- ▁SOLUTION
- ▁VILLEFORT
- ▁REACTION
- ▁RELAX
- ▁YU
- ▁PROHIBIT
- ▁DISTRUST
- ▁PLUNDER
- ▁WELFARE
- ▁NAVIGAT
- ▁PARLOR
- ▁LAZY
- ▁DETACH
- OMETER
- ▁PRIV
- ▁DISCOURAGE
- ▁OBSTINATE
- ▁REJOICING
- ▁SERMON
- ▁VEHICLE
- ▁FANCIES
- ▁ENLIGHTEN
- ▁ACUTE
- ▁ILLUSION
- ▁ANTHEA
- ▁MARTIAN
- ▁EXCITE
- ▁GENEROSITY
- OLOGIST
- ▁AMAZING
- ▁UNWORTHY
- ▁INTERNAL
- ▁INCENSE
- ▁VIBRAT
- ▁ADHERE
- ROACH
- ▁FEBRUARY
- ▁MEXICAN
- ▁POTATOES
- ▁INCESSANT
- ▁INTERPOSED
- ▁PARCEL
- ▁VEXED
- ▁PROMOTE
- MIDST
- ▁ARISTOCRAT
- ▁CYRIL
- ▁EMBARK
- ▁ABUNDANCE
- ▁LITERALLY
- ▁SURGEON
- ▁TERRACE
- ▁ATLANTIC
- ▁MARTYR
- ▁SPECK
- ▁SENATE
- ▁LOAF
- ▁ADMINISTER
- ▁APPREHEND
- ▁SUBDUED
- ▁TEMPORARY
- ▁DOMINION
- ▁ELABORATE
- ▁DIGNIFIED
- ▁ELIZA
- ▁SPLASH
- ▁CONSEIL
- ▁DEXTER
- ▁UNSEEN
- ▁TRAGIC
- VOCATION
- ▁GRATIFY
- ▁BACHELOR
- ▁DEFENSE
- ▁EXCURSION
- ▁FACULTIES
- ▁PROPRIETOR
- ▁SYMPATHETIC
- ▁UNNECESSARY
- ▁RADIANT
- ▁VACANT
- ▁OUNCE
- ▁SCREW
- ▁PHENOMENON
- ▁PROMINENT
- ▁WORRIED
- ▁STUDIES
- ▁CLIMATE
- ▁KEITH
- ▁ARAMIS
- ▁BLISS
- ▁CONTINUAL
- ▁SURPASS
- ▁HEBREW
- ▁IDENTITY
- ▁PROVOKE
- ▁TEMPERAMENT
- ▁CHARIOT
- ▁HARBOR
- ▁NINTH
- ▁PRIOR
- ▁DESIROUS
- ▁JERUSALEM
- ▁UNDERTAKING
- ▁EDISON
- ▁MIRTH
- ▁SCOUT
- ▁APPARATUS
- ▁ILLUSTRATION
- ▁INTELLIGIBLE
- ▁INVARIABLY
- ▁PIERCED
- ▁REVIEW
- ▁FLICKER
- ▁HAZARD
- ▁REVELATION
- ▁DIXON
- ▁EXCITING
- ▁GOSPEL
- ▁CONSTANCE
- ▁OVERTAKE
- ▁GUINEA
- ▁ALADDIN
- ▁CHICAGO
- ▁TULLIVER
- ▁HAMILTON
- ▁GARRISON
- ▁DISCIPLE
- ▁INTENSITY
- ▁TRAITOR
- ▁CHANCELLOR
- ▁PROVERB
- ▁DAGGER
- ▁FORESEE
- ▁CONFIDE
- ▁GLIMMER
- ▁CHAUVELIN
- ▁ILLUSTRATE
- ▁VOLUNTEER
- ▁JUNGLE
- ▁STREAK
- ▁SUNRISE
- ▁DISSOLV
- ▁QUEST
- ▁AWHILE
- ▁FELICITY
- ▁LEGISLATURE
- ▁LEONORA
- ▁MAGAZINE
- ▁PITIFUL
- ▁COLONY
- ▁SHAWL
- ▁ARRIVING
- ▁FUNDAMENTAL
- ▁CARPENTER
- ▁OVERFLOW
- ▁EXPAND
- ▁HARVEST
- ▁FEMININE
- ▁INNUMERABLE
- ▁SCRAMBLE
- ▁TWENTIETH
- ▁TRIFLING
- ▁GHASTL
- ▁CONQUEST
- ▁DANIEL
- ▁FACILIT
- ▁FORSAKE
- ▁BEHAVIOUR
- ▁GORGEOUS
- ▁PRODUCING
- ▁HAPPIER
- ▁PROMISING
- ▁RAINBOW
- ▁INSTINCTIVELY
- ▁DECREE
- ▁EYEBROWS
- ▁IRRESISTIBLE
- ▁PHARAOH
- ▁SCROOGE
- ▁UNNATURAL
- ▁CRUMBS
- ▁REFINED
- ▁DREARY
- ▁TRENCH
- ▁CONVINCE
- ▁FRINGE
- ▁EXTREMITY
- ▁INTIMACY
- ▁SCOUNDREL
- ▁SUFFRAGE
- ▁UNEASINESS
- ▁BARRICADE
- ▁CIRCULAT
- ▁SAMUEL
- ▁BRUCE
- ▁DARCY
- <sos/eos>
input_size: null
init: null
model_conf:
transducer_weight: 1.0
auxiliary_ctc_weight: 0.3
report_cer: true
report_wer: true
encoder_conf:
main_conf:
pos_wise_layer_type: linear
pos_wise_act_type: swish
pos_enc_layer_type: rel_pos
conv_mod_act_type: swish
input_conf:
block_type: conv2d
dropout_rate_pos_enc: 0.1
dim_output: 512
dim_conv: 512
body_conf:
- block_type: conformer
dim_linear: 2048
dim_hidden: 512
heads: 8
dropout_rate: 0.1
dropout_rate_pos_enc: 0.1
dropout_rate_pos_wise: 0.1
dropout_rate_att: 0.1
normalize_before: true
macaron_style: true
conv_mod_kernel: 31
num_blocks: 12
joint_network_conf:
dim_joint_space: 640
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
decoder: rnn
decoder_conf:
rnn_type: lstm
num_layers: 1
dim_embedding: 512
dim_hidden: 512
dropout: 0.1
dropout_embed: 0.2
required:
- output_dir
- token_list
version: '202204'
distributed: true
```
</details>
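The configuration above describes a Conformer encoder with a Transducer (RNN-T) decoder over 5000 English BPE units. Below is a minimal, hedged inference sketch using the generic ESPnet2 Python interface; the model tag is a placeholder (replace it with this repository's actual id), and checkpoints trained with the standalone transducer task may instead require `espnet2.bin.asr_transducer_inference.Speech2Text`.

```python
# Sketch only: decode a 16 kHz mono WAV file with an ESPnet2 checkpoint.
# "<this-repo-tag>" is a placeholder, not a confirmed model id.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "<this-repo-tag>",  # hypothetical tag; replace with the real repository id
    beam_size=10,
)

speech, rate = soundfile.read("sample.wav")  # must match fs: 16k from the config above
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]     # best hypothesis first
print(text)
```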
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
BigSalmon/Lincoln4 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2022-04-29T07:26:18Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-ko-to-en3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-ko-to-en3
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5175
- Bleu: 75.215
- Gen Len: 9.726
## Model description
More information needed
## Intended uses & limitations
More information needed
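Pending more details from the authors, a minimal usage sketch for Korean→English translation with an M2M100 fine-tune is given below. The repository path is a placeholder (the card does not state where the checkpoint is published), and the language codes follow the standard M2M100 convention.

```python
# Hedged sketch: Korean -> English translation with an M2M100 fine-tune.
# "your-org/m2m100_418M-finetuned-ko-to-en3" is a placeholder path, not a confirmed repo id.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "your-org/m2m100_418M-finetuned-ko-to-en3"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "ko"  # source language: Korean
encoded = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("en"),  # force English as the target language
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```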
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 0.99 | 103 | 2.7756 | 8.9955 | 9.425 |
| No log | 1.99 | 206 | 0.7248 | 63.7645 | 9.6421 |
| No log | 2.99 | 309 | 0.5175 | 75.215 | 9.726 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/MrLincoln | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: multilingual
widget:
- text: "In editing the Fragments , I have availed myself of Mr . R . Ellis ’ acute remarks on them in the Cambridge Journal of Philology , Vol . IV , and that I am largely indebted , as every editor must now be , to the edition of the Tragic Fragments by A . Nauck , Leipzig , 1856 ."
- text: "459 . Skyros klang dem Athener etwa wie Pholegandros und Sikinos bei Solon Eleg . 1 , 4 , dem Römer Ulubrae , Butunti ."
- text: "Celles d ’ Ajax et des siens occupaient l ' extrême aile gauche , vers le promontoire Rhétée , et confinaient tout à la fois au retranchement et à la mer ( // . XIT1 , 681 ; Heynce , excursns cité ) ,"
license: mit
---
|
BigSalmon/MrLincoln11 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1514648481281056772/ACunKh0I_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484924573032148993/qdB7hbSU_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1341030286386192386/TzEiVCaJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">greg & John W. Rich (Fake Tech Exec) & Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth)</div>
<div style="text-align: center; font-size: 14px;">@cokedupoptions-greg16676935420-parikpatelcfa</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from greg & John W. Rich (Fake Tech Exec) & Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth).
| Data | greg | John W. Rich (Fake Tech Exec) | Dr. Parik Patel, BA, CFA, ACCA Esq. (drpatel.eth) |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3247 | 3250 |
| Retweets | 27 | 202 | 22 |
| Short tweets | 664 | 331 | 719 |
| Tweets kept | 2556 | 2714 | 2509 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/snhk0760/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cokedupoptions-greg16676935420-parikpatelcfa's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/iresidwo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/iresidwo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cokedupoptions-greg16676935420-parikpatelcfa')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/MrLincoln12 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags: autotrain
language: unk
widget:
- text: "ACE2 overexpression in AAV cell lines"
datasets:
- Mim/autotrain-data-procell-expert
co2_eq_emissions: 0.004814823138367317
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 800724769
- CO2 Emissions (in grams): 0.004814823138367317
## Validation Metrics
- Loss: 0.4749071002006531
- Accuracy: 0.9
- Precision: 0.8928571428571429
- Recall: 0.9615384615384616
- AUC: 0.9065934065934066
- F1: 0.9259259259259259
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Mim/autotrain-procell-expert-800724769
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Mim/autotrain-procell-expert-800724769", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Mim/autotrain-procell-expert-800724769", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
BigSalmon/MrLincoln125MNeo | [
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language: de
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
license: apache-2.0
---
# doc2query/msmarco-german-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-german-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python ist eine universelle, üblicherweise interpretierte, höhere Programmiersprache. Sie hat den Anspruch, einen gut lesbaren, knappen Programmierstil zu fördern. So werden beispielsweise Blöcke nicht durch geschweifte Klammern, sondern durch Einrückungen strukturiert."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
BigSalmon/MrLincoln7 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
A fancy weekly newsletter generator for the Wonderflow Development team. NOTE: Use with caution.
To use this model, first load it:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("mpangrazzi/wonderflow_newsletter")
model = AutoModelForCausalLM.from_pretrained("mpangrazzi/wonderflow_newsletter")
```
Then, use a `pipeline` to get predictions:
```python
from transformers import pipeline
text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
inputs = ["This week the development team"]
samples = text_generator(
inputs,
do_sample=True,
max_length=150,
num_return_sequences=5,
    num_beams=5,
top_p=0.90,
temperature=1.3
)
outputs = [entry["generated_text"] for sample in samples for entry in sample]
for entry in outputs:
print(f"{entry}\n\n")
```
|
BigSalmon/ParaphraseParentheses | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-04-29T11:39:55Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-finetuned-powo_all
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-powo_all
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -343, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
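For reference, an optimizer with exactly this structure (AdamWeightDecay wrapping a WarmUp/PolynomialDecay schedule) is what `transformers.create_optimizer` produces; the sketch below is an assumption about how it was likely built, not a record of the actual training code. The negative `decay_steps` of -343 would follow from `decay_steps = num_train_steps - num_warmup_steps` with roughly 657 total steps, which is itself an inference.

```python
# Hedged reconstruction of the optimizer configuration listed above.
# num_train_steps=657 is inferred from decay_steps (-343) plus warmup_steps (1000); treat it as an assumption.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,           # initial_learning_rate from the config
    num_train_steps=657,     # assumed; yields decay_steps = 657 - 1000 = -343
    num_warmup_steps=1000,   # warmup_steps from the config
    weight_decay_rate=0.01,  # weight_decay_rate from the config
)
```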
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/ParaphraseParentheses2.0 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language: ar
datasets:
- unicamp-dl/mmarco
widget:
- text: "بايثون (بالإنجليزية: Python) هي لغة برمجة، عالية المستوى سهلة التعلم مفتوحة المصدر قابلة للتوسيع، تعتمد أسلوب البرمجة الكائنية (OOP). لغة بايثون هي لغة مُفسَّرة، ومُتعدِدة الاستخدامات، وتستخدم بشكل واسع في العديد من المجالات، كبناء البرامج المستقلة باستخدام الواجهات الرسومية وفي تطبيقات الويب، ويمكن استخدامها كلغة برمجة نصية للتحكم في أداء العديد من البرمجيات مثل بلندر. بشكل عام، يمكن استخدام بايثون لعمل البرامج البسيطة للمبتدئين، ولإنجاز المشاريع الضخمة في الوقت نفسه. غالباً ما يُنصح المبتدؤون في ميدان البرمجة بتعلم هذه اللغة لأنها من بين أسرع اللغات البرمجية تعلماً."
license: apache-2.0
---
# doc2query/msmarco-arabic-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-arabic-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "بايثون (بالإنجليزية: Python) هي لغة برمجة، عالية المستوى سهلة التعلم مفتوحة المصدر قابلة للتوسيع، تعتمد أسلوب البرمجة الكائنية (OOP). لغة بايثون هي لغة مُفسَّرة، ومُتعدِدة الاستخدامات، وتستخدم بشكل واسع في العديد من المجالات، كبناء البرامج المستقلة باستخدام الواجهات الرسومية وفي تطبيقات الويب، ويمكن استخدامها كلغة برمجة نصية للتحكم في أداء العديد من البرمجيات مثل بلندر. بشكل عام، يمكن استخدام بايثون لعمل البرامج البسيطة للمبتدئين، ولإنجاز المشاريع الضخمة في الوقت نفسه. غالباً ما يُنصح المبتدؤون في ميدان البرمجة بتعلم هذه اللغة لأنها من بين أسرع اللغات البرمجية تعلماً."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
BigSalmon/Points | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language: zh
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python(英國發音:/ˈpaɪθən/ 美國發音:/ˈpaɪθɑːn/),是一种广泛使用的解释型、高级和通用的编程语言。Python支持多种编程范型,包括函数式、指令式、反射式、结构化和面向对象编程。它拥有动态类型系统和垃圾回收功能,能够自动管理内存使用,并且其本身拥有一个巨大而广泛的标准库。它的语言结构以及面向对象的方法旨在帮助程序员为小型的和大型的项目编写清晰的、合乎逻辑的代码。"
license: apache-2.0
---
# doc2query/msmarco-chinese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-chinese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python(英國發音:/ˈpaɪθən/ 美國發音:/ˈpaɪθɑːn/),是一种广泛使用的解释型、高级和通用的编程语言。Python支持多种编程范型,包括函数式、指令式、反射式、结构化和面向对象编程。它拥有动态类型系统和垃圾回收功能,能够自动管理内存使用,并且其本身拥有一个巨大而广泛的标准库。它的语言结构以及面向对象的方法旨在帮助程序员为小型的和大型的项目编写清晰的、合乎逻辑的代码。"
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
BigSalmon/SimplifyText | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | 2022-04-29T11:54:14Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: maxime7770/model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# maxime7770/model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1211
- Validation Loss: 0.4812
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
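The card does not state the downstream task, so the snippet below is only a minimal loading sketch. It assumes the checkpoint keeps a sequence-classification head and is available under the repo id `maxime7770/model`; both are assumptions, not facts taken from this card.
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Hypothetical usage: the repo id and the sequence-classification head are assumptions.
model_name = "maxime7770/model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# CamemBERT is a French model, so a French example input is used here.
inputs = tokenizer("Ce produit est excellent !", return_tensors="tf")
outputs = model(inputs)
print(outputs.logits)  # raw class scores; the label names depend on the (unstated) training setup
```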
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 650, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5966 | 1.5898 | 0 |
| 1.5577 | 1.5576 | 1 |
| 1.5034 | 1.4761 | 2 |
| 1.4034 | 1.3538 | 3 |
| 1.2864 | 1.2163 | 4 |
| 1.1502 | 1.0980 | 5 |
| 1.0085 | 0.9988 | 6 |
| 0.8828 | 0.9130 | 7 |
| 0.7863 | 0.8445 | 8 |
| 0.7036 | 0.7871 | 9 |
| 0.6322 | 0.7399 | 10 |
| 0.5731 | 0.7030 | 11 |
| 0.5180 | 0.6714 | 12 |
| 0.4757 | 0.6432 | 13 |
| 0.4366 | 0.6204 | 14 |
| 0.4057 | 0.6006 | 15 |
| 0.3743 | 0.5827 | 16 |
| 0.3475 | 0.5689 | 17 |
| 0.3221 | 0.5577 | 18 |
| 0.2971 | 0.5467 | 19 |
| 0.2815 | 0.5372 | 20 |
| 0.2700 | 0.5297 | 21 |
| 0.2521 | 0.5225 | 22 |
| 0.2343 | 0.5168 | 23 |
| 0.2265 | 0.5117 | 24 |
| 0.2143 | 0.5074 | 25 |
| 0.2063 | 0.5038 | 26 |
| 0.1941 | 0.5001 | 27 |
| 0.1843 | 0.4976 | 28 |
| 0.1782 | 0.4949 | 29 |
| 0.2012 | 0.4938 | 30 |
| 0.1691 | 0.4930 | 31 |
| 0.1626 | 0.4910 | 32 |
| 0.1884 | 0.4886 | 33 |
| 0.1547 | 0.4870 | 34 |
| 0.1492 | 0.4858 | 35 |
| 0.1445 | 0.4850 | 36 |
| 0.1415 | 0.4842 | 37 |
| 0.1383 | 0.4836 | 38 |
| 0.1374 | 0.4832 | 39 |
| 0.1336 | 0.4826 | 40 |
| 0.1322 | 0.4823 | 41 |
| 0.1295 | 0.4820 | 42 |
| 0.1268 | 0.4818 | 43 |
| 0.1261 | 0.4816 | 44 |
| 0.1253 | 0.4815 | 45 |
| 0.1275 | 0.4814 | 46 |
| 0.1247 | 0.4812 | 47 |
| 0.1256 | 0.4812 | 48 |
| 0.1211 | 0.4812 | 49 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BigSalmon/T5F | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 6 | null | ---
language: id
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi. Dibuat oleh Guido van Rossum dan pertama kali dirilis pada tahun 1991, filosofi desain Python menekankan keterbacaan kode dengan penggunaan spasi putih yang signifikan. Konstruksi bahasanya dan pendekatan berorientasi objek bertujuan untuk membantu pemrogram menulis kode yang jelas dan logis untuk proyek skala kecil dan besar."
license: apache-2.0
---
# doc2query/msmarco-indonesian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-indonesian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python adalah bahasa pemrograman tujuan umum yang ditafsirkan, tingkat tinggi. Dibuat oleh Guido van Rossum dan pertama kali dirilis pada tahun 1991, filosofi desain Python menekankan keterbacaan kode dengan penggunaan spasi putih yang signifikan. Konstruksi bahasanya dan pendekatan berorientasi objek bertujuan untuk membantu pemrogram menulis kode yang jelas dan logis untuk proyek skala kecil dan besar."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
BigSalmon/T5Salmon | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 6 | null | ---
language: it
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python è un linguaggio di programmazione di alto livello, orientato a oggetti, adatto, tra gli altri usi, a sviluppare applicazioni distribuite, scripting, computazione numerica e system testing."
license: apache-2.0
---
# doc2query/msmarco-italian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-italian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python è un linguaggio di programmazione di alto livello, orientato a oggetti, adatto, tra gli altri usi, a sviluppare applicazioni distribuite, scripting, computazione numerica e system testing."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
BigSalmon/T5Salmon2 | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 13 | null | ---
language: ja
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python(パイソン)はインタープリタ型の高水準汎用プログラミング言語である。グイド・ヴァン・ロッサムにより創り出され、1991年に最初にリリースされたPythonの設計哲学は、有意なホワイトスペース(オフサイドルール)の顕著な使用によってコードの可読性を重視している。その言語構成とオブジェクト指向のアプローチは、プログラマが小規模なプロジェクトから大規模なプロジェクトまで、明確で論理的なコードを書くのを支援することを目的としている。"
license: apache-2.0
---
# doc2query/msmarco-japanese-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-japanese-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python(パイソン)はインタープリタ型の高水準汎用プログラミング言語である。グイド・ヴァン・ロッサムにより創り出され、1991年に最初にリリースされたPythonの設計哲学は、有意なホワイトスペース(オフサイドルール)の顕著な使用によってコードの可読性を重視している。その言語構成とオブジェクト指向のアプローチは、プログラマが小規模なプロジェクトから大規模なプロジェクトまで、明確で論理的なコードを書くのを支援することを目的としている。"
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
BigSalmon/prepositions | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: ru
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (МФА: [ˈpʌɪθ(ə)n]; в русском языке встречаются названия пито́н или па́йтон) — высокоуровневый язык программирования общего назначения с динамической строгой типизацией и автоматическим управлением памятью, ориентированный на повышение производительности разработчика, читаемости кода и его качества, а также на обеспечение переносимости написанных на нём программ."
license: apache-2.0
---
# doc2query/msmarco-russian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-russian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (МФА: [ˈpʌɪθ(ə)n]; в русском языке встречаются названия пито́н или па́йтон) — высокоуровневый язык программирования общего назначения с динамической строгой типизацией и автоматическим управлением памятью, ориентированный на повышение производительности разработчика, читаемости кода и его качества, а также на обеспечение переносимости написанных на нём программ."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
BigTooth/DialoGPT-Megumin | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
language: es
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python es un lenguaje de alto nivel de programación interpretado cuya filosofía hace hincapié en la legibilidad de su código, se utiliza para desarrollar aplicaciones de todo tipo, ejemplos: Instagram, Netflix, Panda 3D, entre otros.2 Se trata de un lenguaje de programación multiparadigma, ya que soporta parcialmente la orientación a objetos, programación imperativa y, en menor medida, programación funcional. Es un lenguaje interpretado, dinámico y multiplataforma."
license: apache-2.0
---
# doc2query/msmarco-spanish-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-spanish-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python es un lenguaje de alto nivel de programación interpretado cuya filosofía hace hincapié en la legibilidad de su código, se utiliza para desarrollar aplicaciones de todo tipo, ejemplos: Instagram, Netflix, Panda 3D, entre otros.2 Se trata de un lenguaje de programación multiparadigma, ya que soporta parcialmente la orientación a objetos, programación imperativa y, en menor medida, programación funcional. Es un lenguaje interpretado, dinámico y multiplataforma."
def create_queries(para):
input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
        # Here we use top_p / top_k random sampling. It generates more diverse queries, but of lower quality
sampling_outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
top_k=10,
num_return_sequences=5
)
# Here we use Beam-search. It generates better quality queries, but with less diversity
beam_outputs = model.generate(
input_ids=input_ids,
max_length=64,
num_beams=5,
no_repeat_ngram_size=2,
num_return_sequences=5,
early_stopping=True
)
print("Paragraph:")
print(para)
print("\nBeam Outputs:")
for i in range(len(beam_outputs)):
query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
print("\nSampling Outputs:")
for i in range(len(sampling_outputs)):
query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_p/top_k sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
BlueGamerBeast/DialoGPT-small-Morgana | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language:
- de
- fr
- en
- ro
- zh
thumbnail:
tags:
- sentence alignment
license: bsd-3-clause
---
# AWESOME: Aligning Word Embedding Spaces of Multilingual Encoders
This model comes from the following GitHub repository: [https://github.com/neulab/awesome-align](https://github.com/neulab/awesome-align)
It corresponds to this paper: [https://arxiv.org/abs/2101.08231](https://arxiv.org/abs/2101.08231)
Please cite the original paper if you decide to use the model:
```
@inproceedings{dou2021word,
title={Word Alignment by Fine-tuning Embeddings on Parallel Corpora},
author={Dou, Zi-Yi and Neubig, Graham},
booktitle={Conference of the European Chapter of the Association for Computational Linguistics (EACL)},
year={2021}
}
```
`awesome-align` is a tool that can extract word alignments from multilingual BERT (mBERT) [Demo](https://colab.research.google.com/drive/1205ubqebM0OsZa1nRgbGJBtitgHqIVv6?usp=sharing) and allows you to fine-tune mBERT on parallel corpora for better alignment quality (see our paper for more details).
## Usage (copied from this [DEMO](https://colab.research.google.com/drive/1205ubqebM0OsZa1nRgbGJBtitgHqIVv6?usp=sharing) )
```python
from transformers import AutoModel, AutoTokenizer
import itertools
import torch
# load model
model = AutoModel.from_pretrained("aneuraz/awesome-align-with-co")
tokenizer = AutoTokenizer.from_pretrained("aneuraz/awesome-align-with-co")
# model parameters
align_layer = 8
threshold = 1e-3
# define inputs
src = 'awesome-align is awesome !'
tgt = '牛对齐 是 牛 !'
# pre-processing
sent_src, sent_tgt = src.strip().split(), tgt.strip().split()
token_src, token_tgt = [tokenizer.tokenize(word) for word in sent_src], [tokenizer.tokenize(word) for word in sent_tgt]
wid_src, wid_tgt = [tokenizer.convert_tokens_to_ids(x) for x in token_src], [tokenizer.convert_tokens_to_ids(x) for x in token_tgt]
ids_src, ids_tgt = tokenizer.prepare_for_model(list(itertools.chain(*wid_src)), return_tensors='pt', model_max_length=tokenizer.model_max_length, truncation=True)['input_ids'], tokenizer.prepare_for_model(list(itertools.chain(*wid_tgt)), return_tensors='pt', truncation=True, model_max_length=tokenizer.model_max_length)['input_ids']
sub2word_map_src = []
for i, word_list in enumerate(token_src):
sub2word_map_src += [i for x in word_list]
sub2word_map_tgt = []
for i, word_list in enumerate(token_tgt):
sub2word_map_tgt += [i for x in word_list]
# alignment
align_layer = 8
threshold = 1e-3
model.eval()
with torch.no_grad():
out_src = model(ids_src.unsqueeze(0), output_hidden_states=True)[2][align_layer][0, 1:-1]
out_tgt = model(ids_tgt.unsqueeze(0), output_hidden_states=True)[2][align_layer][0, 1:-1]
dot_prod = torch.matmul(out_src, out_tgt.transpose(-1, -2))
softmax_srctgt = torch.nn.Softmax(dim=-1)(dot_prod)
softmax_tgtsrc = torch.nn.Softmax(dim=-2)(dot_prod)
softmax_inter = (softmax_srctgt > threshold)*(softmax_tgtsrc > threshold)
align_subwords = torch.nonzero(softmax_inter, as_tuple=False)
align_words = set()
for i, j in align_subwords:
align_words.add( (sub2word_map_src[i], sub2word_map_tgt[j]) )
print(align_words)
```
|
Bman/DialoGPT-medium-harrypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Pytorch Port of [EmoRoberta model](https://huggingface.co/arpanghoshal/EmoRoBERTa). |
BobBraico/distilbert-base-uncased-finetuned-imdb | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-29T15:31:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7411
- Wer: 0.5600
## Model description
More information needed
## Intended uses & limitations
More information needed
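As a rough illustration of intended use, a minimal transcription sketch is given below. The repo id is an assumption (the card does not give a full model path), and the audio file name is a placeholder:
```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual path of this checkpoint.
asr = pipeline("automatic-speech-recognition", model="wav2vec2-base-timit-demo-colab1")

# wav2vec2-base expects 16 kHz mono audio, as in TIMIT.
print(asr("speech_sample.wav")["text"])
```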
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0773 | 13.89 | 500 | 3.1073 | 1.0 |
| 1.2444 | 27.78 | 1000 | 0.7411 | 0.5600 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
BogdanKuloren/continual-learning-paper-embeddings-model | [
"pytorch",
"mpnet",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"MPNetModel"
],
"model_type": "mpnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-ko-to-en4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-ko-to-en4
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4598
- Bleu: 85.3745
- Gen Len: 9.7522
## Model description
More information needed
## Intended uses & limitations
More information needed
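As a rough illustration of intended use, a minimal Korean-to-English inference sketch is given below. The repo id is an assumption (the card does not give a full model path), and the example sentence is arbitrary:
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Hypothetical repo id; replace with the actual path of this checkpoint.
model_name = "m2m100_418M-finetuned-ko-to-en4"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

tokenizer.src_lang = "ko"  # source language: Korean
inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")

# Force the decoder to start with the English language token.
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"), max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```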
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 105 | 1.8667 | 24.5072 | 9.523 |
| No log | 2.0 | 210 | 0.8581 | 57.9973 | 9.2779 |
| No log | 3.0 | 315 | 0.6587 | 69.4588 | 9.7399 |
| No log | 4.0 | 420 | 0.5762 | 74.5636 | 9.6775 |
| 1.4539 | 5.0 | 525 | 0.5254 | 78.8897 | 9.6946 |
| 1.4539 | 6.0 | 630 | 0.4952 | 81.0054 | 9.7073 |
| 1.4539 | 7.0 | 735 | 0.4773 | 83.0792 | 9.7233 |
| 1.4539 | 8.0 | 840 | 0.4669 | 84.4309 | 9.7429 |
| 1.4539 | 9.0 | 945 | 0.4616 | 85.0965 | 9.749 |
| 0.144 | 10.0 | 1050 | 0.4598 | 85.3745 | 9.7522 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BonjinKim/dst_kor_bert | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null | {
"architectures": [
"BertForPreTraining"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ## Common voice release generator
1. Copy the latest release id from the `RELEASES` dict in https://github.com/common-voice/common-voice/blob/main/web/src/components/pages/datasets/releases.ts
to the `VERSIONS` variable in `generate_datasets.py`.
2. Copy the languages from https://github.com/common-voice/common-voice/blob/release-v1.78.0/web/locales/en/messages.ftl
(replacing `release-v1.78.0` with the latest version tag) to the `languages.ftl` file.
3. Run `python generate_datasets.py` to generate the dataset repos.
4. `cd ..`
5. `huggingface-cli repo create --type dataset --organization mozilla-foundation common_voice_11_0`
6. `git clone https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0`
7. `cd common_voice_11_0`
8. `cp ../common_voice_generator/common_voice_11_0/* ./`
9. `git add . && git commit -m "Release" && git push` |
Bosio/full-sentence-distillroberta3-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: skin_type
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8222222328186035
---
# skin_type
When aiming for fairness in image classification of humans, knowing the skin type of the subjects is relevant to make sure the model performs correctly on all skin types.
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
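A minimal usage sketch for the resulting classifier is shown below; the repo id and the image path are placeholders, not taken from this card:
```python
from transformers import pipeline

# Hypothetical repo id and input image; adjust both to your setup.
classifier = pipeline("image-classification", model="skin_type")
for prediction in classifier("portrait.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```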
## Example Images
#### dark skin

#### light skin
 |
Botslity/Bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
language: uk
---
# gpt2-large-wechsel-ukrainian
[`gpt2-large`](https://huggingface.co/gpt2-large) transferred to Ukrainian using the method from the NAACL2022 paper [WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models](https://arxiv.org/abs/2112.06598). |
Brayan/CNN_Brain_Tumor | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: data2vec-text-base-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.7862455425369332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-mnli
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5521
- Accuracy: 0.7862
## Model description
More information needed
## Intended uses & limitations
More information needed
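As a rough illustration of intended use, a minimal NLI inference sketch is given below. The repo id is an assumption (the card does not give a full model path), and the label names come from whatever `id2label` mapping was saved with the checkpoint:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id; replace with the actual path of this checkpoint.
model_name = "data2vec-text-base-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # e.g. entailment / neutral / contradiction
```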
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.099 | 1.0 | 24544 | 1.0987 | 0.3182 |
| 1.0993 | 2.0 | 49088 | 1.0979 | 0.3545 |
| 0.7481 | 3.0 | 73632 | 0.7197 | 0.7046 |
| 0.5671 | 4.0 | 98176 | 0.5862 | 0.7728 |
| 0.5505 | 5.0 | 122720 | 0.5521 | 0.7862 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BrunoNogueira/DialoGPT-kungfupanda | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- generated_from_trainer
datasets:
- cfilt/HiNER-collapsed
metrics:
- precision
- recall
- f1
model-index:
- name: HiNER-collapsed-muril-base-cased
results:
- task:
name: Token Classification
type: token-classification
dataset:
type: cfilt/HiNER-collapsed
name: HiNER Collapsed
metrics:
- name: Precision
type: precision
value: 0.9049101352603298
- name: Recall
type: recall
value: 0.9209156735555891
- name: F1
type: f1
value: 0.9128427506027924
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiNER-collapsed-muril-base-cased
This model was trained from scratch on the cfilt/HiNER-collapsed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
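As a rough illustration of intended use, a minimal NER inference sketch is given below. The repo id is an assumption based on the model name, and the Hindi sentence is an arbitrary example:
```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual path of this checkpoint.
ner = pipeline(
    "token-classification",
    model="cfilt/HiNER-collapsed-muril-base-cased",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
for entity in ner("गांधीजी का जन्म पोरबंदर में हुआ था।"):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```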
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.14.0
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Bryanwong/wangchanberta-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-29T17:35:16Z | ---
license: mit
language: uk
---
# gpt2-wechsel-ukrainian
[`gpt2`](https://huggingface.co/gpt2) transferred to Ukrainian using the method from the NAACL2022 paper [WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models](https://arxiv.org/abs/2112.06598). |
Bubb-les/DisloGPT-medium-HarryPotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-04-29T18:27:41Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-trainings
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-trainings
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2580
- Rouge1: 41.5251
- Rouge2: 19.8842
- Rougel: 36.4895
- Rougelsum: 37.2565
## Model description
More information needed
## Intended uses & limitations
More information needed
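As a rough illustration of intended use, a minimal summarization sketch is given below. The repo id is an assumption (the card does not give a full model path), and the input article is an arbitrary example:
```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual path of this checkpoint.
summarizer = pipeline("summarization", model="t5-small-trainings")

article = (
    "The committee met on Tuesday to review the proposed budget. "
    "After several hours of discussion, the members agreed to postpone "
    "the final vote until next month so that updated figures could be collected."
)
print(summarizer(article, max_length=48, min_length=10)[0]["summary_text"])
```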
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 3.1338 | 1.0 | 51 | 2.5825 | 35.4169 | 15.379 | 30.8859 | 31.524 |
| 2.5905 | 2.0 | 102 | 2.3975 | 38.4266 | 17.2571 | 33.5912 | 34.312 |
| 2.3881 | 3.0 | 153 | 2.3329 | 39.8082 | 19.1925 | 34.8269 | 35.5295 |
| 2.3167 | 4.0 | 204 | 2.2938 | 41.3488 | 20.1513 | 35.6879 | 36.5864 |
| 2.2357 | 5.0 | 255 | 2.2727 | 41.2457 | 19.5358 | 36.0033 | 36.8405 |
| 2.232 | 6.0 | 306 | 2.2645 | 41.2746 | 20.0345 | 35.9226 | 36.7001 |
| 2.1986 | 7.0 | 357 | 2.2595 | 41.7542 | 19.9428 | 36.6819 | 37.4718 |
| 2.1457 | 8.0 | 408 | 2.2580 | 41.5251 | 19.8842 | 36.4895 | 37.2565 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
BumBelDumBel/TRUMP | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-04-29T18:27:57Z | ---
language:
- en
datasets:
- pile-of-law/pile-of-law
pipeline_tag: fill-mask
---
# Pile of Law BERT large model 2 (uncased)
Pretrained model on English language legal and administrative text using the [RoBERTa](https://arxiv.org/abs/1907.11692) pretraining objective. This model was trained with the same setup as [pile-of-law/legalbert-large-1.7M-1](https://huggingface.co/pile-of-law/legalbert-large-1.7M-1), but with a different seed.
## Model description
Pile of Law BERT large 2 is a transformers model with the [BERT large model (uncased)](https://huggingface.co/bert-large-uncased) architecture pretrained on the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law), a dataset consisting of ~256GB of English language legal and administrative text for language model pretraining.
## Intended uses & limitations
You can use the raw model for masked language modeling or fine-tune it for a downstream task. Since this model was pretrained on an English language legal and administrative text corpus, legal downstream tasks will likely be more in-domain for this model.
## How to use
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> pipe = pipeline(task='fill-mask', model='pile-of-law/legalbert-large-1.7M-2')
>>> pipe("An [MASK] is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.")
[{'sequence': 'an exception is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.5218929052352905,
'token': 4028,
'token_str': 'exception'},
{'sequence': 'an appeal is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.11434809118509293,
'token': 1151,
'token_str': 'appeal'},
{'sequence': 'an exclusion is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.06454459577798843,
'token': 5345,
'token_str': 'exclusion'},
{'sequence': 'an example is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.043593790382146835,
'token': 3677,
'token_str': 'example'},
{'sequence': 'an objection is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.03758585825562477,
'token': 3542,
'token_str': 'objection'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('pile-of-law/legalbert-large-1.7M-2')
model = BertModel.from_pretrained('pile-of-law/legalbert-large-1.7M-2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('pile-of-law/legalbert-large-1.7M-2')
model = TFBertModel.from_pretrained('pile-of-law/legalbert-large-1.7M-2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
Please see Appendix G of the Pile of Law paper for copyright limitations related to dataset and model use.
This model can have biased predictions. In the following example where the model is used with a pipeline for masked language modeling, for the race descriptor of the criminal, the model predicts a higher score for "black" than "white".
```python
>>> from transformers import pipeline
>>> pipe = pipeline(task='fill-mask', model='pile-of-law/legalbert-large-1.7M-2')
>>> pipe("The transcript of evidence reveals that at approximately 7:30 a. m. on January 22, 1973, the prosecutrix was awakened in her home in DeKalb County by the barking of the family dog, and as she opened her eyes she saw a [MASK] man standing beside her bed with a gun.", targets=["black", "white"])
[{'sequence': 'the transcript of evidence reveals that at approximately 7 : 30 a. m. on january 22, 1973, the prosecutrix was awakened in her home in dekalb county by the barking of the family dog, and as she opened her eyes she saw a black man standing beside her bed with a gun.',
'score': 0.02685137465596199,
'token': 4311,
'token_str': 'black'},
{'sequence': 'the transcript of evidence reveals that at approximately 7 : 30 a. m. on january 22, 1973, the prosecutrix was awakened in her home in dekalb county by the barking of the family dog, and as she opened her eyes she saw a white man standing beside her bed with a gun.',
'score': 0.013632853515446186,
'token': 4249,
'token_str': 'white'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Pile of Law BERT large model was pretrained on the Pile of Law, a dataset consisting of ~256GB of English language legal and administrative text for language model pretraining. The Pile of Law consists of 35 data sources, including legal analyses, court opinions and filings, government agency publications, contracts, statutes, regulations, casebooks, etc. We describe the data sources in detail in Appendix E of the Pile of Law paper. The Pile of Law dataset is placed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.
## Training procedure
### Preprocessing
The model vocabulary consists of 29,000 tokens from a custom word-piece vocabulary fit to Pile of Law using the [HuggingFace WordPiece tokenizer](https://github.com/huggingface/tokenizers) and 3,000 randomly sampled legal terms from Black's Law Dictionary, for a vocabulary size of 32,000 tokens. The 80-10-10 masking, corruption, leave split, as in [BERT](https://arxiv.org/abs/1810.04805), is used, with a replication rate of 20 to create different masks for each context. To generate sequences, we use the [LexNLP sentence segmenter](https://github.com/LexPredict/lexpredict-lexnlp), which handles sentence segmentation for legal citations (which are often falsely mistaken as sentences). The input is formatted by filling sentences until they comprise 256 tokens, followed by a [SEP] token, and then filling sentences such that the entire span is under 512 tokens. If the next sentence in the series is too large, it is not added, and the remaining context length is filled with padding tokens.
### Pretraining
The model was trained on a SambaNova cluster, with 8 RDUs, for 1.7 million steps. We used a smaller learning rate of 5e-6 and batch size of 128, to mitigate training instability, potentially due to the diversity of sources in our training data. The masked language modeling (MLM) objective without NSP loss, as described in [RoBERTa](https://arxiv.org/abs/1907.11692), was used for pretraining. The model was pretrained with a sequence length of 512 for all steps.
We trained two models with the same setup in parallel model training runs, with different random seeds. We selected the lowest log likelihood model, [pile-of-law/legalbert-large-1.7M-1](https://huggingface.co/pile-of-law/legalbert-large-1.7M-1), which we refer to as PoL-BERT-Large, for experiments, but also release the second model, [pile-of-law/legalbert-large-1.7M-2](https://huggingface.co/pile-of-law/legalbert-large-1.7M-2).
## Evaluation results
See the model card for [pile-of-law/legalbert-large-1.7M-1](https://huggingface.co/pile-of-law/legalbert-large-1.7M-1) for finetuning results on the CaseHOLD variant provided by the [LexGLUE paper](https://arxiv.org/abs/2110.00976).
### BibTeX entry and citation info
```bibtex
@misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson, Peter and Krass, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
}
``` |
BumBelDumBel/ZORK_AI_FANTASY | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-04-29T20:47:02Z | ---
language:
- en
license: apache-2.0
tags:
- voxpopuli
- google/xtreme_s
- generated_from_trainer
datasets:
- google/xtreme_s
model-index:
- name: xtreme_s_xlsr_300m_voxpopuli_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_300m_voxpopuli_en
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - VOXPOPULI.EN dataset.
It achieves the following results on the evaluation set:
- Cer: 0.0966
- Loss: 0.3127
- Wer: 0.1549
- Predict Samples: 1842
## Model description
More information needed
## Intended uses & limitations
More information needed
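Absent an official example, a hedged usage sketch: a fine-tuned XLS-R CTC checkpoint of this kind is typically run through the `automatic-speech-recognition` pipeline and scored with WER/CER. The checkpoint id and audio file below are placeholders, since this card does not state where the model is published.
```python
from transformers import pipeline
import evaluate

# Placeholder checkpoint id and audio path -- substitute real values before running.
checkpoint = "your-org/xtreme_s_xlsr_300m_voxpopuli_en"
audio_path = "voxpopuli_en_sample.wav"   # XLS-R expects 16 kHz mono audio

asr = pipeline("automatic-speech-recognition", model=checkpoint)
prediction = asr(audio_path)["text"]

# WER/CER figures like those reported above are typically computed with the
# `evaluate` library against a reference transcript.
reference = "the commission has adopted the proposal"
print("WER:", evaluate.load("wer").compute(predictions=[prediction], references=[reference]))
print("CER:", evaluate.load("cer").compute(predictions=[prediction], references=[reference]))
```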
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.4221 | 0.19 | 500 | 1.3325 | 0.8224 | 0.3432 |
| 0.8429 | 0.38 | 1000 | 0.7087 | 0.5028 | 0.2023 |
| 0.7377 | 0.57 | 1500 | 0.4900 | 0.2778 | 0.1339 |
| 0.5641 | 0.77 | 2000 | 0.4460 | 0.2540 | 0.1284 |
| 0.5787 | 0.96 | 2500 | 0.4242 | 0.2148 | 0.1167 |
| 0.3465 | 1.15 | 3000 | 0.4210 | 0.2087 | 0.1154 |
| 0.2787 | 1.34 | 3500 | 0.3954 | 0.2090 | 0.1155 |
| 0.2775 | 1.53 | 4000 | 0.3938 | 0.1992 | 0.1133 |
| 0.262 | 1.72 | 4500 | 0.3748 | 0.2104 | 0.1151 |
| 0.3138 | 1.92 | 5000 | 0.3825 | 0.1993 | 0.1134 |
| 0.4331 | 2.11 | 5500 | 0.3648 | 0.1935 | 0.1104 |
| 0.3802 | 2.3 | 6000 | 0.3966 | 0.1910 | 0.1109 |
| 0.3928 | 2.49 | 6500 | 0.3995 | 0.1898 | 0.1100 |
| 0.3441 | 2.68 | 7000 | 0.3764 | 0.1887 | 0.1103 |
| 0.3673 | 2.87 | 7500 | 0.3800 | 0.1843 | 0.1086 |
| 0.3422 | 3.07 | 8000 | 0.3932 | 0.1830 | 0.1092 |
| 0.2933 | 3.26 | 8500 | 0.3672 | 0.1915 | 0.1104 |
| 0.1785 | 3.45 | 9000 | 0.3820 | 0.1796 | 0.1072 |
| 0.321 | 3.64 | 9500 | 0.3533 | 0.1994 | 0.1126 |
| 0.1673 | 3.83 | 10000 | 0.3683 | 0.1856 | 0.1084 |
| 0.1757 | 4.02 | 10500 | 0.3365 | 0.1925 | 0.1102 |
| 0.1881 | 4.22 | 11000 | 0.3528 | 0.1775 | 0.1066 |
| 0.3106 | 4.41 | 11500 | 0.3909 | 0.1754 | 0.1063 |
| 0.25 | 4.6 | 12000 | 0.3734 | 0.1723 | 0.1052 |
| 0.2005 | 4.79 | 12500 | 0.3358 | 0.1900 | 0.1092 |
| 0.2982 | 4.98 | 13000 | 0.3513 | 0.1766 | 0.1060 |
| 0.1552 | 5.17 | 13500 | 0.3720 | 0.1729 | 0.1059 |
| 0.1645 | 5.37 | 14000 | 0.3569 | 0.1713 | 0.1044 |
| 0.2065 | 5.56 | 14500 | 0.3639 | 0.1720 | 0.1048 |
| 0.1898 | 5.75 | 15000 | 0.3660 | 0.1726 | 0.1050 |
| 0.1397 | 5.94 | 15500 | 0.3731 | 0.1670 | 0.1033 |
| 0.2056 | 6.13 | 16000 | 0.3782 | 0.1650 | 0.1030 |
| 0.1859 | 6.32 | 16500 | 0.3903 | 0.1667 | 0.1033 |
| 0.1374 | 6.52 | 17000 | 0.3721 | 0.1736 | 0.1048 |
| 0.2482 | 6.71 | 17500 | 0.3899 | 0.1643 | 0.1023 |
| 0.159 | 6.9 | 18000 | 0.3847 | 0.1687 | 0.1032 |
| 0.1487 | 7.09 | 18500 | 0.3817 | 0.1671 | 0.1030 |
| 0.1942 | 7.28 | 19000 | 0.4120 | 0.1616 | 0.1018 |
| 0.1517 | 7.47 | 19500 | 0.3856 | 0.1635 | 0.1020 |
| 0.0946 | 7.67 | 20000 | 0.3838 | 0.1621 | 0.1016 |
| 0.1455 | 7.86 | 20500 | 0.3749 | 0.1652 | 0.1020 |
| 0.1303 | 8.05 | 21000 | 0.4074 | 0.1615 | 0.1011 |
| 0.1207 | 8.24 | 21500 | 0.4121 | 0.1606 | 0.1008 |
| 0.0727 | 8.43 | 22000 | 0.3948 | 0.1607 | 0.1009 |
| 0.1123 | 8.62 | 22500 | 0.4025 | 0.1603 | 0.1009 |
| 0.1606 | 8.82 | 23000 | 0.3963 | 0.1580 | 0.1004 |
| 0.1458 | 9.01 | 23500 | 0.3991 | 0.1574 | 0.1002 |
| 0.2286 | 9.2 | 24000 | 0.4149 | 0.1596 | 0.1009 |
| 0.1284 | 9.39 | 24500 | 0.4251 | 0.1572 | 0.1002 |
| 0.1141 | 9.58 | 25000 | 0.4264 | 0.1579 | 0.1002 |
| 0.1823 | 9.77 | 25500 | 0.4230 | 0.1562 | 0.0999 |
| 0.2514 | 9.97 | 26000 | 0.4242 | 0.1564 | 0.0999 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.1+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
BumBelDumBel/ZORK_AI_SCIFI | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-urdu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
It achieves the following results on the evaluation set:
- Loss: 0.5285
- Wer: 0.1702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 35
- mixed_precision_training: Native AMP
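A rough `TrainingArguments` equivalent of the hyperparameters above, assuming a standard Hugging Face `Trainer` setup (the original training script is not included in this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-urdu",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=35,
    fp16=True,                       # native AMP mixed precision
)
```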
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 16.9618 | 0.74 | 32 | 15.0745 | 1.0 |
| 9.1928 | 1.49 | 64 | 5.9361 | 1.0 |
| 4.9307 | 2.23 | 96 | 4.2924 | 1.0 |
| 3.8917 | 2.98 | 128 | 3.5873 | 1.0 |
| 3.3867 | 3.72 | 160 | 3.2594 | 1.0 |
| 3.2107 | 4.47 | 192 | 3.1718 | 1.0 |
| 3.1395 | 5.21 | 224 | 3.1281 | 1.0 |
| 3.115 | 5.95 | 256 | 3.1238 | 1.0 |
| 3.0801 | 6.7 | 288 | 3.0674 | 1.0 |
| 2.9725 | 7.44 | 320 | 2.8277 | 1.0 |
| 2.4159 | 8.19 | 352 | 1.7186 | 0.9036 |
| 1.3377 | 8.93 | 384 | 1.0271 | 0.6433 |
| 0.8591 | 9.67 | 416 | 0.8087 | 0.5441 |
| 0.726 | 10.42 | 448 | 0.7263 | 0.4634 |
| 0.6242 | 11.16 | 480 | 0.6783 | 0.4156 |
| 0.5417 | 11.91 | 512 | 0.6611 | 0.4305 |
| 0.4784 | 12.65 | 544 | 0.6300 | 0.3926 |
| 0.4198 | 13.4 | 576 | 0.5646 | 0.3499 |
| 0.3798 | 14.14 | 608 | 0.5919 | 0.3229 |
| 0.3356 | 14.88 | 640 | 0.5715 | 0.3369 |
| 0.2954 | 15.63 | 672 | 0.5325 | 0.2728 |
| 0.264 | 16.37 | 704 | 0.5535 | 0.2689 |
| 0.2535 | 17.12 | 736 | 0.5467 | 0.2366 |
| 0.2277 | 17.86 | 768 | 0.5219 | 0.2345 |
| 0.2141 | 18.6 | 800 | 0.5314 | 0.2487 |
| 0.2036 | 19.35 | 832 | 0.5382 | 0.2236 |
| 0.2021 | 20.09 | 864 | 0.5038 | 0.1922 |
| 0.1676 | 20.84 | 896 | 0.5238 | 0.2033 |
| 0.1544 | 21.58 | 928 | 0.5069 | 0.1866 |
| 0.1512 | 22.33 | 960 | 0.5045 | 0.1965 |
| 0.1512 | 23.07 | 992 | 0.5167 | 0.1862 |
| 0.1399 | 23.81 | 1024 | 0.5236 | 0.1840 |
| 0.1291 | 24.56 | 1056 | 0.5234 | 0.1957 |
| 0.1274 | 25.3 | 1088 | 0.5348 | 0.1943 |
| 0.127 | 26.05 | 1120 | 0.4978 | 0.1719 |
| 0.1105 | 26.79 | 1152 | 0.5067 | 0.1767 |
| 0.1069 | 27.53 | 1184 | 0.5150 | 0.1758 |
| 0.1058 | 28.28 | 1216 | 0.5218 | 0.1844 |
| 0.0999 | 29.02 | 1248 | 0.5375 | 0.1852 |
| 0.0964 | 29.77 | 1280 | 0.5373 | 0.1843 |
| 0.0971 | 30.51 | 1312 | 0.5190 | 0.1776 |
| 0.0906 | 31.26 | 1344 | 0.5217 | 0.1747 |
| 0.0909 | 32.0 | 1376 | 0.5204 | 0.1778 |
| 0.0784 | 32.74 | 1408 | 0.5336 | 0.1756 |
| 0.0823 | 33.49 | 1440 | 0.5281 | 0.1699 |
| 0.0834 | 34.23 | 1472 | 0.5292 | 0.1700 |
| 0.0827 | 34.98 | 1504 | 0.5285 | 0.1702 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CALM/CALM | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provide four versions of ConfliBERT (a brief usage sketch follows the list):
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
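A minimal usage sketch is shown below; the Hub checkpoint id is an assumption (this card only links to the GitHub repository), so point it at whichever of the four variants you need.
```python
from transformers import pipeline

# Assumed checkpoint id -- swap in the ConfliBERT variant you actually want.
checkpoint = "snowood1/ConfliBERT-scr-uncased"

fill = pipeline("fill-mask", model=checkpoint)
masked = f"The rebels launched an {fill.tokenizer.mask_token} on the village."
for prediction in fill(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```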
See more details in https://github.com/eventdata/ConfliBERT/ |
CALM/backup | [
"lean_albert",
"transformers"
] | null | {
"architectures": [
"LeanAlbertForPretraining",
"LeanAlbertForTokenClassification",
"LeanAlbertForSequenceClassification"
],
"model_type": "lean_albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-04-29T20:54:34Z | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provide four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
See more details in https://github.com/eventdata/ConfliBERT/
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 85 | 2022-04-29T21:00:32Z | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provided four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
See more details in https://github.com/eventdata/ConfliBERT/ |
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | null | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provided four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
See more details in https://github.com/eventdata/ConfliBERT/ |