Dataset columns:

| Column | Type | Range / distinct values |
| --- | --- | --- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-27 12:28:27 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 533 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-27 12:28:17 |
| card | string | length 11 to 1.01M |

modelId: ali2066/finetuned_token_itr0_0.0002_all_16_02_2022-20_45_27 | author: ali2066 | last_modified: 2022-02-16T19:47:45Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_0.0002_all_16_02_2022-20_45_27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_0.0002_all_16_02_2022-20_45_27
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1500
- Precision: 0.4739
- Recall: 0.5250
- F1: 0.4981
- Accuracy: 0.9551
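A minimal inference sketch with the `transformers` pipeline API (the checkpoint name is taken from this card; the example sentence is only illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint named in this card as a token-classification pipeline.
# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="ali2066/finetuned_token_itr0_0.0002_all_16_02_2022-20_45_27",
    aggregation_strategy="simple",
)

print(ner("The room was spotless and the staff were very helpful."))
```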
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3183 | 0.2024 | 0.2909 | 0.2387 | 0.8499 |
| No log | 2.0 | 76 | 0.3092 | 0.2909 | 0.4181 | 0.3431 | 0.8548 |
| No log | 3.0 | 114 | 0.2928 | 0.2923 | 0.4855 | 0.3650 | 0.8647 |
| No log | 4.0 | 152 | 0.3098 | 0.2832 | 0.4605 | 0.3507 | 0.8641 |
| No log | 5.0 | 190 | 0.3120 | 0.2470 | 0.4374 | 0.3157 | 0.8654 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: arampacha/wav2vec2-xls-r-300m-hy-cv | author: arampacha | last_modified: 2022-02-16T19:45:37Z | downloads: 6 | likes: 0 | library_name: transformers | pipeline_tag: automatic-speech-recognition | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, mozilla-foundation/common_voice_8_0, generated_from_trainer, hy, dataset:common_voice, license:apache-2.0, endpoints_compatible, region:us]
---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- hy
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5891
- Wer: 0.6569
**Note**: If you aim for the best performance, use [this model](https://huggingface.co/arampacha/wav2vec2-xls-r-300m-hy). It is trained using the noisy student procedure and achieves considerably better results.
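A minimal transcription sketch with the `transformers` ASR pipeline (the audio file name is a placeholder; 16 kHz mono audio is assumed):
```python
from transformers import pipeline

# Automatic speech recognition with the fine-tuned XLS-R checkpoint from this card.
asr = pipeline(
    "automatic-speech-recognition",
    model="arampacha/wav2vec2-xls-r-300m-hy-cv",
)

# Expects a path to (or array of) 16 kHz mono audio; "sample.wav" is a placeholder.
print(asr("sample.wav")["text"])
```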
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 9.167 | 16.67 | 100 | 3.5599 | 1.0 |
| 3.2645 | 33.33 | 200 | 3.1771 | 1.0 |
| 3.1509 | 50.0 | 300 | 3.1321 | 1.0 |
| 3.0757 | 66.67 | 400 | 2.8594 | 1.0 |
| 2.5274 | 83.33 | 500 | 1.5286 | 0.9797 |
| 1.6826 | 100.0 | 600 | 0.8058 | 0.7974 |
| 1.2868 | 116.67 | 700 | 0.6713 | 0.7279 |
| 1.1262 | 133.33 | 800 | 0.6308 | 0.7034 |
| 1.0408 | 150.0 | 900 | 0.6056 | 0.6745 |
| 0.9617 | 166.67 | 1000 | 0.5891 | 0.6569 |
| 0.9196 | 183.33 | 1100 | 0.5913 | 0.6432 |
| 0.8853 | 200.0 | 1200 | 0.5924 | 0.6347 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0

modelId: ali2066/finetuned_token_itr0_3e-05_all_16_02_2022-20_43_00 | author: ali2066 | last_modified: 2022-02-16T19:45:21Z | downloads: 6 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_3e-05_all_16_02_2022-20_43_00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_3e-05_all_16_02_2022-20_43_00
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- Precision: 0.3811
- Recall: 0.3865
- F1: 0.3838
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3697 | 0.0933 | 0.2235 | 0.1317 | 0.8259 |
| No log | 2.0 | 76 | 0.3193 | 0.1266 | 0.2948 | 0.1771 | 0.8494 |
| No log | 3.0 | 114 | 0.3025 | 0.1606 | 0.3160 | 0.2130 | 0.8540 |
| No log | 4.0 | 152 | 0.2978 | 0.1867 | 0.3449 | 0.2422 | 0.8605 |
| No log | 5.0 | 190 | 0.2984 | 0.1706 | 0.3507 | 0.2295 | 0.8551 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_itr0_2e-05_all_16_02_2022-20_40_28 | author: ali2066 | last_modified: 2022-02-16T19:42:54Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_all_16_02_2022-20_40_28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_all_16_02_2022-20_40_28
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1736
- Precision: 0.3358
- Recall: 0.3447
- F1: 0.3402
- Accuracy: 0.9452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3058 | 0.1200 | 0.2102 | 0.1528 | 0.8629 |
| No log | 2.0 | 76 | 0.2488 | 0.1605 | 0.2774 | 0.2034 | 0.9003 |
| No log | 3.0 | 114 | 0.2296 | 0.1947 | 0.2880 | 0.2324 | 0.9057 |
| No log | 4.0 | 152 | 0.2208 | 0.2201 | 0.2986 | 0.2534 | 0.9113 |
| No log | 5.0 | 190 | 0.2235 | 0.2110 | 0.3039 | 0.2491 | 0.9101 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_itr0_2e-05_all_16_02_2022-20_25_06 | author: ali2066 | last_modified: 2022-02-16T19:27:31Z | downloads: 5 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_all_16_02_2022-20_25_06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_all_16_02_2022-20_25_06
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1778
- Precision: 0.3270
- Recall: 0.3348
- F1: 0.3309
- Accuracy: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.4023 | 0.1050 | 0.2331 | 0.1448 | 0.8121 |
| No log | 2.0 | 76 | 0.3629 | 0.1856 | 0.3414 | 0.2405 | 0.8368 |
| No log | 3.0 | 114 | 0.3329 | 0.1794 | 0.3594 | 0.2394 | 0.8504 |
| No log | 4.0 | 152 | 0.3261 | 0.1786 | 0.3684 | 0.2405 | 0.8503 |
| No log | 5.0 | 190 | 0.3244 | 0.1872 | 0.3684 | 0.2482 | 0.8534 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04 | author: ali2066 | last_modified: 2022-02-16T19:14:21Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1620
- Precision: 0.3509
- Recall: 0.3793
- F1: 0.3646
- Accuracy: 0.9468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.2997 | 0.1125 | 0.2057 | 0.1454 | 0.8669 |
| No log | 2.0 | 76 | 0.2620 | 0.1928 | 0.2849 | 0.2300 | 0.8899 |
| No log | 3.0 | 114 | 0.2497 | 0.1923 | 0.2906 | 0.2314 | 0.8918 |
| No log | 4.0 | 152 | 0.2474 | 0.1819 | 0.3377 | 0.2365 | 0.8905 |
| No log | 5.0 | 190 | 0.2418 | 0.2128 | 0.3264 | 0.2576 | 0.8997 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_itr0_2e-05_all_16_02_2022-20_09_36 | author: ali2066 | last_modified: 2022-02-16T19:11:58Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_itr0_2e-05_all_16_02_2022-20_09_36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_itr0_2e-05_all_16_02_2022-20_09_36
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1743
- Precision: 0.3429
- Recall: 0.3430
- F1: 0.3430
- Accuracy: 0.9446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3322 | 0.0703 | 0.1790 | 0.1010 | 0.8318 |
| No log | 2.0 | 76 | 0.2644 | 0.1180 | 0.2343 | 0.1570 | 0.8909 |
| No log | 3.0 | 114 | 0.2457 | 0.1624 | 0.2583 | 0.1994 | 0.8980 |
| No log | 4.0 | 152 | 0.2487 | 0.1486 | 0.2583 | 0.1887 | 0.8931 |
| No log | 5.0 | 190 | 0.2395 | 0.1670 | 0.2694 | 0.2062 | 0.8988 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: joe5campbell/BERT_Tweet_Sentiment_50k_2eps | author: joe5campbell | last_modified: 2022-02-16T17:04:00Z | downloads: 9 | likes: 0 | library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, tf, bert, text-classification, generated_from_keras_callback, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_50k_2eps
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_50k_2eps
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1131
- Train Accuracy: 0.9596
- Validation Loss: 0.6972
- Validation Accuracy: 0.8229
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
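The optimizer entry above is a plain Keras Adam configuration; a minimal sketch of building the equivalent optimizer object (illustrative only, not the original training script):
```python
import tensorflow as tf

# Adam with the settings listed above: clipnorm=1.0, learning_rate=3e-05,
# default betas and epsilon, no learning-rate decay.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,
)
```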
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3420 | 0.8511 | 0.4293 | 0.8299 | 0 |
| 0.1131 | 0.9596 | 0.6972 | 0.8229 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0

modelId: Harveenchadha/model-entailment | author: Harveenchadha | last_modified: 2022-02-16T16:10:23Z | downloads: 0 | likes: 0 | library_name: keras | pipeline_tag: null | createdAt: 2022-03-02T23:29:04Z
tags: [keras, tf-keras, nlp, region:us]
---
tags:
- nlp
library_name: keras
---
## Multimodal entailment
Author: Sayak Paul
Date created: 2021/08/08
Last modified: 2021/08/15
Description: Training a multimodal model for predicting entailment.
### What is multimodal entailment?
On social media platforms, to audit and moderate content, we may want to find answers to the following questions in near real time:
Does a given piece of information contradict the other?
Does a given piece of information imply the other?
In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case that the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities.
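A minimal loading sketch, assuming the repository is a saved Keras model consumable by the `huggingface_hub` Keras helper:
```python
from huggingface_hub import from_pretrained_keras

# Download and rebuild the saved Keras model from the Hub repository.
model = from_pretrained_keras("Harveenchadha/model-entailment")
model.summary()
```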

modelId: chaitanya97/wav2vec2-large-xls-r-3 | author: chaitanya97 | last_modified: 2022-02-16T16:03:48Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: automatic-speech-recognition | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, generated_from_trainer, dataset:common_voice, license:apache-2.0, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_3e-05_all_16_02_2022-16_29_13 | author: ali2066 | last_modified: 2022-02-16T15:32:26Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_29_13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_29_13
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_3e-05_all_16_02_2022-16_25_56 | author: ali2066 | last_modified: 2022-02-16T15:29:08Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_25_56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_25_56
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_3e-05_all_16_02_2022-16_19_24 | author: ali2066 | last_modified: 2022-02-16T15:22:34Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_19_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_19_24
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_3e-05_all_16_02_2022-16_16_08 | author: ali2066 | last_modified: 2022-02-16T15:19:19Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_16_08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_16_08
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_3e-05_all_16_02_2022-16_09_36 | author: ali2066 | last_modified: 2022-02-16T15:12:47Z | downloads: 5 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_3e-05_all_16_02_2022-16_09_36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_3e-05_all_16_02_2022-16_09_36
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1630
- Precision: 0.3684
- Recall: 0.3714
- F1: 0.3699
- Accuracy: 0.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3339 | 0.1075 | 0.2324 | 0.1470 | 0.8379 |
| No log | 2.0 | 76 | 0.3074 | 0.1589 | 0.2926 | 0.2060 | 0.8489 |
| No log | 3.0 | 114 | 0.2914 | 0.2142 | 0.3278 | 0.2591 | 0.8591 |
| No log | 4.0 | 152 | 0.2983 | 0.1951 | 0.3595 | 0.2529 | 0.8454 |
| No log | 5.0 | 190 | 0.2997 | 0.1851 | 0.3528 | 0.2428 | 0.8487 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_2e-05_all_16_02_2022-15_59_50 | author: ali2066 | last_modified: 2022-02-16T15:03:01Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-15_59_50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-15_59_50
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.3286
- Recall: 0.3334
- F1: 0.3310
- Accuracy: 0.9447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3355 | 0.0975 | 0.2358 | 0.1380 | 0.8361 |
| No log | 2.0 | 76 | 0.3177 | 0.1359 | 0.2709 | 0.1810 | 0.8398 |
| No log | 3.0 | 114 | 0.3000 | 0.1542 | 0.3043 | 0.2047 | 0.8471 |
| No log | 4.0 | 152 | 0.3033 | 0.1589 | 0.3060 | 0.2091 | 0.8434 |
| No log | 5.0 | 190 | 0.3029 | 0.1629 | 0.3110 | 0.2138 | 0.8447 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_2e-05_all_16_02_2022-15_56_33 | author: ali2066 | last_modified: 2022-02-16T14:59:46Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-15_56_33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-15_56_33
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.3286
- Recall: 0.3334
- F1: 0.3310
- Accuracy: 0.9447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3355 | 0.0975 | 0.2358 | 0.1380 | 0.8361 |
| No log | 2.0 | 76 | 0.3177 | 0.1359 | 0.2709 | 0.1810 | 0.8398 |
| No log | 3.0 | 114 | 0.3000 | 0.1542 | 0.3043 | 0.2047 | 0.8471 |
| No log | 4.0 | 152 | 0.3033 | 0.1589 | 0.3060 | 0.2091 | 0.8434 |
| No log | 5.0 | 190 | 0.3029 | 0.1629 | 0.3110 | 0.2138 | 0.8447 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: philschmid/distilbert-onnx | author: philschmid | last_modified: 2022-02-16T14:51:05Z | downloads: 57,058 | likes: 2 | library_name: transformers | pipeline_tag: question-answering | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, onnx, distilbert, question-answering, en, dataset:squad, license:apache-2.0, endpoints_compatible, region:us]
---
language: "en"
datasets:
- squad
metrics:
- squad
license: apache-2.0
---
# ONNX Conversion of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad)
# DistilBERT base cased distilled SQuAD
This model is a fine-tune checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on SQuAD v1.1.
This model reaches an F1 score of 87.1 on the dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).
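A minimal question-answering sketch, assuming the ONNX export can be loaded through `optimum.onnxruntime` and paired with the source model's tokenizer (the question and context strings are illustrative):
```python
from optimum.onnxruntime import ORTModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline

# Load the ONNX export with ONNX Runtime and wrap it in the standard QA pipeline.
# Assumes the repository ships a compatible model.onnx file.
model = ORTModelForQuestionAnswering.from_pretrained("philschmid/distilbert-onnx")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

print(qa(
    question="Which dataset was used for distillation?",
    context="DistilBERT was fine-tuned with knowledge distillation on SQuAD v1.1.",
))
```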

modelId: ali2066/finetuned_token_2e-05_all_16_02_2022-15_48_32 | author: ali2066 | last_modified: 2022-02-16T14:50:50Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-15_48_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-15_48_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1750
- Precision: 0.3286
- Recall: 0.3334
- F1: 0.3310
- Accuracy: 0.9447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3355 | 0.0975 | 0.2358 | 0.1380 | 0.8361 |
| No log | 2.0 | 76 | 0.3177 | 0.1359 | 0.2709 | 0.1810 | 0.8398 |
| No log | 3.0 | 114 | 0.3000 | 0.1542 | 0.3043 | 0.2047 | 0.8471 |
| No log | 4.0 | 152 | 0.3033 | 0.1589 | 0.3060 | 0.2091 | 0.8434 |
| No log | 5.0 | 190 | 0.3029 | 0.1629 | 0.3110 | 0.2138 | 0.8447 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_2e-05_all_16_02_2022-15_41_15 | author: ali2066 | last_modified: 2022-02-16T14:43:38Z | downloads: 5 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_all_16_02_2022-15_41_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_all_16_02_2022-15_41_15
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1742
- Precision: 0.3447
- Recall: 0.3410
- F1: 0.3428
- Accuracy: 0.9455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3692 | 0.0868 | 0.2030 | 0.1216 | 0.8238 |
| No log | 2.0 | 76 | 0.3198 | 0.1674 | 0.3029 | 0.2157 | 0.8567 |
| No log | 3.0 | 114 | 0.3156 | 0.1520 | 0.3096 | 0.2039 | 0.8510 |
| No log | 4.0 | 152 | 0.3129 | 0.1753 | 0.3266 | 0.2281 | 0.8500 |
| No log | 5.0 | 190 | 0.3038 | 0.1716 | 0.3401 | 0.2281 | 0.8595 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: marcopost-it/biobert-it | author: marcopost-it | last_modified: 2022-02-16T14:15:27Z | downloads: 153 | likes: 1 | library_name: transformers | pipeline_tag: fill-mask | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, bert, fill-mask, autotrain_compatible, endpoints_compatible, region:us]
Hi!
This model has been trained on Italian biomedical data.
For further information, do not hesitate to send me a message! ;)
[email protected] (Marco Postiglione)

modelId: ali2066/finetuned_token_2e-05_16_02_2022-14_32_56 | author: ali2066 | last_modified: 2022-02-16T13:35:14Z | downloads: 5 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_32_56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_32_56
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_2e-05_16_02_2022-14_25_47 | author: ali2066 | last_modified: 2022-02-16T13:28:05Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_25_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_25_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_2e-05_16_02_2022-14_23_23 | author: ali2066 | last_modified: 2022-02-16T13:25:42Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_23_23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_23_23
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_2e-05_16_02_2022-14_18_19 | author: ali2066 | last_modified: 2022-02-16T13:20:37Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_18_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_18_19
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: ali2066/finetuned_token_2e-05_16_02_2022-14_15_41 | author: ali2066 | last_modified: 2022-02-16T13:18:14Z | downloads: 3 | likes: 0 | library_name: transformers | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, distilbert, token-classification, generated_from_trainer, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_15_41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_15_41
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1746
- Precision: 0.3191
- Recall: 0.3382
- F1: 0.3284
- Accuracy: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.2908 | 0.1104 | 0.1905 | 0.1398 | 0.8731 |
| No log | 2.0 | 76 | 0.2253 | 0.1682 | 0.3206 | 0.2206 | 0.9114 |
| No log | 3.0 | 114 | 0.2041 | 0.2069 | 0.3444 | 0.2585 | 0.9249 |
| No log | 4.0 | 152 | 0.1974 | 0.2417 | 0.3603 | 0.2894 | 0.9269 |
| No log | 5.0 | 190 | 0.1958 | 0.2707 | 0.3683 | 0.3120 | 0.9299 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3

modelId: chaitanya97/wav2vec2-large-xls-r-300m-turkish-colab | author: chaitanya97 | last_modified: 2022-02-16T10:38:44Z | downloads: 4 | likes: 0 | library_name: transformers | pipeline_tag: automatic-speech-recognition | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, pytorch, tensorboard, wav2vec2, automatic-speech-recognition, generated_from_trainer, dataset:common_voice, license:apache-2.0, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 33.1265
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 21.4247 | 4.0 | 4 | 33.1265 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3

modelId: joe5campbell/BERT_Tweet_Sentiment_100_2epochs | author: joe5campbell | last_modified: 2022-02-16T10:34:00Z | downloads: 7 | likes: 0 | library_name: transformers | pipeline_tag: text-classification | createdAt: 2022-03-02T23:29:05Z
tags: [transformers, tf, bert, text-classification, generated_from_keras_callback, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us]
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_100_2epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_100_2epochs
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6279
- Train Accuracy: 0.6824
- Validation Loss: 0.7791
- Validation Accuracy: 0.2667
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7045 | 0.4882 | 0.7236 | 0.2667 | 0 |
| 0.6279 | 0.6824 | 0.7791 | 0.2667 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0

modelId: premrawat/en_ner_model | author: premrawat | last_modified: 2022-02-16T09:23:12Z | downloads: 6 | likes: 0 | library_name: spacy | pipeline_tag: token-classification | createdAt: 2022-03-02T23:29:05Z
tags: [spacy, token-classification, en, model-index, region:us]
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_ner_model
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3624161074
- name: NER Recall
type: recall
value: 0.384341637
- name: NER F Score
type: f_score
value: 0.3730569948
---
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_model` |
| **Version** | `0.1.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 label for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 37.31 |
| `ENTS_P` | 36.24 |
| `ENTS_R` | 38.43 |
| `TOK2VEC_LOSS` | 305790.85 |
| `NER_LOSS` | 801195.82 |
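### Example usage
A minimal usage sketch, assuming the packaged pipeline from this repository has already been installed so that spaCy can load it by name (the example sentence is arbitrary):
```python
import spacy

nlp = spacy.load("en_ner_model")
doc = nlp("Experienced in Python, SQL and project management.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # the pipeline predicts a single label: SKILL
```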
|
premrawat/en_ner_skills
|
premrawat
| 2022-02-16T09:14:23Z | 6 | 5 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_ner_skills
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3980582524
- name: NER Recall
type: recall
value: 0.3404507711
- name: NER F Score
type: f_score
value: 0.3670076726
---
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_skills` |
| **Version** | `0.1.0` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (1 label for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 36.70 |
| `ENTS_P` | 39.81 |
| `ENTS_R` | 34.05 |
| `TOK2VEC_LOSS` | 607659.90 |
| `NER_LOSS` | 491709.76 |
|
jatinshah/bert-finetuned-ner
|
jatinshah
| 2022-02-16T03:50:43Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9330024813895782
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9410194377242012
- name: Accuracy
type: accuracy
value: 0.9861511744275033
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9330
- Recall: 0.9492
- F1: 0.9410
- Accuracy: 0.9862
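For quick experimentation, the model can be used with the `pipeline` API. This is a minimal inference sketch (the example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jatinshah/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("My name is Wolfgang and I live in Berlin."))
```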
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0852 | 1.0 | 1756 | 0.0647 | 0.9147 | 0.9345 | 0.9245 | 0.9826 |
| 0.0305 | 2.0 | 3512 | 0.0599 | 0.9333 | 0.9463 | 0.9398 | 0.9858 |
| 0.0212 | 3.0 | 5268 | 0.0599 | 0.9330 | 0.9492 | 0.9410 | 0.9862 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jkang/espnet2_librispeech_100_conformer
|
jkang
| 2022-02-16T01:05:55Z | 4 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:librispeech_100",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- librispeech_100
license: cc-by-4.0
---
## ESPnet2 ASR model
### `jkang/espnet2_librispeech_100_conformer`
- This model was trained by jaekookang using the librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
- Gradio Demo: [🤗 ESPNet2 ASR Librispeech Conformer](https://huggingface.co/spaces/jkang/espnet2_asr_librispeech_100h)
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 140704c146f8beeed74973f5258379f6133dcdfb
pip install -e .
cd egs2/librispeech_100/asr1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_librispeech_100_conformer
```
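Alternatively, the model can be pulled from the Hub and used for inference directly in Python. This is a minimal sketch, assuming `espnet_model_zoo` and `soundfile` are installed and that `sample.wav` (a placeholder path) contains 16 kHz mono audio:
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Downloads the model from the Hub and builds the inference helper.
speech2text = Speech2Text.from_pretrained("jkang/espnet2_librispeech_100_conformer")

speech, rate = sf.read("sample.wav")  # 16 kHz mono waveform
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```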
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Feb 11 01:42:52 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `140704c146f8beeed74973f5258379f6133dcdfb`
- Commit date: `Tue Feb 8 16:06:02 2022 -0500`
- GPU: NVIDIA GeForce RTX 3090 (training on a single GPU took about 13 h)
## asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|54402|94.5|5.1|0.4|0.7|6.3|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|50948|84.8|13.7|1.5|2.1|17.3|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|52576|94.2|5.3|0.5|0.8|6.6|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|52343|84.7|13.8|1.5|2.0|17.3|81.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|288456|98.2|1.1|0.8|0.7|2.5|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|265951|93.3|4.1|2.6|2.0|8.7|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|281530|98.0|1.1|0.9|0.7|2.7|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|272758|93.5|4.0|2.5|1.9|8.4|81.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|69558|92.0|5.0|3.0|0.7|8.7|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|64524|81.3|13.2|5.4|2.4|21.1|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|66983|91.8|5.1|3.1|0.6|8.8|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|66650|81.2|13.1|5.7|2.1|20.9|81.5|
## ASR config
<details><summary>expand</summary>
```yaml
config: conf/train_asr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 400
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_clean_100_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ED
- ▁I
- ▁HE
- ▁WAS
- ▁THAT
- ING
- ▁IT
- ''''
- ▁HIS
- ▁HAD
- ▁WITH
- ▁YOU
- ▁FOR
- T
- ▁AS
- ▁HER
- LY
- ▁NOT
- ▁BUT
- ▁SHE
- ▁BE
- D
- E
- ▁IS
- ▁AT
- ▁ON
- ▁HIM
- ▁THEY
- ▁BY
- ▁HAVE
- Y
- ▁MY
- ▁SO
- ▁ALL
- ▁THIS
- ▁WERE
- ▁WHICH
- ▁ME
- ▁FROM
- ▁ONE
- ▁SAID
- ▁WE
- N
- ER
- ▁NO
- ▁THERE
- ▁WHEN
- ▁AN
- ▁THEIR
- ▁OR
- ▁WOULD
- ▁WHO
- ▁THEM
- R
- ▁IF
- ▁WHAT
- ▁ARE
- ▁BEEN
- ▁OUT
- ▁UP
- M
- ▁WILL
- ▁DO
- ▁MAN
- ▁COULD
- C
- ▁THEN
- ▁INTO
- ▁MORE
- ▁SOME
- ES
- P
- ▁VERY
- ▁NOW
- ▁YOUR
- ▁LITTLE
- ▁TIME
- ▁ABOUT
- ▁DID
- ▁THAN
- ▁LIKE
- ▁HAS
- L
- G
- AL
- IN
- ▁UPON
- ▁CAN
- ▁WELL
- ▁OTHER
- ▁OVER
- US
- ▁TWO
- ▁ONLY
- ▁ANY
- ▁OUR
- O
- EN
- RE
- ▁MADE
- U
- ▁AFTER
- ▁SEE
- ▁S
- ▁DOWN
- ▁BEFORE
- LL
- ST
- B
- ▁OLD
- ▁DAY
- ▁MISS
- ▁GREAT
- ▁US
- ▁KNOW
- OR
- ▁SUCH
- ▁GOOD
- ▁WAY
- A
- ▁THESE
- ▁CAME
- ▁UN
- ▁SHOULD
- ▁HOW
- ▁MISTER
- ▁GO
- ▁MUCH
- ▁WHERE
- ▁MUST
- ▁NEVER
- ▁COME
- ▁BACK
- ION
- 'ON'
- ▁LONG
- F
- ▁AGAIN
- ▁FIRST
- LE
- ▁MEN
- ▁EVEN
- NESS
- ▁MIGHT
- ▁OWN
- ▁MAY
- K
- ▁HIMSELF
- ▁SAY
- ▁JUST
- ▁THROUGH
- ▁RE
- ▁AM
- ▁ITS
- ▁WENT
- ▁THOUGHT
- ▁
- ▁DE
- ▁MAKE
- I
- ▁HAND
- ▁THINK
- ▁HOUSE
- ▁HERE
- IC
- H
- ATION
- ▁LIFE
- IT
- ▁EYES
- ▁MOST
- ▁WITHOUT
- ▁TOO
- ▁THOSE
- ABLE
- ▁EVERY
- ▁DON
- ▁MANY
- ▁AWAY
- ITY
- VE
- W
- ▁STILL
- ▁BEING
- ▁C
- ▁LAST
- ▁NIGHT
- ▁O
- ▁HEAD
- AN
- ▁FOUND
- ▁NOTHING
- ▁YOUNG
- ▁WHILE
- ▁TAKE
- ▁GET
- ▁PEOPLE
- RO
- ▁OFF
- ▁THOUGH
- EST
- ▁YET
- ▁THREE
- TH
- ▁RIGHT
- ▁UNDER
- AR
- ▁FACE
- IES
- ▁ROOM
- ▁NEW
- ▁SAW
- RA
- V
- ▁ASKED
- ▁TELL
- ERS
- ▁SAME
- MENT
- ▁HEART
- LESS
- ▁WORK
- ▁PLACE
- ▁ANOTHER
- ▁EVER
- ▁LEFT
- ▁SHALL
- ▁FATHER
- ▁PUT
- ▁ONCE
- ▁TOOK
- ▁LET
- ▁ALWAYS
- ▁SEEMED
- ▁PART
- IL
- UR
- ▁WHY
- ▁TOLD
- ▁GIVE
- ▁LOVE
- CE
- ▁MIND
- ▁LOOKED
- ▁HEARD
- ▁SOON
- ▁LOOK
- ▁MOTHER
- ▁FAR
- IVE
- ▁BECAUSE
- ▁HOME
- OUS
- ▁T
- EL
- ▁D
- ▁SOMETHING
- ▁SIDE
- ▁KING
- IS
- ATE
- ▁MOMENT
- ENT
- RY
- ▁THINGS
- ▁ST
- ▁LIGHT
- ▁FIND
- ▁GOING
- ▁THING
- ▁WORLD
- IR
- AT
- ▁WATER
- ▁END
- ▁DOOR
- ISH
- ▁KNEW
- ▁WOMAN
- ▁SIR
- ▁EACH
- RI
- ▁HAVING
- ▁AGAINST
- ▁FEW
- ▁E
- ▁BEGAN
- ▁BETTER
- ▁YES
- ▁NAME
- ▁ENOUGH
- ET
- ▁HARD
- ▁VOICE
- ▁YEARS
- ▁GOT
- ▁WHOLE
- ▁WHITE
- ▁WANT
- ▁GIRL
- ▁DONE
- ▁SEEN
- ▁HUNDRED
- ▁CALLED
- ▁BETWEEN
- ▁MORNING
- FUL
- AS
- ▁FELT
- TER
- ▁KIND
- X
- CH
- ▁HERSELF
- ANT
- ▁TOWARD
- ▁HALF
- ▁OH
- ▁AMONG
- ▁HOWEVER
- ▁TURNED
- ▁ALSO
- ▁BOTH
- ▁POOR
- ▁PERHAPS
- ▁REPLIED
- ▁COURSE
- UL
- ▁QUITE
- ▁REST
- ▁DOES
- ▁MYSELF
- NG
- LO
- ANCE
- ▁MA
- ▁SET
- ▁SMALL
- ▁B
- ▁SURE
- ▁F
- ▁GAVE
- ▁PRESENT
- ▁HIGH
- ▁ALMO
- ▁R
- CK
- ▁WHOM
- ▁NEAR
- ▁CARE
- ▁WAR
- ▁GOD
- ▁TOGETHER
- ▁SAT
- ▁SHOW
- TE
- NE
- ▁BEST
- ▁UNTIL
- ▁OPEN
- ▁W
- ▁FOUR
- ▁DEAR
- ▁HANDS
- ▁WORDS
- ▁SINCE
- ▁LAND
- ▁DIS
- MAN
- ▁ANYTHING
- ▁FEET
- ▁NEXT
- ▁GENERAL
- LING
- ▁LAY
- ▁NOR
- ▁STOOD
- ▁BLACK
- ▁POWER
- ▁BROUGHT
- Z
- IE
- ▁ROUND
- ▁BELIEVE
- ▁LARGE
- ▁ALONG
- ▁HELP
- ▁DAYS
- ▁FIVE
- ▁K
- ▁HOPE
- AM
- ▁CO
- ▁KEEP
- ▁FULL
- ▁WALK
- ▁MASTER
- ATED
- ▁NATURE
- ▁JOHN
- ▁POINT
- ▁DUR
- ▁MATTER
- ▁MONEY
- ▁CHILD
- ▁LOOKING
- ▁RATHER
- ▁AIR
- IA
- ▁P
- ▁TWENTY
- ▁FIRE
- OL
- ▁LESS
- ▁SHORT
- ▁PASSED
- ▁INDEED
- TY
- ▁CASE
- ▁WORD
- ▁WISH
- ▁COUNTRY
- LED
- ID
- ▁BOY
- ▁SOUND
- ▁FORM
- ▁CRIED
- LA
- ▁FRIEND
- TON
- ▁FACT
- ▁UNCLE
- ▁TAKEN
- ▁AL
- ▁TEN
- IAN
- ▁GONE
- ▁SEA
- ▁REASON
- TING
- ▁WHOSE
- ▁OTHERS
- AC
- ▁LI
- ▁DEATH
- ▁CERTAIN
- ▁ANSWERED
- ▁THEMSELVES
- ▁LADY
- ▁STATE
- ▁CAR
- ▁WIFE
- ▁THOUSAND
- ▁TRUE
- ▁BEHIND
- AGE
- ▁DOCTOR
- ▁FEAR
- ▁OFTEN
- OM
- ▁TILL
- ▁HA
- IOUS
- ▁AROUND
- IST
- ▁SENT
- ▁SPEAK
- ▁WOMEN
- ▁GROUND
- VER
- ENCE
- NA
- ▁TALK
- ▁CHILDREN
- TION
- CO
- MO
- ▁HEAR
- ▁ORDER
- ▁LEAVE
- ▁PRO
- ▁ALREADY
- ▁LA
- ▁FINE
- SE
- ▁BA
- PP
- ▁THUS
- AD
- ▁NEED
- ▁SIGHT
- ▁CALL
- ▁FELL
- ▁MANNER
- MP
- ▁BECAME
- UM
- ▁WATCH
- OW
- ▁FOOT
- ▁CANNOT
- ▁BODY
- ▁TOWN
- ▁LIVE
- INE
- ▁RETURNED
- ▁WONDER
- MA
- ▁G
- UT
- ▁CLOSE
- UN
- IM
- ▁ALONE
- ▁DIDN
- ▁LORD
- ▁RED
- ARY
- ▁GIVEN
- ▁SIX
- ▁EVERYTHING
- ▁DARK
- ▁DEAD
- ▁STRONG
- ▁SON
- ▁COMING
- URE
- ▁HELD
- ▁ABOVE
- ▁REALLY
- ▁BEAUTIFUL
- ▁SECOND
- ARD
- ▁EVENING
- ▁CON
- ▁HOUR
- ▁FELLOW
- ▁ROSE
- ▁PERSON
- ▁EX
- ▁CH
- ▁FORCE
- ▁MO
- ▁ARM
- ▁CAUSE
- ▁TURN
- ▁CITY
- ▁DOUBT
- ▁QUESTION
- TIC
- ▁DEEP
- ▁HAIR
- ICAL
- ▁MEAN
- ▁DI
- ▁CLEAR
- ▁SOMETIMES
- ▁STRANGE
- ▁FEEL
- ▁HO
- ▁IMP
- WARD
- AUGHT
- ▁CAPTAIN
- ▁USE
- ▁UNDERSTAND
- ▁KEPT
- ▁BR
- ▁WOOD
- ▁PRE
- ▁YEAR
- ▁TI
- ▁LEAST
- ▁BED
- ▁SA
- ▁TABLE
- ▁BECOME
- ▁FREE
- ▁FAMILY
- ME
- ▁EYE
- ▁WHETHER
- ▁MAKING
- ▁WITHIN
- ▁SORT
- ▁ANSWER
- ▁PO
- ▁SAYS
- ▁EARTH
- ▁RETURN
- ▁SUDDENLY
- ▁FRIENDS
- ▁GREEN
- ▁SUN
- ▁FAIR
- ▁TH
- ▁FALL
- ▁EITHER
- ▁BO
- ▁PRINCE
- ▁THOU
- ▁ITSELF
- ▁CHURCH
- ▁BIG
- ▁ABLE
- ▁DIFFERENT
- ▁SEVERAL
- ▁DAUGHTER
- ▁WON
- ▁WIND
- ▁BAD
- ▁LOST
- ▁READ
- ▁STORY
- ▁APPEARED
- DE
- ▁NUMBER
- ▁SP
- ▁LOW
- ▁ROAD
- ▁POSSIBLE
- ▁HUMAN
- ▁RIVER
- ▁STREET
- ▁GA
- ▁COLD
- ▁MET
- ▁ACT
- ▁BROTHER
- ▁AGE
- ▁KNOWN
- ▁CONTINUED
- ▁BRING
- ▁ILL
- ▁RUN
- ▁LAW
- ▁SUBJECT
- ▁CUT
- J
- PER
- ▁PA
- ▁TROUBLE
- ▁GLAD
- HE
- ▁SLEEP
- MEN
- ▁LATE
- ▁MEANS
- ▁ASK
- ▁REACHED
- ▁RAN
- AK
- ▁HORSE
- ▁USED
- WAY
- OP
- ▁WINDOW
- ▁SNOW
- ▁PAST
- ▁OBJECT
- ▁THEREFORE
- IONS
- ▁TREE
- ▁COMP
- ▁BLUE
- CA
- ▁VI
- ▁SIGN
- ▁EIGHTEEN
- ▁GARDEN
- ▁BUSINESS
- ▁PETER
- ▁FOLLOWED
- ▁SEEM
- ▁HOLD
- ▁HAPPY
- ▁LONGER
- ▁ACROSS
- ▁BU
- BE
- ▁ELSE
- ▁PLAY
- ▁SOUL
- ▁STAND
- ▁ARMS
- ▁SCHOOL
- ▁PRINCESS
- ▁CERTAINLY
- LT
- ▁ENGLISH
- ▁SEVEN
- ▁PER
- ▁IDEA
- ▁LE
- ▁BOOK
- ▁FEELING
- ▁HUSBAND
- ▁LINE
- PT
- THOUGH
- ▁OUGHT
- ▁RICH
- IP
- ▁VIEW
- ▁DREAM
- ▁SENSE
- ▁LO
- ▁READY
- ▁CARRIED
- ▁M
- ▁REGARD
- ▁CHANCE
- ▁WANTED
- ▁LIVED
- ▁LATER
- ▁INTEREST
- ▁EN
- ▁EFFECT
- ▁CLA
- ▁CHANGE
- ▁CA
- ▁REAL
- ▁SUPPOSE
- LES
- ▁ART
- ▁TIMES
- ▁MAR
- IF
- ▁WILD
- ▁ADDED
- ▁LETTER
- IAL
- ▁THANK
- ▁PARTY
- LAND
- ▁PAY
- ▁BREATH
- ▁TAKING
- ▁COURT
- ▁COUNT
- ILY
- ▁COMMON
- ▁PUBLIC
- ▁PURPOSE
- ▁PRETTY
- ▁TRUTH
- ▁STAY
- ▁EM
- NT
- ▁SH
- ▁REMEMBER
- ▁ENTERED
- ▁RECEIVED
- RED
- ▁SPOKE
- ▁USUAL
- ▁THY
- ▁FIGURE
- ▁LED
- ▁TREES
- ▁TRIED
- ▁FORWARD
- NED
- ▁HAT
- ▁BLOOD
- ▁BEYOND
- ▁BANK
- ▁LIVING
- ▁JOY
- ▁HOURS
- ▁ENGLAND
- ▁STONE
- VI
- GE
- ▁SWEET
- ▁POSITION
- ▁FRONT
- ▁GIRLS
- ▁VISIT
- ▁CHARACTER
- ▁SPIRIT
- ▁TA
- BO
- QUE
- QUI
- ▁OPENED
- ▁OCCASION
- ▁MEET
- ▁EIGHT
- ▁REMAIN
- ▁PASS
- TO
- ▁NORTH
- ▁SERVICE
- ▁SISTER
- ▁SE
- ▁BEAR
- ▁PLEASURE
- ▁CHIEF
- ▁FOREST
- ▁BELL
- ▁EXPERIENCE
- ▁STRUCK
- ▁CARRY
- ORY
- ▁WARM
- 'NO'
- ▁WORTH
- ▁SAYING
- ▁SILENCE
- ▁CROSS
- ▁JE
- ▁H
- ▁BEAUTY
- PH
- ▁DEAL
- KE
- ▁SECRET
- DY
- ▁MILES
- ▁LU
- ▁DOING
- ▁BOYS
- ▁CROWD
- ▁ACCOUNT
- REW
- ISM
- TI
- ▁FE
- ▁NONE
- ▁RO
- ▁NEARLY
- ▁CHA
- ▁YOUTH
- ▁CAP
- HA
- ▁BIT
- ▁LIE
- ▁ATTENTION
- ▁STANDING
- ▁STAR
- ▁RESPECT
- ▁FURTHER
- ATIONS
- ▁ROCK
- ▁BOW
- EM
- ▁EARLY
- ▁MOUTH
- ▁BOAT
- UB
- ▁IMMEDIATELY
- ▁EXCEPT
- SHIP
- ▁PICTURE
- ▁BRIGHT
- ▁WA
- ▁GREW
- ▁LEAD
- ▁CUR
- ▁TONE
- RRY
- RS
- ▁WIDE
- CHE
- ▁FORTH
- IG
- OS
- ▁NEITHER
- ▁YOURSELF
- ▁SMILE
- ▁DRESS
- ▁OPINION
- ▁HAPPENED
- ▁WAIT
- ▁SIT
- ▁SHIP
- ▁AH
- ▁DESIRE
- ▁THICK
- ▁THIRD
- ▁GRAND
- ▁FOLLOW
- ▁GATHER
- ▁HILL
- ALLY
- ▁COMPANY
- ▁CHAIR
- DER
- ▁TOP
- ▁PAR
- ▁LENGTH
- ▁THIRTY
- ▁MINE
- ▁MI
- ▁EAT
- ▁EQUAL
- ▁AFRAID
- ▁FRESH
- ▁TAIL
- ▁FILLED
- ▁SU
- ▁MINUTES
- ▁FAST
- BU
- ▁ENTER
- ▁QUEEN
- ▁UTTER
- AG
- ▁FLOOR
- ▁SHA
- DI
- ▁HEAVEN
- ▁STOPPED
- ▁GUARD
- ▁HALL
- ▁BAR
- ▁COMPLETE
- ▁NINE
- ▁WEEK
- ▁GOLD
- VA
- ▁FIFTY
- ▁BEAT
- ▁PRESS
- ▁ATTEMPT
- ▁EXCLAIMED
- DO
- ▁CONF
- ▁SEEMS
- ▁STARTED
- ▁EL
- ▁HAR
- ▁EXPRESSION
- ▁TRA
- ▁WONDERFUL
- ▁SAINT
- ▁APPEARANCE
- ▁GRAVE
- ▁OFFICE
- ▁INSTEAD
- ▁SILENT
- ▁SOUTH
- ▁AGO
- ▁CAMP
- ▁LOVED
- ▁PATH
- ▁LEARN
- ▁PLAN
- ▁GOVERNMENT
- OUR
- PPED
- ▁SITTING
- ▁SEAT
- TEN
- RESS
- SIDE
- ▁MOVED
- ▁DIE
- ▁RESULT
- ▁SPRING
- ▁PLEASE
- ▁RI
- ▁NATURAL
- ▁ANNE
- ▁STA
- ▁CORNER
- ▁WALL
- ▁IMPOSSIBLE
- ▁BROWN
- ▁SUIT
- ▁MUSIC
- PI
- ▁TRY
- ▁DIED
- ▁TEARS
- ▁JU
- ▁COMFORT
- ▁DANGER
- ▁MEASURE
- ▁PROPERTY
- ▁BORN
- CON
- ▁CR
- ▁BROKEN
- ▁MASS
- EVER
- IER
- ▁EXPRESS
- ▁POCKET
- ▁SCARCE
- ▁SELF
- NY
- ▁MADAME
- ▁LAUGHED
- ▁TOUCH
- ▁APPEAR
- ▁LONDON
- ▁SAFE
- ▁SHARP
- ▁ATTACK
- ▁JANE
- ▁COVERED
- ▁OUTSIDE
- ▁WHATEVER
- ▁PLACED
- ▁RACE
- ▁SHORE
- ▁LAID
- ▁ROMAN
- ▁PERSONAL
- UP
- AU
- ▁REMAINED
- ▁HAPPINESS
- ▁AFTERNOON
- ▁DISTANCE
- ▁STORM
- ▁MARRIED
- ▁FRANK
- ▁VALLEY
- ▁BOUND
- ▁TALKING
- ▁JO
- ▁QUICK
- ▁STEP
- AND
- ▁ARMY
- ▁EFFORT
- ▁FRENCH
- ▁V
- LEY
- ▁PARTICULAR
- ▁START
- ATING
- OO
- LU
- ▁TRANS
- ▁HAPPEN
- ▁HABIT
- ▁VILLAGE
- ▁BELOW
- ▁GENTLEMAN
- BLE
- ▁BILL
- ▁SAVE
- ACT
- ▁SOCIETY
- ▁MAJOR
- ▁QUARTER
- ▁SKY
- ▁GUESS
- CY
- ▁SAD
- ILE
- ▁SL
- ▁PLEASANT
- ▁STRAIGHT
- ▁STRENGTH
- ▁FORTUNE
- ▁WRONG
- ▁COMMAND
- ▁BOX
- ▁QUIET
- ISE
- ▁JA
- IBLE
- ▁TREAT
- ▁GLANCE
- ▁NECESSARY
- ▁FORGET
- ▁MOUNTAIN
- ▁WINTER
- ▁DREW
- ▁WAV
- ▁PLAIN
- ▁ENTIRELY
- ▁TEA
- ▁SOFT
- ▁QUICKLY
- ▁INFLUENCE
- ▁DINNER
- ▁FOOD
- ▁CHAPTER
- ▁YE
- ▁REACH
- ▁GETT
- ▁PAPER
- ▁GIVING
- ▁BEGINNING
- ▁SEND
- ▁FIGHT
- ▁SCENE
- ▁RUSH
- ▁PI
- ▁MARK
- ▁NA
- ▁BROKE
- ▁CLASS
- ▁BATTLE
- ▁EASY
- ▁GROUP
- BY
- ▁STOP
- ▁DIRECTION
- ▁BESIDE
- ▁MOR
- HAM
- UFF
- ▁WEST
- ▁OBLIG
- ▁COLOR
- ▁SINGLE
- ▁EASILY
- ▁PALE
- ▁ACTION
- ▁INTER
- ▁STRANGER
- ▁WI
- ▁CONVERSATION
- ▁BLOW
- ▁MARY
- ▁MU
- ▁TERRIBLE
- ▁THINKING
- ▁PULL
- ▁MOON
- AB
- ▁REP
- ▁ESPECIALLY
- ▁HEAVY
- ▁SICK
- ▁LUCK
- ▁TRAIN
- ▁GUN
- ▁GU
- ▁WAITING
- ▁TURNING
- ITIES
- ▁BREAD
- ▁BELONG
- ▁LOUD
- ▁REPORT
- ▁AMERICAN
- ▁JOURNEY
- ▁ANXIOUS
- ▁LIPS
- ▁KILLED
- IGHT
- GO
- ▁CONSIDER
- ▁PROBABLY
- ▁PALACE
- ▁HISTORY
- ▁LAKE
- ▁SHUT
- ▁SIMPLY
- WA
- ▁PAIN
- ▁HORSES
- ▁SEEING
- FULLY
- ▁EXPECTED
- ▁EVIL
- ▁BURN
- ▁SIMPLE
- ▁DIRECT
- IFIED
- HER
- ▁SLOWLY
- ▁LEG
- UGH
- ▁SAIL
- RIC
- ▁WISHED
- ▁RULE
- ▁LAD
- ▁MORAL
- ▁MOVE
- ▁FOLLOWING
- ▁SILVER
- ▁SEARCH
- ▁CHANGED
- ▁HANDSOME
- ▁COULDN
- ▁PASSION
- ▁HU
- ▁SMILED
- ▁STREAM
- ▁CONCERN
- ▁PRESENCE
- STER
- ▁CONTENT
- ▁BOARD
- ▁SHAPE
- ▁DECIDED
- ▁MARRY
- ▁PERFECT
- ▁STEPS
- ▁CLOSED
- ABLY
- DEN
- ▁WEAK
- ▁SUFFICIENT
- ▁SHADOW
- ▁EXPECT
- ▁SPOT
- ▁DUTY
- ▁SPEAKING
- ▁BESIDES
- ▁FIELD
- ▁ROLL
- ▁TRYING
- ▁EAR
- ▁VER
- ▁MARRIAGE
- ▁SHOT
- ▁SLAVE
- ▁MILL
- ▁NATION
- ▁NECK
- ▁ARRIVED
- ▁TALL
- ▁GRACE
- LIN
- ▁FORTY
- ▁BROAD
- ▁SUMMER
- ▁COUSIN
- ▁BEGIN
- ▁CATCH
- ▁FO
- ▁PE
- ▁MEANT
- ▁THIN
- IO
- ▁GROW
- ▁TRO
- ▁NOTICE
- ▁CRY
- ▁FISH
- ▁COM
- ▁DEGREE
- ▁HONOUR
- ▁UNDERSTOOD
- ▁SHOP
- ▁TRUST
- ▁CONDITION
- ▁FARM
- IZ
- ▁SUDDEN
- ▁SUCCESS
- ▁SURPRISE
- ORS
- ▁THOUGHTS
- UND
- ▁ALLOWED
- ITE
- ▁NARROW
- ▁GLASS
- ▁SERIOUS
- ▁STICK
- ▁GAME
- ▁SPENT
- ▁SELL
- ▁GRA
- ▁LOWER
- ▁RAISED
- ▁PIN
- ▁ALLOW
- ▁CALM
- FT
- ▁L
- ▁PU
- ▁FIT
- ACH
- ▁SUFFER
- ▁LEGS
- ▁SUPPORT
- ▁FRANCE
- ▁LATTER
- OV
- ▁TASTE
- ▁GATE
- ▁INSTANT
- ▁MINUTE
- ▁OFFER
- ▁GREATER
- ▁PORT
- ILL
- ▁INDIVIDUAL
- ▁AUNT
- ▁EAST
- ▁ADVANTAGE
- ▁FASHION
- ▁SWORD
- ▁TWELVE
- ▁HONOR
- ▁MOVEMENT
- ▁ISLAND
- ACK
- ▁WOODS
- NCH
- ▁PLEASED
- ▁ENEMY
- ▁RAIN
- ▁VARIOUS
- ▁OBSERVED
- ▁LADIES
- ▁BELIEVED
- ▁CAST
- ▁RISE
- ▁BALL
- ▁MONTHS
- ICE
- ▁MURDER
- ▁CONDUCT
- ▁SOCIAL
- ▁TENDER
- ▁LEARNED
- ▁FRA
- ▁FIRM
- CLOCK
- ▁PREVENT
- ▁RING
- LIE
- ▁GOLDEN
- ▁DECLARED
- ▁BUILDING
- ▁WRITE
- ▁ATTEND
- ▁CARRIAGE
- ▁SITUATION
- IDE
- ▁NOBLE
- ▁HUNG
- ▁RUNN
- ▁YELLOW
- ▁KNOWLEDGE
- ▁YORK
- ▁PUSH
- ▁LEAVING
- ▁POST
- ▁CIRCUMSTANCES
- ▁SEEK
- ▁FINALLY
- ▁MAIN
- ▁LETTERS
- ▁POL
- ▁ADD
- FE
- ▁ANCIENT
- ▁MARCH
- ▁WINE
- ▁STATES
- ▁WALLS
- ▁PRISONER
- ▁ISABEL
- ▁TEMPER
- ▁JUDGE
- ▁FAINT
- ▁POND
- ▁GRASS
- ▁FAM
- OUT
- ▁LAUGH
- ▁GRAY
- IGN
- ▁ESCAPE
- ▁KILL
- ▁PRAY
- ▁COMES
- ▁ABSOLUTE
- ▁BLIND
- ▁WIN
- ▁HOST
- ▁MERELY
- ▁RID
- ▁EVERYBODY
- ▁MATERIAL
- ▁STRETCH
- ▁DUE
- ▁ROW
- ▁TIN
- ▁PROMISE
- ▁LISTEN
- ▁WALKING
- ▁COMPANION
- ▁INDIAN
- ▁BREAK
- ▁BENEATH
- ▁RUIN
- ▁EDGE
- ▁WOR
- ▁FORMER
- ▁WORSE
- ▁EVIDENTLY
- ▁HARM
- ▁CENT
- ▁PIECE
- ▁LOT
- ▁PRESIDENT
- ▁SPECIAL
- ▁LABOR
- ▁HEALTH
- GA
- ▁PLACES
- ▁BEN
- ▁SOMEWHAT
- ▁DROPPED
- ▁AFFECTION
- ▁EXACTLY
- ▁DARKNESS
- ▁FALLEN
- ▁DRESSED
- ▁BILLY
- ▁ACCEPT
- ▁FL
- ▁HOT
- ▁REPEATED
- ▁MEETING
- PA
- ▁PERIOD
- ▁HONEST
- ▁INSTANCE
- ▁FLA
- ▁PASSAGE
- ▁NE
- ▁POSSESSION
- ▁WEAR
- ▁PEACE
- ▁COAT
- ▁HOUSES
- ▁MOUNTAINS
- ▁FIFTEEN
- ▁WELCOME
- ▁YARD
- ▁PROPER
- ▁MUS
- ADE
- ▁RECEIVE
- ▁SKIN
- ▁GROWN
- ▁AFTERWARDS
- ANG
- ▁DA
- ▁DIFFICULT
- ▁PERSONS
- ▁ACCORDING
- ▁FARMER
- ▁SPEECH
- ▁IMPORTANT
- PAR
- ▁PERFECTLY
- ▁MIN
- ▁CONSIDERED
- ▁NU
- ▁DEPEND
- ▁MORROW
- ▁MOUNT
- ▁KISS
- ▁LYING
- ▁SUFFERING
- ▁EXIST
- ERY
- OOK
- BA
- ▁PAINT
- AH
- ▁CAT
- ▁PURE
- ▁WISE
- ▁PRIVATE
- ▁REBECCA
- ▁VESSEL
- ▁CLEAN
- ▁GENTLEMEN
- ▁IRON
- ▁STORE
- ▁FUR
- ▁INDIANS
- ▁LOSE
- ▁BATH
- ▁NEWS
- ▁CHI
- ▁FA
- ▁CHARGE
- ▁PRIEST
- ▁WRITTEN
- ▁FORGOTTEN
- ▁TRAIL
- ▁CLOTHES
- ▁ALIVE
- ▁SUB
- ▁REPLY
- ▁THROW
- ▁AB
- ▁SOLDIERS
- ▁ISN
- ▁COTTAGE
- ▁COURAGE
- ▁CONTAIN
- ▁BUILT
- ▁PAID
- ▁HUNT
- ▁CASTLE
- HOOK
- ▁MERE
- GGED
- ▁NI
- ▁UNC
- ▁PREPARED
- ▁BARE
- ▁SMILING
- ▁SPREAD
- ▁WEATHER
- ▁EDWARD
- ▁GERMAN
- ▁CURIOUS
- ▁SERVANT
- ▁DISCOVERED
- ▁TRAVEL
- EY
- ▁DANCE
- ▁PEN
- BR
- GEN
- ▁BREAKFAST
- ▁CHAMBER
- ▁WILLIAM
- ▁TERROR
- ▁SPITE
- ▁TIRED
- ▁LOCK
- ▁CONSIDERABLE
- TLE
- ▁MANAG
- ▁DRY
- ▁FINISHED
- ▁MILLION
- ▁FRE
- ▁MIS
- ▁PASSING
- ▁DRAW
- ▁BON
- ▁VA
- ▁VEN
- ▁MAKES
- ▁VAIN
- ▁BOTTOM
- ▁DRINK
- ▁FUTURE
- ▁RACHEL
- ▁SORROW
- ▁SIXTEEN
- ▁KNIT
- ▁PROUD
- WI
- ▁TOBY
- ▁NOISE
- ▁SLIGHT
- ▁PROCEED
- ▁FER
- ▁COVER
- ▁DRAWING
- ▁FAVOR
- ▁CATHERINE
- ▁NEWSPAPER
- ▁NOBODY
- ▁ROOF
- ▁WEALTH
- ▁PROVE
- ▁DRAWN
- TTED
- OKE
- ▁DETERMINED
- ▁DOG
- ▁REMEMBERED
- ▁OPENING
- ▁FLOWERS
- ▁GENTLE
- ▁KNIGHT
- ▁RECOVER
- ▁DESERT
- ▁MOTION
- ▁NICE
- ▁INTENTION
- ▁GROWING
- ▁CLOUD
- ▁MONTH
- HOOD
- ▁POT
- UDE
- ▁PLANT
- ▁MAD
- ▁ENJOY
- ▁FAT
- ▁COR
- ▁KNOWING
- ▁IDEAS
- IZED
- ▁CHEEK
- ▁EUROPE
- ▁KNOCK
- ▁ALARM
- ▁TONGUE
- ▁SPACE
- ▁PATSY
- ▁MISTRESS
- ▁HENRY
- ▁JERRY
- ▁LIKED
- ▁PLAYED
- ▁BOOKS
- ▁MODER
- ▁CORN
- ▁ELIZABETH
- ▁CLUB
- ▁BRAIN
- ▁TROOP
- ▁COOK
- ▁DU
- ▁FUN
- DAY
- ▁QUA
- ▁FLOW
- ▁DARE
- ▁DELIGHT
- ▁WOUND
- ▁DESCEND
- ▁EVERYWHERE
- ▁FRIGHTENED
- ▁GEORGE
- ▁PECULIAR
- ▁MACHINE
- ▁PATIENT
- ▁MEADOW
- ▁PEASANT
- ▁BURST
- ▁ORDINAR
- ▁SONG
- ▁BRAVE
- ▁EXISTENCE
- ▁LUCY
- ▁J
- ▁CAREFULLY
- ▁PRESENTLY
- ▁GEN
- ▁COW
- LLY
- ▁PROMISED
- UOUS
- ▁LIFTED
- ▁MEANING
- ALL
- ▁FAIL
- NER
- ▁REGULAR
- ▁VIRTUE
- ▁STUDY
- ▁PROTECT
- ▁FOND
- ▁FANCY
- ▁STOCK
- ▁KEY
- ▁JUSTICE
- ▁PACK
- LET
- ▁AFFAIRS
- ▁DIFFICULTY
- ▁WORE
- ▁COST
- ▁HEAT
- ▁SHOULDER
- ▁OFFERED
- ▁MISTAKE
- ▁DOLLARS
- ▁LOOKS
- QUA
- ▁BREAST
- ▁PRINCIPLE
- ▁CHARLES
- ▁TEETH
- ▁OCCUPIED
- ▁DROP
- ▁PAPA
- ▁SHEEP
- ▁KNOWS
- ▁DECK
- ▁BORE
- ▁EXC
- ▁SURPRISED
- ▁STATION
- ▁PL
- ▁PR
- ▁OURSELVES
- ▁SYMPATHY
- ▁RUTH
- ▁EXCITED
- ▁CONTROL
- ▁ANGRY
- ▁IMAGINATION
- ▁WITNESS
- ▁HOLDING
- THER
- DA
- ▁TRADE
- ▁CREATURE
- ▁SISTERS
- ▁JOIN
- LAS
- ▁ALTOGETHER
- ▁CIVIL
- ▁EMPTY
- ▁LEAP
- ▁HURT
- ▁BOLD
- ▁TASK
- ▁POLICE
- ▁DRAGON
- ▁MAID
- ▁CLAIM
- ▁SHAME
- ▁PHYSICAL
- ▁CONC
- ▁SEIZED
- ▁OB
- ▁LIVES
- ▁HEIGHT
- ▁GI
- ▁PAL
- ▁CHARMING
- ▁FEELINGS
- ▁SERVANTS
- ▁DELIVER
- ▁FRUIT
- ▁SATISFIED
- ▁STRUGGLE
- ▁WROTE
- ▁CONCEAL
- ▁MOVING
- ▁FLASH
- ▁OPPOSITE
- ▁HURRY
- ▁ROUGH
- ▁PRICE
- ▁AWFUL
- ▁SAND
- ▁SLIPP
- ▁SHOWN
- ▁SPRA
- ▁AGREED
- ▁FIXED
- ▁PERCEIVED
- ▁UPPER
- ▁FINGER
- ▁FINGERS
- ▁EAGER
- LF
- ▁EARS
- LIGHT
- ▁IMAGINE
- ▁LIKELY
- ▁COAST
- ▁UNITED
- ▁VAN
- ▁EXPLAINED
- ▁TELLING
- ▁DANGEROUS
- ▁DICK
- ▁COOL
- ▁CAL
- ▁INSIST
- BI
- ▁SECURE
- ▁HILLS
- ▁SAN
- ▁CHEER
- ▁FILL
- ▁BUY
- ZA
- HI
- ▁CLOTH
- ▁POSSESSED
- ▁ADVANCE
- ▁METHOD
- ATIVE
- ▁GREATLY
- ▁SMOKE
- ▁HIGHER
- ▁COMPANIONS
- ▁ANIMALS
- ▁GALL
- ▁QUIETLY
- ▁TRAVELL
- ▁RESOLVED
- ▁FLEW
- ▁CARLYLE
- ▁MEMORY
- ▁RESIST
- ▁GRAHAM
- ▁LAUGHING
- ▁FAITH
- ▁BIRD
- CRI
- ▁LEAVES
- ▁AMERICA
- ▁DEMAND
- BOARD
- ▁AWAKE
- ▁CURIOSITY
- ▁LANGUAGE
- ▁VIOLENT
- ▁AWARE
- ▁DOUBLE
- ▁LOOSE
- LIKE
- ▁ADAM
- ▁RISING
- ▁HOTEL
- ▁BAND
- ▁ENGAGED
- ▁HEADS
- ▁LOG
- ▁FORMED
- ▁WINDOWS
- ▁PREFER
- RUS
- ▁THROWN
- ▁ARCH
- ▁PAUSE
- ▁SERVE
- KIN
- ▁FALLING
- ▁VO
- ▁WHISPERED
- ▁POWERFUL
- ▁ER
- ▁DEPART
- ▁CRUEL
- ▁EXAMPLE
- ▁SMOOTH
- ▁INTRODUC
- ▁RELIGION
- ▁SEVENTEEN
- ▁ABSENCE
- ▁PRINT
- ▁SHINING
- ▁ICE
- ▁POET
- ▁DREADFUL
- ▁REQUIRED
- ▁ORIGINAL
- ▁POINTED
- ▁INSIDE
- ▁BROTHERS
- ▁PRODUCED
- ▁SPOKEN
- ▁CREATURES
- ▁FLY
- ▁TOM
- ▁PURSU
- ▁SYSTEM
- ▁EXCELLENT
- ▁EXCITEMENT
- ▁MIDDLE
- ▁FALSE
- ▁REGRET
- ▁RAY
- ▁PHYSICIAN
- ▁COP
- ▁VALUE
- ▁TOUCHED
- ▁FLAT
- ▁OAK
- ▁SUM
- ▁LOSS
- ▁PAPERS
- ▁STEPP
- ▁REVER
- ▁SHADE
- SOME
- ▁LISTENED
- ▁N
- ▁DISCOVER
- ▁BITTER
- TERN
- ▁HOLE
- ▁ADVANCED
- ▁PICK
- ARTAGNAN
- ▁CORPORAL
- ▁ASLEEP
- ▁TEMPLE
- ▁INDICAT
- IUM
- ▁FARTHER
- ▁EXCUSE
- ▁FLU
- ▁NOSE
- ▁SIXTY
- ▁SUPPOSED
- ▁PROVED
- ▁RATE
- ▁SHOULDERS
- ▁AFFAIR
- ▁FIELDS
- ▁REMARKED
- AVE
- ▁WEEKS
- ▁ESTABLISH
- ▁PARIS
- ▁ADMIT
- ▁NEIGHBOR
- ▁ATTRACT
- ▁CUSTOM
- ▁DISTINGUISH
- ▁SURFACE
- ▁COUPLE
- ▁DEVIL
- ▁LIMIT
- ▁ROYAL
- ▁FOOL
- ▁RARE
- ▁PRIDE
- ▁PROFESSOR
- ▁SAKE
- ▁DALE
- ▁VAST
- ▁REFUSED
- ▁FAILED
- ▁BAG
- ▁ROB
- ▁WASH
- ▁FAIRY
- ▁FREQUENT
- ▁MARILLA
- ▁PROGRESS
- ▁RELIEF
- ▁DROVE
- ▁DOZEN
- ▁AHEAD
- ▁ADVENTURE
- ▁GRANT
- ▁PRIM
- ▁MENTAL
- ▁PAIR
- ▁IMPRESSION
- ▁WOUNDED
- ▁FULLY
- ▁DISAPPEARED
- ▁MILE
- ▁DRIVE
- ▁MUD
- ▁SIZE
- ▁ANIMAL
- ZE
- ▁GRE
- ▁REPRESENT
- ▁ACQUAINTANCE
- ▁INSTRUMENT
- ▁SPLENDID
- ▁UNKNOWN
- ▁CORONEL
- ▁EMPEROR
- ▁EARNEST
- ▁EXTEND
- ▁BRIEF
- ▁RENDER
- ▁PARENTS
- ▁GENTLY
- ▁CALLING
- ▁TRIBE
- ▁CHRISTIAN
- ▁INTERESTING
- ▁LAMP
- ▁JIMM
- ▁DIV
- ▁LOVER
- UCH
- ▁HID
- ▁NEEDED
- ▁ORDERED
- ▁MEAL
- ▁SLOW
- ▁DAM
- ▁CLOUDS
- ▁DAN
- ▁GAR
- ▁EXPLAIN
- ▁QUI
- ▁CLIMB
- ▁HURRIED
- ▁MURMUR
- ▁SWIFT
- ▁ARTHUR
- ▁JEFF
- ▁KINGDOM
- ▁MESSAGE
- ▁PROTEST
- ▁ORGAN
- ▁RISK
- ▁FORGIVE
- ▁OCCURRED
- ▁PEARL
- ▁ODD
- ▁INFORMATION
- ▁BUSY
- ▁TRI
- ▁LACK
- ▁BAY
- ▁FLEET
- ▁CROWN
- ▁WAITED
- ▁BIRDS
- ▁PITY
- ▁SUCCEEDED
- ▁INFORMED
- ▁WISHES
- ▁DIRECTLY
- ▁CABIN
- ▁AUGUST
- ▁COUNTENANCE
- ▁HORROR
- ▁PHILIP
- ▁POPULAR
- ▁PREVIOUS
- ▁CONTRARY
- ▁ARTICLE
- ▁DIFFERENCE
- ▁HIDDEN
- ▁HUGE
- ▁AUTHORITY
- ▁POUND
- ▁JUMP
- ▁SPI
- ▁SHAKE
- ▁EVENTS
- ▁FRO
- ▁LEAN
- ▁CRO
- ▁TRIM
- ▁SHARE
- ▁FISHER
- ▁SETTLED
- ▁QUESTIONS
- ▁SI
- ▁VAL
- ▁APPROACHED
- ▁SUGGESTED
- ▁CONTINU
- ▁PERFORM
- ▁ACKNOWLEDG
- ▁CLIFF
- ▁COLONEL
- ▁GHOST
- ▁MAJESTY
- ▁EMOTION
- ▁SUPPER
- ▁DISTANT
- ▁INTERESTED
- ▁JACK
- ▁HUM
- ▁TRAMP
- ▁BRI
- ▁POUR
- ▁SHIPS
- ▁CHAIN
- ▁DY
- ▁RANK
- ▁MATTERS
- ▁LOVELY
- AW
- ▁PAT
- ▁WORKING
- ▁CONSEIL
- ▁EVIDENCE
- ▁MERCHANT
- ▁SOLEMN
- ▁CONSTANT
- ▁MINISTER
- ▁OFFICIAL
- ▁SENTIMENT
- ▁CENTURY
- ▁DELAY
- ▁JAMES
- ▁MATCH
- ▁FOREIGN
- ▁AROSE
- ▁BEAST
- ▁BAB
- ▁WIT
- ▁REMARKABLE
- ▁THOR
- ▁COMPAR
- ▁MAL
- ▁NEARER
- ▁FOURTH
- ▁GREY
- ▁MENTION
- ▁RUBB
- ▁CHARM
- ▁BARON
- ▁DESIRED
- SCAR
- ▁HOPED
- ▁TEACHER
- ▁MON
- ITCH
- BEL
- ▁PARTS
- ▁EIGHTY
- LAC
- GGING
- ▁REFLECT
- ▁COLLECT
- ▁BULL
- ▁CONSCIOUS
- ▁MOMENTS
- ▁DISTURB
- ▁COLLEGE
- ▁EGGS
- ▁STUPID
- ▁YESTERDAY
- ▁EXAMINE
- ▁FAULT
- ▁DEPTH
- ▁ROOT
- ▁MOUSE
- ▁SOUGHT
- ▁TURTLE
- ▁NATIVE
- ▁CRACK
- ▁SOLD
- ▁INVIT
- ▁PICKED
- ▁CEASED
- ▁HEARING
- ▁MIDS
- ▁PLAYING
- ▁STAGE
- ▁UNTO
- ▁GAIN
- ▁MIST
- ▁ORDERS
- ▁KNEES
- ▁TALE
- ▁DISTINCT
- ▁BENT
- ▁DESPAIR
- ▁TRIUMPH
- ▁SQUARE
- ▁THROAT
- ▁BOUGHT
- ▁PERMIT
- ▁SPEND
- ▁TRIP
- ▁THREATEN
- ▁ROME
- INESS
- ▁EXPOS
- GON
- ▁WRITING
- ▁INCREASED
- ▁PORTION
- ▁TENT
- IUS
- ▁YO
- ▁INTENDED
- ▁NAMED
- RATION
- ▁NOTIC
- ▁PIPE
- ▁WILLING
- ▁INSTANTLY
- ▁SERVED
- ▁BAL
- ▁POSSESS
- ▁CRE
- ▁ADMIRATION
- ▁LIBERTY
- ▁OPPORTUNITY
- ▁SELDOM
- ▁BIRTH
- ▁GLOW
- ▁INCLUD
- ▁REQUEST
- ▁TYPE
- ▁SLEPT
- ▁CRIME
- ▁MOTIVE
- ▁ELSIE
- ▁BEGUN
- ▁CONSENT
- ▁ADMITTED
- ▁AVOID
- ▁ADDRESS
- ▁HATE
- ▁DEMANDED
- ▁APPARENTLY
- ▁SUGGESTION
- ▁CONSIDERATION
- ▁BLESS
- ▁PROCEEDED
- NCY
- ▁PRISON
- ▁CONT
- ▁SHOUTED
- ▁FACES
- ▁SPIRITS
- ▁DEVELOP
- ▁ACCIDENT
- ▁ADVICE
- ▁INNOCENT
- ▁INSTINCT
- ▁UNCONSCIOUS
- ▁MYSTERIOUS
- ▁PRETEND
- ▁PEEP
- ▁ANYONE
- ▁DUKE
- ▁PLUM
- VILLE
- ▁SEVERE
- ▁ALAS
- ▁DELIGHTED
- ▁ISSUE
- ▁ASKING
- ▁CROW
- ▁ACCEPTED
- ▁RIDE
- ▁DOORS
- ▁TAR
- ▁PREPAR
- ▁SUGGEST
- WOOD
- ▁CITIZEN
- ▁ENTRANCE
- ▁LINCOLN
- ▁POLITICAL
- ▁PRACTICAL
- ▁STIFF
- ▁WIDOW
- ▁CAPITAL
- ▁CLEVER
- ▁MAMMA
- ▁CREDIT
- ▁OBEY
- ▁STRING
- ▁DAILY
- ▁ARGUMENT
- ▁HEAP
- ▁APARTMENT
- ▁FLIGHT
- ▁ELDER
- ▁PUR
- ▁PAGE
- ▁DUST
- ▁GAZE
- ▁NATIONAL
- ▁BABY
- DDING
- ISTS
- ▁TEACH
- ▁STREETS
- CAL
- ▁GE
- AFF
- ▁GOES
- ▁POSSIBL
- UNG
- ▁LINES
- GUE
- ▁VOTE
- ▁HUNTING
- ▁QUO
- ▁RESEMBL
- ▁BASKET
- ▁CIRCLE
- ▁CONSEQUENCE
- ▁KITCHEN
- ▁TREASURE
- ▁NEVERTHELESS
- ▁FANCI
- ▁ASSEMBL
- ▁GRIEF
- ▁VEIL
- ▁SEASON
- ▁INVENT
- ▁VIRGINIA
- ▁HUT
- ▁GUEST
- ▁ROAR
- ▁BEHOLD
- ▁VICTORY
- ▁CAPABLE
- ▁DULL
- ▁SHOE
- ▁FLOAT
- ▁MERRY
- ▁IMMEDIATE
- ETH
- ▁ELEANOR
- ▁EXPLANATION
- ▁PARLIAMENT
- ▁PRINCIPAL
- ▁PROPORTION
- ▁RESOLUTION
- ▁UNUSUAL
- ▁BLUFF
- ▁NINETEEN
- ▁SENSATION
- ▁VISIBLE
- ▁INCOME
- ▁FATE
- ▁SUPER
- ▁LAUGHTER
- ▁EASE
- ▁LOAD
- ▁JEW
- ▁ZE
- ▁FEVER
- ▁WEDDING
- ▁JOINED
- ▁TRACE
- ▁LEADER
- ▁CLEARLY
- ▁FLOWER
- ▁TERMS
- ▁EMPLOYED
- OCK
- ▁PARTICULARLY
- ▁MEMBERS
- ▁CONFESS
- ▁GRO
- ▁ADDRESSED
- ▁CHRIST
- ▁ACCOMPANI
- ▁AFFORD
- ▁AMOUNT
- ▁BRILLIANT
- ▁COMMUNICAT
- ▁FIERCE
- ▁RECORD
- ▁SACRIFICE
- ▁TEMPT
- ▁CORDIAL
- ▁COLOUR
- ▁PROOF
- ▁ESTATE
- ▁PARDON
- ▁ADVIS
- ▁ATTITUDE
- ▁IMPORTANCE
- ▁BOOT
- ▁SHOCK
- ▁FIR
- ▁PLENT
- ▁HIT
- ▁MEMBER
- ▁SUR
- ▁SEATED
- ▁MAG
- AVING
- ▁FAVOUR
- ▁REMARK
- ▁DIM
- ▁FAITHFUL
- ▁SAVED
- CHI
- ▁SIN
- THE
- ▁CONFIDENCE
- ▁EXTRAORDINARY
- ▁FORTUNATE
- ▁MISFORTUNE
- ▁PATIENCE
- ▁RELIGIOUS
- ▁SATISFACTION
- ▁POSITIVE
- ▁SIMILAR
- ▁EXCHANG
- ▁RETREAT
- ▁FLESH
- ▁ADMIRE
- ▁SPIRITUAL
- ▁DAWN
- ▁BURIED
- ▁URGE
- ▁SUNDAY
- ▁FOX
- ▁EMMA
- ▁NURSE
- ▁SNAPP
- ▁PARK
- ▁OBTAIN
- ▁RECOGNIZED
- ▁SPEED
- ▁MAGIC
- ▁LAWS
- ▁REMOVED
- ▁HAM
- ▁PRESERV
- ▁AID
- HOUSE
- ▁MENTIONED
- ▁CONSCIENCE
- ▁CONTEMPT
- ▁DETAIL
- ▁IMMENSE
- ▁NERVOUS
- ▁PRISCILLA
- ▁UNFORTUNATE
- ▁UNHAPPY
- ▁COMPLAIN
- ▁TWICE
- ▁WHISTL
- ▁SNAKE
- ▁WASHINGTON
- ▁PIRATE
- ▁WICKED
- ▁BODIES
- ▁DESIGN
- ▁JASON
- ▁VAGUE
- ▁CONSIST
- ▁GIFT
- ▁ANGEL
- ▁RODE
- ▁FOLD
- ▁BRIDE
- ▁ANGER
- ▁BASE
- ITUDE
- ▁CONCLUDED
- ▁ALTER
- ▁FRI
- ▁PANT
- ▁BID
- ▁HIGHEST
- ▁SAILOR
- MPLE
- ▁OBSERV
- ▁CHEERFUL
- IFICATION
- RID
- ▁DESCRIBED
- ▁BIN
- ▁JEWEL
- ▁ARTIST
- ▁PEER
- ▁NORA
- ▁SKI
- ▁DIAMOND
- ▁ENCOURAGE
- ▁PRIVILEGE
- ▁PROJECT
- ▁ANYBODY
- ▁ENCOUNTER
- ▁HOLLOW
- ▁YIELD
- ▁BOBBY
- ▁SAVAGE
- ▁SOMEBODY
- ▁OTHERWISE
- ▁PRAISE
- ▁PROBLEM
- ▁DISTRESS
- ▁UGLY
- ▁WARRIOR
- ▁MOURN
- ▁RELIEV
- ▁DESK
- ▁FOOLISH
- ▁STARTLED
- ▁SKILL
- SHONE
- ▁LONE
- ▁OBSERVATION
- ▁DENI
- ▁NEST
- ▁SOLDIER
- ▁RELATION
- ▁TRULY
- ▁VISITOR
- ▁OFFICERS
- ERSON
- ▁YA
- ▁EVIDENT
- ▁DREAMS
- ▁KEEPING
- ▁PLAINLY
- ▁DRUNK
- ▁EMBRAC
- ▁INTELLIGENCE
- ▁LIEUTENANT
- ▁PERSUADE
- ▁SURROUNDING
- ▁UNIVERSAL
- ▁GLEAM
- ▁SUPERIOR
- ▁WHEEL
- ▁JEALOUS
- ▁QUEER
- ▁PIERRE
- ▁MILK
- ▁RAIL
- ▁FLUSH
- ▁STAIRS
- ▁JESUS
- ▁HORN
- ▁REGION
- ▁SAFETY
- ▁KA
- ▁GUIDE
- ▁CAKE
- ▁CUP
- ▁INQUIRED
- ▁DEFI
- ▁LESSON
- ▁WRETCHED
- ▁PACE
- ▁TEST
- ▁READING
- ▁ENTIRE
- ▁NET
- ▁DOGS
- ▁COMMANDER
- ▁PRODUCE
- ▁GAINED
- ▁ARRIVAL
- ▁FAMILIAR
- ▁MEANWHILE
- ▁SUSPICION
- ▁CHOICE
- ▁IMPULSE
- ▁THRUST
- ▁PROCESS
- ▁SUMMON
- ▁SHEPHERD
- ▁HASTILY
- ▁GRASP
- ▁COUNTESS
- ▁STYLE
- ▁DWELL
- ▁MERIT
- ▁PITCH
- ▁HUNGRY
- ▁SPORT
- ▁LOUISE
- ▁STERN
- ▁PROVIDED
- ▁ASSUME
- ▁EARLIE
- ▁RAGE
- ▁U
- ▁RAPIDLY
- PORT
- ▁SUCCESSFUL
- ▁FLED
- ▁AGREE
- ▁CONDITIONS
- ▁RELATIONS
- ▁DREAD
- ▁NATURALLY
- ▁EARL
- ▁GAY
- ▁HYPNOTI
- ▁PUTT
- ▁GAZ
- ▁JIM
- ▁PAUS
- ▁PROPOS
- ▁ADMINISTRATION
- ▁ELEVEN
- ▁HOSPITAL
- ▁MAGISTRATE
- ▁STRIKE
- ▁DIGNITY
- ▁GLORY
- ▁BOTTLE
- ▁THRONE
- ▁RECKON
- ▁COSETTE
- ▁MOREOVER
- ▁APPLI
- ▁HIND
- ▁PRODUCT
- ▁POOL
- ▁TRIAL
- HAN
- ▁ERIC
- ▁CUB
- ▁PIECES
- ▁EXCEPTION
- ▁ENJOYED
- ▁DARED
- ▁TRU
- ▁CLOSELY
- ▁RAPID
- ▁AFFECTED
- ▁REQUIRE
- ▁SOFTLY
- ▁BROW
- UCK
- ▁MARKED
- ▁SEVENT
- ▁ELECT
- ▁FORGOT
- ▁CORRECT
- ▁FRANCS
- ▁MARGUERITE
- ▁SCIENCE
- ▁UNEXPECTED
- ▁FOUGHT
- ▁MILITA
- ▁THUNDER
- ▁VOYAGE
- ▁GANEM
- ▁FREEDOM
- ▁NODDED
- ▁CAPTURE
- ▁MORTAL
- ▁OWNER
- ▁POLITE
- ▁VISION
- ▁EDUCATION
- ▁GOVERNOR
- ▁RAV
- ▁REWARD
- ▁HASTE
- ▁REPEAT
- ▁DETERMIN
- ▁PITI
- ▁KNEE
- LINE
- ▁DEVOTED
- ▁INTERRUPTED
- ▁FOLKS
- ▁EXTREME
- ▁APPROACH
- ▁CONTINUE
- ▁BEARING
- ▁CHAP
- ▁ACQUAINTED
- ▁GLIMPSE
- ▁GRADUALLY
- ▁SUNSHINE
- ▁PRACTICE
- ▁SUPPLI
- ▁DAVID
- ▁DRIFT
- ▁SHOWING
- ▁LEVEL
- ▁PROMPT
- ▁QUARREL
- ▁REPRESENTATIVE
- ▁PLUNG
- ▁GIANT
- FALL
- ▁STOUT
- CHA
- WEPT
- ▁GLANC
- ▁SALT
- ▁CHOSEN
- ▁BUCK
- ▁REALIZED
- ▁REALITY
- ▁TUR
- ▁DRIVEN
- ▁CARD
- ▁PRAYER
- ▁TERM
- AID
- ▁HOLY
- ▁ENDURE
- ▁RANGE
- ▁HANG
- ▁SAM
- LAN
- ▁CAVE
- INA
- ▁GRI
- ▁SIGH
- ▁NEIGHBOUR
- ▁COUNCIL
- ▁EXERCISE
- ▁NAUTILUS
- ▁SOMEWHERE
- ▁SYLVIA
- ▁THOROUGH
- ▁VICTIM
- ▁BRIDGE
- ▁COMPELLED
- ▁INCLINED
- ▁OVERCOME
- ▁RESERVE
- ▁ARREST
- ▁PRECIOUS
- ▁DUTCH
- ▁OCEAN
- ▁ACQUIR
- ▁RECALL
- ▁DESTIN
- ▁ATTACH
- ▁SLIM
- ▁WEEP
- ▁CONSCIOUSNESS
- ▁TIGHT
- ▁WAKE
- ▁COMFORTABLE
- ▁ACTIVE
- ▁WINGS
- ▁GRIN
- ▁AFFECT
- ▁WHIT
- ▁IDEAL
- ▁EASTER
- ▁APPROACHING
- ▁CREATED
- ▁PLANS
- ▁INCREASE
- ▁FLYING
- ▁SHOUT
- OES
- MISSION
- ▁ARMED
- ABILITY
- ▁BLUSH
- ▁CONNECTION
- ▁MATTHEW
- ▁MEDICINE
- ▁REMIND
- ▁EXHIBIT
- ▁BLOCK
- ▁DESERVE
- ▁LISTENING
- ▁TITLE
- ▁FLOUR
- ▁FLAME
- ▁AGENT
- ▁USEFUL
- ▁BRIG
- ▁BOIL
- ▁ASSURED
- ▁REFLECTION
- ▁PINE
- ▁WAG
- ▁YOUNGER
- ▁BEARD
- ▁KINDNESS
- CTUALLY
- ▁ACTUAL
- ▁WEIGHT
- ▁LILY
- ▁IMPRESS
- ▁DESCRIBE
- ▁BEHELD
- ▁COMMUNITY
- ▁DESPERATE
- ▁DISPLAY
- ▁ENEMIES
- ▁MELANCHOLY
- ▁MIRROR
- ▁RECOMMEND
- ▁SPANISH
- ▁BLAME
- ▁VOLUME
- ▁SHOOT
- ▁COMBIN
- ▁SHAKING
- ▁SOUTHERN
- ▁MYSTERY
- ▁EVERYONE
- ▁COMMISSION
- ▁COMPOSED
- ▁UDO
- ▁IMAGE
- ▁DECEIV
- ▁FAILURE
- ▁PATTY
- ▁ALICE
- ▁FRAME
- ▁MODEST
- ▁MAGNIFICENT
- ▁BRANCHES
- ▁REIGN
- ▁RAG
- ▁PARISH
- ▁KATE
- ▁AMID
- ▁SLEEPING
- ▁ANNOUNCED
- ▁EAGERLY
- ▁WIRE
- ▁LAP
- ▁ARAB
- ▁EATING
- ▁RUM
- ▁CAREFUL
- ▁DISCUSS
- WORTH
- ▁DISTRICT
- ▁FOREHEAD
- ▁FRANCIS
- ▁INCIDENT
- ▁APPEAL
- ▁EMBARRASS
- ▁MAINTAIN
- ▁PRONOUNC
- ▁FURNISH
- ▁STRAIN
- ▁ELEMENT
- ▁SILK
- ▁FEAST
- ▁RECENT
- ▁DANCING
- ▁LODGE
- ▁ASHAMED
- ▁TRICK
- ▁BOBO
- ▁STUFF
- ▁ET
- ▁ASSERT
- ▁SANK
- ▁TREATMENT
- ECI
- ▁SWIM
- ▁BECOMING
- ▁SINGING
- ▁PLATE
- ▁SCATTERED
- ▁EXTREMELY
- ▁GRIM
- ▁SANG
- ▁FIGHTING
- ▁FACTOR
- ▁PAINFUL
- ▁HIDE
- ▁FUNN
- ▁AFTERWARD
- ▁FROG
- ▁VENTURE
- ▁DISAPPOINT
- ▁COMRADE
- ▁MONSIEUR
- ▁OBVIOUS
- ▁PASSENGER
- ▁PROFOUND
- ▁PUBLISH
- ▁ACCUSTOM
- ▁BLOOM
- ▁SMITH
- ▁RELATIVE
- ▁ACCUSE
- ▁MANIFEST
- ▁SOLID
- ▁MONSTER
- ▁MARIUS
- ▁CANDLE
- ▁PROCUR
- ▁INTERFERE
- ▁HOUSEHOLD
- ▁DEVELOPMENT
- ▁AGREEABLE
- ▁HALT
- ▁NECESSITY
- FOLD
- ▁CITIES
- ▁REGI
- ▁GLOOMY
- BBL
- ▁SEPARATED
- ▁CHEST
- ▁STRIP
- ▁SPAR
- ▁DUN
- ▁SETTLE
- ▁STARED
- ▁HANGING
- ▁FEATURES
- ▁PILE
- ▁ORIGIN
- ARIES
- ▁LION
- ▁ALI
- ▁ASTONISHMENT
- ▁COMPLIMENT
- ▁DELICATE
- ▁COUNSEL
- ▁FIFTH
- ▁SUPPRESS
- ▁BURDEN
- ▁COMPLEX
- ▁ADDITION
- ▁CRUSH
- ▁TWIST
- ▁PIANO
- ▁BRUSH
- ▁CHECK
- ▁ANNIE
- ▁SHELTER
- ▁IMPROV
- ▁WESTERN
- ▁LOCAL
- ▁APPLE
- ▁GREET
- ▁MASK
- ▁RUSSIAN
- ▁TOWER
- ▁CREW
- ▁TIP
- ▁WANDERING
- ▁READER
- ▁WANDERED
- ▁DESTROY
- ▁OBSERVE
- MORE
- ▁ESCAPED
- ▁PET
- ▁BUILD
- ▁REAR
- ▁DESTROYED
- HIN
- ▁OWE
- ▁RANG
- ▁TEAR
- ▁NED
- ▁OFFICER
- ▁TRAP
- ▁OCCUR
- ▁APPOINTED
- ▁ATMOSPHERE
- ▁CHOOSE
- ▁CONCLUSION
- ▁CULTIVAT
- ▁DESCRIPTION
- ▁ENORMOUS
- ▁EXHAUSTED
- ▁LANDSCAPE
- ▁NATASHA
- ▁PROSPECT
- ▁REFRESH
- ▁SPECIES
- ▁SURROUNDED
- ▁WEAPON
- ▁BLANK
- ▁DEFEND
- ▁EDITH
- ▁HORRIBL
- ▁BETRAY
- ▁FERKO
- ▁LABOUR
- ▁NEGRO
- ▁RESUMED
- ▁LEAF
- ▁MUSKET
- ▁INTENSE
- ▁MERCY
- ▁ADOPT
- ▁SCORE
- ▁DASH
- ▁LAWYER
- ▁SLOPE
- ▁CHUCK
- ▁ASSISTANCE
- ▁BROOK
- ▁BREAKING
- ▁ASSIST
- ▁GROAN
- ▁HELEN
- ▁BEHAV
- ▁MAIDEN
- ▁CRIS
- ▁SHOUTING
- ▁NAY
- ▁PIG
- ▁ACCORDINGLY
- ETTE
- ▁DESIR
- ▁RUB
- ▁GRU
- ▁PIT
- ▁HEAVI
- ▁OBTAINED
- ▁SPARE
- ▁BRANCH
- ▁COUNTER
- ▁APART
- ▁AMBITION
- ▁ASTONISHED
- ▁CORRESPOND
- ▁DRIVING
- ▁ENERGY
- ▁HISTORIAN
- ▁REVOLUTION
- ▁SWEEP
- ▁TREMBLING
- ▁CRAFT
- ▁FAMILIES
- ▁LITERATURE
- SBURG
- ▁FEMALE
- ▁TILNEY
- ▁GENEROUS
- ▁SUBMIT
- ▁INTELLECTUAL
- ▁ORCHARD
- ▁STORIES
- ▁DIANA
- ▁VEIN
- ▁TRIFL
- ▁TWIN
- ▁WORSHIP
- ▁MARBLE
- ▁GALLANT
- ▁SENSIBLE
- ▁NEAT
- ▁BROWNIE
- ▁JUNE
- ▁SHAW
- ▁WORST
- ▁USELESS
- ▁FISHING
- ▁CRYING
- ▁MAYBE
- ▁VARI
- ▁PRESERVE
- ▁VOL
- ▁EMPLOY
- ▁INTERRUPT
- ▁SLIGHTLY
- ▁ACCOMPLISHED
- NEY
- ▁STEAM
- ▁BALANC
- ▁LEANING
- ▁SIGHED
- ▁REFUSE
- ▁IMAGINED
- ▁DATE
- GROUND
- ▁ENTERTAIN
- ▁PERCEIVE
- ▁ABROAD
- ▁CHEESE
- ▁DESTRUCTION
- ▁ESSENTIAL
- ▁EXPEDITION
- ▁GRANDFATHER
- ▁INFINITE
- ▁LIBRARY
- ▁MULTITUDE
- ▁NEGLECT
- ▁SWALLOW
- ▁VILLEFORT
- ▁BELOVED
- ▁COMMITTEE
- ▁CONFIDENT
- ▁PURPLE
- ▁PURCHAS
- ▁SCRAP
- ▁SPOIL
- ▁LIKEWISE
- ▁EXTRA
- ▁STRAW
- ▁SALUT
- ▁SOURCE
- ▁HASTENED
- ▁RESENT
- ▁FLOCK
- ▁LOFT
- ▁FLO
- ▁CLO
- ▁CONVINCED
- ▁GOODNESS
- ▁HYPNOTIZ
- ▁SETTING
- ▁HAIL
- ▁PHI
- ▁GROVE
- ▁DISCOVERY
- ▁DAMP
- ▁WHISPER
- ▁LIFT
- ▁HOP
- ▁SUSPECTED
- ▁SCR
- OLI
- ▁FAC
- ▁BUSH
- ▁FOREVER
- ▁BARRICADE
- ▁CONSTITUTION
- ▁ENDEAVOR
- ▁ENTHUSIASM
- ▁EXECUTION
- ▁HYACINTH
- ▁PERCEVAL
- ▁PSYCHE
- ▁REPROACH
- ▁THIRTEEN
- ▁ABSORB
- ▁GRATITUDE
- ▁MERCER
- ▁REPUTATION
- ▁SCREAM
- ▁PUPIL
- ▁RETIRED
- ▁STEEP
- ▁SUMMIT
- ▁MISERABLE
- ▁STRICT
- ▁MINGLED
- ▁DEFEAT
- ▁REVEAL
- ▁LOVING
- ▁GOOSE
- ▁ECHO
- ▁AWAIT
- ▁MOOD
- ▁CRAWLEY
- ▁CELL
- ▁ENGAGEMENT
- ▁PRECED
- ▁SOMEONE
- ▁ARRANGEMENT
- ▁PICKET
- ▁GASP
- ▁HUMOR
- ▁INVITATION
- ▁JOB
- WITHSTAND
- ▁LAMENT
- ▁CLASSES
- ▁HUNGER
- ▁DISPOSED
- ▁STEAMER
- ▁FEARFUL
- ▁GER
- ▁FINAL
- ▁FLAG
- ▁JULY
- ▁DIG
- WORK
- ▁OPPOS
- ▁ANXIETY
- ▁AUDIENCE
- ▁BACHELOR
- ▁COLUMN
- ▁HANDKERCHIEF
- ▁IMPATIENT
- ▁JUDGMENT
- ▁KNIFE
- ▁SOVEREIGN
- ▁STRIKING
- ▁THOMPSON
- ▁EMPIRE
- ▁FULFIL
- ▁CONSULT
- ▁JENNY
- ▁THENARDIER
- ▁POYSER
- ▁FOURTEEN
- ▁JAPANESE
- ▁INDULG
- ▁MARTIAN
- ▁COUNTRIES
- ▁FETCH
- ▁CRITIC
- ▁ROBBER
- ▁CROOK
- ▁DEPARTURE
- ▁MABEL
- ▁PREACH
- ESCENT
- ▁WHIP
- ▁NAIL
- ▁DELIGHTFUL
- ▁DISCUSSION
- ▁SENTENCE
- ▁LANE
- ▁ENGINEER
- ▁ARRANGED
- MMY
- ▁LEST
- ▁RENT
- MMED
- ▁LIST
- ▁ROBE
- ▁MISSION
- ▁GRACEFUL
- ▁LIGHTN
- STONE
- COURT
- ▁CONCEPTION
- ▁CONTRACT
- ▁DROWN
- ▁EXPERIMENT
- ▁HITHERTO
- ▁PLAGUE
- ▁PORTHOS
- ▁SHRIEK
- ▁DETECT
- ▁ACCENT
- ▁ERECT
- ▁SAZEN
- ▁PROFIT
- ▁VIVID
- ▁SQUIRE
- ▁OPERATION
- ▁SMELL
- ▁SIMON
- ▁EXTENT
- ▁KEEN
- ▁EMERG
- ▁REVIV
- ▁REGIMENT
- ▁DISAPPOINTMENT
- ▁STOLE
- ▁DIVINE
- ▁GUILTY
- ▁COWARD
- ▁EXPECTATION
- ▁SIGNOR
- ▁MODE
- ▁CENTRE
- ▁FIL
- HOW
- ▁WEARI
- ▁TOTAL
- ▁VICTOR
- ▁GOVERN
- ▁RAISE
- ▁ABANDON
- ▁ABSURD
- ▁ASPECT
- ▁CRIMINAL
- ▁DEFINITE
- ▁DELIBERAT
- ▁FEATHER
- ▁FLORINA
- ▁MIDNIGHT
- ▁RICHMOND
- ▁SATISFY
- ▁SINGULAR
- ▁STEADILY
- ▁SUPREME
- ▁TIMBER
- ▁PSYCHOLOG
- ▁GESTURE
- ▁VALUABLE
- ▁INTERVAL
- ▁CONFUSION
- ▁FLUTTER
- ▁SACRED
- ▁DISEASE
- ▁UNDERTAKE
- ▁PENETRAT
- ▁MARVEL
- ▁NORTHERN
- ▁GRIEV
- ▁GENIUS
- ▁SADDLE
- ▁NOVEL
- ▁MISERY
- ▁CONVICTION
- ▁SINK
- ▁WAGON
- ▁ARISE
- ▁COMMENT
- ▁BARN
- UPON
- ▁FENCE
- ▁ASSOCIATION
- ▁BONES
- ▁IDLE
- ▁DOUBTFUL
- ▁PREPARATION
- IZZ
- ▁RAIS
- ▁BITTERLY
- ▁JOE
- ▁RELI
- ADI
- ▁METAL
- ▁EXACT
- ▁GLOOM
- FIELD
- ▁DANGLARS
- ▁DISGRACE
- ▁EXAMINATION
- ▁FASCINAT
- ▁GLITTER
- ▁INCREASING
- ▁MESSENGER
- ▁PATRIOT
- ▁PLATFORM
- ▁PROVISION
- ▁QUALITIES
- ▁SELECT
- ▁STEADY
- ▁POVERTY
- ▁POWDER
- ▁PROPHET
- ▁HOLLAND
- ▁TRUNK
- ▁VARIETY
- ▁PLANCHET
- ▁CONQUER
- ▁CONCEIVE
- ▁COMBAT
- ▁STOOP
- ▁SHIRT
- ▁GENERATION
- ▁COMMITTED
- ▁INSULT
- ▁CONFUSED
- ▁RADIAN
- ▁DEBT
- ▁IMITAT
- ▁DART
- ▁CAROLINE
- ▁SWAM
- ▁WREN
- ▁CHILDHOOD
- ▁BRAND
- ▁JOKE
- ▁FRIENDSHIP
- ▁DIRT
- ▁JOLL
- ▁BUSHES
- ▁MINK
- ▁ROUT
- ▁EQUALITY
- ▁HESITATED
- ▁BARK
- ▁ANTI
- ▁STATEMENT
- PHER
- ▁SUNK
- ▁DAT
- ▁BACKWARD
- ▁SUSPECT
- ▁OBJECTION
- ▁RAP
- ▁CHIN
- ▁MATE
- ▁REDUC
- ▁GREGG
- ▁ACCOMPANY
- ▁ANYWHERE
- ▁BENEFIT
- ▁CLERK
- ▁EXPENSE
- ▁FETNAH
- ▁INTERPRET
- ▁LUKASHKA
- ▁NUMEROUS
- ▁SURGEON
- ▁PUZZL
- ▁RESCUE
- ▁GRATEFUL
- ▁APPROV
- ▁RIVAL
- ▁NIECE
- ▁FLOOD
- ▁VANISHED
- ▁ERROR
- ▁BLAZ
- ▁TUMBL
- ▁WENDY
- ▁PERSIST
- ▁CONSOL
- ▁SOAP
- ▁HUMOUR
- ▁FITTED
- ▁HOUSEKEEPER
- ▁ENABL
- ▁OCCASIONALLY
- ▁HATRED
- ▁SWELL
- ▁WORRY
- ▁RUST
- ▁PURSUIT
- ▁INTIMATE
- ▁SEAL
- ▁COLLECTION
- ▁TREMBLED
- ▁DENY
- ▁HUMANITY
- ▁FATAL
- ▁COCK
- ▁DRIVER
- ▁HOPELESS
- ▁MISTAKEN
- ▁LUC
- ▁ACCOMPLISH
- ▁COAL
- ▁ACCORD
- ▁PURSE
- ▁SEPARATE
- ▁ARRIVE
- ▁SMOK
- ▁MADAM
- ▁ASSOCIAT
- ▁INSTRUCT
- ▁CELEBR
- ▁CHANNEL
- ▁CIVILIZATION
- ▁DOCTRINE
- ▁ENDEAVOUR
- ▁GLACIER
- ▁INTELLIGENT
- ▁INVOLVE
- ▁LEATHER
- ▁MUTTERED
- ▁OLENIN
- ▁PENCROFT
- ▁PERPLEX
- ▁SPECTATOR
- ▁UNIVERSITY
- ▁ATTAIN
- ▁INEVITABL
- ▁YONDER
- ▁ENCHANT
- ▁REPAIR
- ▁CURRENT
- ▁ASCEND
- ▁CREEK
- ▁SPARKL
- ▁RUE
- ▁BEAVER
- ▁INFANT
- ▁CONTINUALLY
- ▁CLASP
- ▁IRISH
- ▁ROLLIN
- ▁PUNISHMENT
- ▁LUNCH
- ▁AGONY
- ▁RUDE
- ▁DRAGG
- ▁INQUIRI
- ▁SEX
- ▁TERRIFI
- ▁ROBIN
- ▁PROFESSIONAL
- ▁SPUR
- ▁GRAIN
- ▁VINE
- ▁PENN
- ▁ROC
- ▁CHASE
- ▁INFORM
- ▁WRITER
- ▁AVO
- ▁TAP
- ▁CREAT
- ▁WHIL
- ▁BARR
- ▁ASSURE
- ▁CIRCUMSTANCE
- ▁OIL
- ▁ROUSE
- ▁COLUMB
- ▁CUNNING
- ▁DOMESTIC
- ▁GLORIOUS
- ▁INDIGNATION
- ▁PRECISELY
- ▁PRUDENCE
- ▁RAILROAD
- ▁SATURDAY
- ▁UTMOST
- ▁VIOLENCE
- ▁WHIRL
- ▁CALCULAT
- ▁OVERWHELM
- ▁PERPETUAL
- ▁QUARLES
- ▁SLENDER
- ▁TELEGRAPH
- ▁ALOUD
- ▁OPPRESS
- ▁CROPPER
- ▁CANADIAN
- ▁HERBERT
- ▁TIMID
- ▁SUPPLY
- ▁STROLL
- ▁CREEP
- ▁OATH
- ▁DUSK
- ▁EXCESS
- ▁HUMBLE
- ▁FURIOUS
- ▁RIDGE
- ▁BULLET
- ▁PONY
- ▁STATU
- ▁ENJOYMENT
- ▁CONWAY
- ▁DIFFICULTIES
- ▁PATCH
- ▁JOYCE
- ▁CLOCK
- ▁RESTORED
- ▁ARGU
- ▁WIG
- ▁CHATT
- ▁PLAC
- ▁REMOVE
- ▁TORN
- ▁DISAPPEAR
- TIME
- WELL
- ▁RECOGNIZE
- ▁FISHE
- ▁DECLARE
- ISTIC
- ▁AUTHOR
- ▁WHISK
- ▁COFFEE
- ▁COMPREHEND
- ▁DISGUISE
- ▁ELZEVIR
- ▁ENTERPRISE
- ▁HOLIDAY
- ▁HORIZON
- ▁IGNORANT
- ▁INTERVIEW
- ▁OLIVER
- ▁RONICKY
- ▁CAPACITY
- ▁DISPOSITION
- ▁EXTERNAL
- ▁OPPOSITION
- ▁REPUBLIC
- ▁WHEAT
- ▁CORPSE
- ▁DARLING
- ▁THRILL
- ▁INHABITANTS
- ▁ORNAMENT
- ▁SHIFT
- ▁RECOGNISE
- ▁SHIVER
- ▁BOAST
- ▁HINT
- ▁BOSTON
- ▁MULTI
- IFYING
- ▁STEAL
- ▁INSTRUCTIONS
- ▁ELECTRIC
- ▁SWING
- ▁SOOTH
- ▁SCALE
- ▁MORLAND
- ▁DISLIKE
- ▁FLATTER
- ▁COACH
- ▁LEIF
- ▁STAMP
- ▁ANYHOW
- ▁MOTIONLESS
- ▁ANDREA
- ▁LOSING
- ▁PAUL
- ▁CAROL
- ▁ADVANC
- ▁IMAGIN
- ▁CENTER
- ▁JAR
- ▁SUCCEED
- ▁DISMISS
- CTOR
- ▁RECEIV
- ▁DRAG
- ▁INTENT
- ▁BARBAR
- ▁PUNISH
- ▁ABRUPTLY
- ▁BERNARD
- ▁DECISION
- ▁INDEPENDENT
- ▁PROVINCE
- ▁SLEEVE
- ▁TREMENDOUS
- ▁UNPLEASANT
- ▁LEISURE
- ▁THRONG
- ▁THUMB
- ▁BANNER
- ▁CONTRADICT
- ▁RESTRAIN
- ▁DIVIDED
- ▁WRAPPED
- ▁HAUNT
- ▁SNEER
- CHESTER
- ▁JULIA
- ▁MILD
- ▁CONTACT
- ▁MEANTIME
- ▁NEEDLE
- ▁BLOT
- ▁BARREL
- ▁ISABELLA
- ▁THEATRE
- ▁ESTABLISHMENT
- ▁MARKET
- ▁CHINA
- ▁FORBID
- ▁PERISH
- ▁DOORWAY
- ▁CARLING
- ▁PERIL
- ▁PRIZE
- ▁HATCH
- ▁CURL
- ▁REFER
- ▁DEVOT
- EMBER
- MONT
- ▁CANOE
- ▁PROFESSION
- ▁CONVICT
- ▁CRAWL
- ▁ACTIVITY
- ▁BEWILDER
- ▁BREEZE
- ▁CONTEMPLAT
- ▁DISGUST
- ▁FATIGUE
- ▁MERRICK
- ▁PRAIRIE
- ▁REFORM
- ▁SPECTACLE
- ▁STUDENT
- ▁TUMULT
- ▁UNIFORM
- ▁VIGOROUS
- ▁CONDEMN
- ▁GENUINE
- ▁THOMAS
- ▁ARROW
- ▁PILLOW
- ▁FEEBLE
- ▁RALPH
- ▁SCHEME
- ▁COLLAR
- ▁JUSTINIAN
- ▁NERVE
- ▁OYSTER
- ▁BENNET
- ▁DUTIES
- ▁BINGLEY
- ▁CHRISTMAS
- ▁CONVEY
- ▁DESPIS
- ▁RATTL
- ▁GARMENTS
- ▁GOWN
- ▁BERYL
- ▁BARRIER
- ▁CHARACTERISTIC
- ▁MEDITAT
- ▁DISCOURSE
- ▁STAFF
- ▁KARA
- ▁MONTE
- ▁READILY
- ▁VENTUR
- ▁HENCE
- ▁ROPE
- ▁CRIES
- ▁ANGLE
- ▁RESPECTABLE
- ▁MOAN
- ▁OUTLINE
- BORN
- ▁FIX
- ▁INTEND
- LIA
- ▁CHILL
- ▁CREP
- ▁CHOSE
- ▁SPECULAT
- ▁ATTRIBUT
- ▁BUFFALO
- ▁ENTREAT
- ▁ENVELOP
- ▁FREDERICK
- ▁IMPATIENCE
- ▁INDIFFERENCE
- ▁INDUSTRY
- ▁INSTITUTION
- ▁LYNDE
- ▁RETAIN
- ▁TROUTINA
- ▁UNCOMFORTABL
- ▁VENGEANCE
- ▁JENKS
- ▁CONGRESS
- ▁SMART
- ▁THITHER
- ▁DISAGREE
- ▁IMPROVEMENT
- ▁PISTOL
- ▁GOSSIP
- ▁ETERNAL
- ▁BELIEF
- ▁SLEDGE
- ▁AROUSED
- ▁ORANGE
- ▁FASTENED
- ▁MONKEY
- ▁WITHDREW
- ▁OFFEND
- ▁PIERC
- ▁MOONLIGHT
- ▁OARS
- ▁GROOM
- ▁FIDDLER
- ▁BARBARA
- SHIRE
- ▁ATTENDANT
- ▁DIVERS
- ▁DUCK
- ▁PROPOSAL
- ▁GROWTH
- ▁CURATE
- ▁STEWAR
- ▁MOCK
- ▁SUCCESSION
- ▁CREATION
- ▁PARTIAL
- ▁SWU
- ▁FROST
- ▁EIGHTH
- ▁AWE
- ▁PERCH
- ▁LACE
- SPOON
- ▁ARRANGE
- SERIES
- ▁FOG
- ▁SCU
- ▁ABRAHAM
- ▁ADMIRAL
- ▁BARBICANE
- ▁CAMPAIGN
- ▁CONSEQUENTLY
- ▁CULTURE
- ▁GRAMMONT
- ▁GWYNPLAINE
- ▁HAPPILY
- ▁HOOPDRIVER
- ▁INDEPENDENCE
- ▁LEOPOLD
- ▁MISCHIEF
- ▁MONTGOMERY
- ▁NECESSARILY
- ▁PSYCHIC
- ▁RABBIT
- ▁REFUGE
- ▁RESPONSIBILIT
- ▁SENATOR
- ▁UNCERTAIN
- ▁MENSTRUA
- ▁FANNY
- ▁SUBSTANCE
- ▁APRIL
- ▁ELBOW
- ▁QUALITY
- ▁BORDER
- ▁BRUTAL
- ▁CARPET
- ▁SOLITAR
- ▁FROWN
- ▁SCENT
- ▁ANNOY
- ▁NAKED
- ▁BOSOM
- ▁CONSUM
- ▁TIGER
- ▁ITALIAN
- ▁PARSON
- ▁DECLIN
- ▁NEIGHBORHOOD
- ▁GREGGORY
- ▁EXCEED
- ▁SILLY
- ▁ICELAND
- ▁HIDEOUS
- ▁STRU
- ▁ALTERNAT
- ▁CABINET
- ▁ABILITY
- ▁BEECH
- ▁SECRETARY
- ▁CONTEST
- ▁MONK
- ▁PADD
- ▁EVA
- ▁CREST
- ▁FINISH
- ▁APPARENT
- ▁MIX
- ▁SLIP
- ▁LUXURI
- ▁AUTUMN
- ▁CIRCULAR
- ▁COMPOSITION
- ▁DISPLEAS
- ▁EXCELLENC
- ▁FURNITURE
- ▁GRADUATE
- ▁INDIFFERENT
- ▁JOSEPH
- ▁OCCUPATION
- ▁POSSIBILITY
- ▁RENEWED
- ▁RESPONDED
- ▁PREVAIL
- ▁HOARSE
- ▁PRACTIS
- ▁FAREWELL
- ▁JULIET
- ▁OVERHEAD
- ▁THREAD
- ▁APPLICATION
- ▁SOLITUDE
- ▁ADAPT
- ▁FALK
- ▁LARK
- ▁COARSE
- ▁MANKIND
- ▁KICK
- ▁BATTER
- ▁SOLICIT
- ▁RESIGN
- ▁MOTOR
- ▁STEEL
- ▁CONTRIV
- ▁AUTHORITIES
- ▁HARSH
- ▁FAVORITE
- ▁TALENT
- ▁FLEECE
- ▁AGITATION
- ▁ABBE
- ▁STUCK
- ▁HEDGE
- ▁BIBLE
- ▁RECOLLECTION
- ▁PARTNER
- ▁DAMON
- ▁SHINE
- ▁HOOK
- ▁CONFESSION
- ▁ASSENT
- ▁ELDE
- ▁BIGGE
- ▁PEACEFUL
- SCRIBED
- ▁WEIGH
- CARLET
- ▁DECIDE
- ▁RECOLLECT
- ▁BOHEMIA
- ▁CALIFORNIA
- ▁CONSTRUCT
- ▁DEMONSTRAT
- ▁DISTRIBUT
- ▁FRIGHTFUL
- ▁GNOME
- ▁IGNORANCE
- ▁JANUARY
- ▁JULIUS
- ▁MEMORIES
- ▁OCCUPY
- ▁PHRASE
- ▁WHIRLWIND
- ▁WILMINGTON
- ▁CARLINI
- ▁CHAUVELIN
- ▁ESTEEM
- ▁GENZABURO
- ▁GLOBE
- ▁LECOQ
- ▁MARGARET
- ▁MONARCH
- ▁NAPOLEON
- ▁SCORN
- ▁STAGGER
- ▁SUSTAIN
- ▁TRADITION
- ▁ADJUST
- ▁FROZEN
- ▁IMPRISON
- ▁LANTERN
- ▁MICHEL
- ▁STOMACH
- ▁TORRENT
- ▁WITHDRAW
- ▁FRANZ
- ▁POISON
- ▁SURVEY
- ▁BRITISH
- ▁ELEVAT
- ▁AWOKE
- ▁ESTHER
- ▁INHERIT
- ▁TRAVERS
- ▁STOPPING
- ▁IRELAND
- ▁COMPARATIVE
- ▁SOBB
- ▁FAVOURITE
- ▁CANVAS
- ▁CLOAK
- ▁GLAR
- ▁ASSISTANT
- ▁DAMAGE
- ▁PEAK
- ▁DISTINCTION
- FARE
- ▁DOLLAR
- ▁BEGGAR
- LUSIVE
- ▁MODEL
- ▁SECUR
- ▁DISPOS
- ▁SLID
- ▁PEA
- ▁SPEEDI
- HOLD
- ▁SNAP
- ▁CIGAR
- ▁AFFLICT
- ▁AMAZEMENT
- ▁LAUNCELOT
- ▁LEAGUE
- ▁MARIPOSA
- ▁POPULATION
- ▁UNEASY
- ▁BLOSSOM
- ▁CATERPILLAR
- ▁INCLINATION
- ▁SUSPEND
- ▁SYNDIC
- ▁TAYLOR
- ▁WILSON
- ▁CONTRAST
- ▁PORTRAIT
- ▁CORONER
- ▁GREEK
- ▁BUNDLE
- ▁BLEW
- ▁THORPE
- ▁ORPHAN
- ▁MUSCLE
- ▁DEAF
- ▁SURVIV
- ▁EXCEEDINGLY
- ▁TENDENC
- ▁ISRAEL
- ▁QUANTIT
- ▁PENSION
- ▁DRIED
- TEXT
- ▁REFERENCE
- ▁REPOSE
- ▁FOLLY
- ▁REPLACE
- ▁TERR
- ▁ANKLE
- ▁SUNLIGHT
- ▁SECURITY
- ▁SHOV
- ▁RAW
- CULAR
- ▁JACKET
- ▁TUNE
- ▁HOBB
- ▁MARTIN
- DUCED
- ▁FIST
- ▁BEGG
- ▁CHOK
- ▁INQUIRE
- ▁INTELLECT
- ▁AMUSEMENT
- ▁APPROPRIATE
- ▁CONGRATULAT
- ▁CONVENTION
- ▁DISCOURAG
- ▁EXQUISITE
- ▁FOUNTAIN
- ▁JUNIOR
- ▁NONSENSE
- ▁OBSTACLE
- ▁SPECIMEN
- ▁SWEAR
- ▁TRANQUIL
- ▁VEHICLE
- ▁WISDOM
- ▁ASCERTAIN
- ▁CAUTIOUS
- ▁CENTURIES
- ▁CORRUPT
- ▁EXPLOR
- ▁TURKEY
- ▁BARGAIN
- ▁CONFOUND
- ▁FUNCTION
- ▁GRACIOUS
- ▁MONICA
- ▁ILLUSTRAT
- ▁CRUMB
- ▁REMEDY
- ▁REMOTE
- ▁REVENGE
- ▁BABYLON
- ▁CAUTION
- ▁INTERIOR
- ▁CRISTEL
- ▁BRAZ
- ▁THIRST
- ▁PROBABLE
- ▁HARMONY
- ▁CHARITY
- ▁DECAY
- ▁COLONI
- ▁AVAIL
- ▁REPULS
- ▁ABSENT
- ▁PULSE
- ▁PRESUM
- ▁CRANE
- ▁NEIGHBOURHOOD
- ▁SUNSET
- ▁CANNON
- ▁GRAPE
- ▁SOFA
- ▁DRANK
- MINOUS
- ▁DECLARATION
- ▁CLOSING
- ▁MEEK
- ▁STARV
- ▁BUNCH
- ▁PERFORMANCE
- ▁ENTERTAINMENT
- ▁STRIV
- ▁EMILY
- ▁VALET
- MPOSED
- ▁INTIMA
- ▁POLISH
- ▁HIRE
- POST
- ▁TREMBLE
- ▁CEASE
- ▁VIRGIN
- ▁RUSSIA
- COURSE
- ▁EDUCAT
- BOUND
- ▁INHABIT
- ▁SUPERINTEND
- ▁BISCUIT
- ▁CHICAGO
- ▁CHOKICHI
- ▁CONFLICT
- ▁ENCLOS
- ▁EXCLUSION
- ▁EXECUTIVE
- ▁GRANDMOTHER
- ▁HEADQUARTERS
- ▁INFERIOR
- ▁INVISIBLE
- ▁MUTUAL
- ▁OPPONENT
- ▁SENSITIVE
- ▁STUDIED
- ▁TEMPORARY
- ▁UNWILLING
- ▁PERMANENT
- ▁BEDROOM
- ▁NOVEMBER
- ▁COMPLICAT
- ▁DEVOUR
- ▁SCRAMBL
- ▁SECTION
- ▁PROPOSITION
- ▁DEPRIV
- ▁RYNCH
- ▁PLEAD
- ▁TORTURE
- ▁SCOUT
- ▁PILOT
- ▁CHERISH
- ▁SPEAR
- ▁SUGAR
- ▁JASPER
- ▁STRAY
- ▁RIFLE
- ▁NORMAL
- ▁JERK
- ▁HONEY
- ▁AWAKENED
- ▁QUIVER
- ▁PYE
- ▁APPLY
- LICK
- JA
- ▁ANNOUNC
- FORE
- ▁ENGINE
- ▁HESITATE
- ▁PROVIDE
- ▁REALIZE
- ▁SEIZE
- ▁RESTORE
- MOUTH
- FOOT
- ▁DIFFER
- ▁ULTIMATE
- ▁ABUNDANCE
- ▁APPRECIATE
- ▁APPREHENSION
- ▁AVENUE
- ▁AWKWARD
- ▁CETERA
- ▁CHIMNEY
- ▁CLUTCH
- ▁CONVENIENT
- ▁CORRIDOR
- ▁DISTRACT
- ▁ELEGANT
- ▁ELSEWHERE
- ▁ENTHUSIASTIC
- ▁EXECUTE
- ▁EXTREMIT
- ▁JERUSALEM
- ▁MIRACLE
- ▁MONSTROUS
- ▁OBEDIENCE
- ▁OBSCURE
- ▁PHENOMENA
- ▁RESIDENCE
- ▁RESOURCE
- ▁REVOLT
- ▁SCIENTIFIC
- ▁SHIELD
- ▁SIMPSON
- ▁UNIVERSE
- VOLUNTARY
- ▁ATTENTIVE
- ▁BRENDA
- ▁DEPOSIT
- ▁MAXIM
- ▁REJECT
- ▁STIRRED
- ▁DISORDER
- ▁SERENE
- ▁TOBACCO
- ▁MILTON
- ▁BALLOON
- ▁STEPHEN
- ▁STRAIT
- ▁CHINESE
- ▁COURTEOUS
- ▁RELEASE
- ▁RECESS
- ▁COTTON
- ▁STUMP
- ▁TANK
- ▁PROMOTE
- ▁DERIVE
- ▁LOYAL
- ▁GRANIT
- ▁DISMAL
- ▁CATTLE
- ▁DOONE
- ▁CUPID
- DIGNIFIED
- ▁RIPE
- ▁EXILE
- ▁ANTIQU
- UMINAT
- ▁SUPPOS
- ▁WRETCH
- ▁IDENTI
- ▁EASI
- ▁SERV
- ▁QUEST
- TOWN
- ▁ACHIEVEMENT
- ▁APPETITE
- ▁BUCCANEER
- ▁COMMENCED
- ▁DELAWARE
- ▁DISCERN
- ▁IMMORTAL
- ▁INDIGNANT
- ▁JOSIANA
- ▁MECHANICAL
- ▁MUSKRAT
- ▁REVIEW
- ▁ROBARTS
- ▁SIGNIFICANT
- ▁SUBSEQUENT
- ▁YOURSELVES
- ▁ANGRILY
- ▁BORROW
- ▁SUBLIME
- ▁AFRICA
- ▁CHICKEN
- ▁DEGRAD
- ▁GEORGI
- ▁HUMILIAT
- ▁LODGING
- ▁REDCOAT
- ▁VIOLET
- ▁HOPKINS
- ▁RAWDON
- ▁PRICK
- ▁WHALE
- ▁FUNERAL
- ▁GUINEA
- ▁DISMAY
- ▁PORCH
- ▁HARVEST
- ▁PARCEL
- ▁SUBDU
- ▁SYRIA
- ▁PANIC
- ▁BOUGHS
- ▁CIGARETTE
- ▁CHRON
- ▁INQUIRY
- ▁CRYSTAL
- ▁SPELL
- ▁PLUCK
- ▁PATTERN
- ▁DARING
- ▁CRITICISM
- ▁DAINT
- ▁DISTURBANCE
- ▁BUTCHER
- ▁LITERA
- ▁ABUSE
- IXTURE
- ▁ANIMAT
- ▁WRIT
- ▁BELIEV
- ▁INDUCE
- COMING
- ▁DRAMA
- ▁AGITAT
- SHAW
- ▁IMPERFECT
- ▁MANUFACTURE
- ▁AFFIRM
- ▁ANGUISH
- ▁ARTIFICIAL
- ▁BIBBS
- ▁CHARLOTTE
- ▁CIRCUS
- ▁CONNISTON
- ▁CONSTITUTE
- ▁DAZZL
- ▁DEFECT
- ▁DISCHARG
- ▁ESCORT
- ▁EXAGGERAT
- ▁GWENDOLEN
- ▁IRRESISTIBL
- ▁PHILOSOPHY
- ▁PHOTOGRAPH
- ▁PILGRIM
- ▁PLEASING
- ▁QUIXOTE
- ▁RESPONSE
- ▁SCRATCH
- ▁SERGEANT
- ▁SHERIFF
- ▁SHUDDER
- ▁STRUCTURE
- ▁SUFFRAGE
- ▁SURRENDER
- ▁SWORE
- ▁VILLAIN
- ▁HESITATING
- ▁FLORENCE
- ▁IRRITAT
- ▁RIGID
- ▁SINISTER
- ▁STUDIO
- ▁RAFT
- ▁CHAMPION
- ▁PAVEMENT
- ▁WOLF
- ▁DEVICE
- ▁WRECK
- ▁HESITATION
- ▁LAZY
- ▁ADJO
- ▁DECENT
- ▁INTERVEN
- ▁WOOL
- ▁ILLUSION
- ▁HAWK
- ▁IMPART
- ▁LUNGS
- ▁WINNING
- ▁VITAL
- ▁CONSPI
- ▁SUBTLE
- ▁CONSTANC
- ▁HURL
- ▁AMIABL
- ▁FOLK
- GGY
- ▁NECESSIT
- ▁PROFESS
- WASH
- ▁ADMIRING
- ▁AMBITIOUS
- ▁ANTHONY
- ▁CEREMONY
- ▁CONTRIBUTE
- ▁CRAGGS
- ▁DETAIN
- ▁DISCLOS
- ▁DWELT
- ▁EGYPT
- ▁FELIX
- ▁JOURNAL
- ▁KWAIRYO
- ▁LIBERAL
- ▁LUMBER
- ▁OCTOBER
- ▁ORGANIZATION
- ▁POPULACE
- ▁PRECAUTION
- ▁PREJUDICE
- ▁PROCLAIM
- ▁PROPRIETOR
- ▁RESPONSIBLE
- ▁RHYTHM
- ▁RIDICULOUS
- ▁SCHOLAR
- ▁SQUEEZ
- ▁SUBSTITUTE
- ▁SURPASS
- ▁THRESHOLD
- ▁WHARTON
- ▁FLICKER
- ▁AMAZED
- ▁BRONZE
- ▁COSSACK
- ▁SPILETT
- ▁CASUAL
- ▁DARCY
- ▁PARLOUR
- ▁SEXUAL
- ▁INSECT
- ▁NATHAN
- ▁EMINENT
- ▁PENCIL
- ▁PETITION
- ▁ROTTEN
- ▁VIGIL
- ▁CAESAR
- ▁EAGLE
- ▁TREAD
- ▁REACTION
- ▁TACIT
- ▁PARLOR
- ▁SPAIN
- ▁WILDERNESS
- ▁DICTAT
- ▁GRATIFY
- ▁STOVE
- ▁SKIRT
- ▁UTILI
- ▁CONCERT
- ▁GORGE
- ▁DECORAT
- ▁LATIN
- ▁ANCHOR
- ▁KNOT
- ▁MONDAY
- ▁GABLES
- ▁TOLERABL
- ▁ROGER
- BERRIES
- ▁INVAD
- IMMER
- OMETER
- ▁PRODUC
- OBIL
- ▁PERMISSI
- FICIENCY
- ▁WANDER
- RREL
- PIECE
- HORN
- ▁COMMIT
- ▁ACCUMULAT
- ▁JAPAN
- ▁ABUNDANT
- ▁ACADEMY
- ▁ALBERT
- ▁BANQUET
- ▁DELICIOUS
- ▁DOCUMENT
- ▁EXCLAMATION
- ▁FEBRUARY
- ▁GROTESQUE
- ▁HEATHERSTONE
- ▁HUMPHREY
- ▁HURSTWOOD
- ▁MOHAMMED
- ▁MOSCOW
- ▁NICHOLAS
- ▁OBSTINATE
- ▁PHANTOM
- ▁PHILOSOPHER
- ▁RECEPTION
- ▁SPANIARD
- ▁SWOLLEN
- ▁TELEPHONE
- ▁TRIBUTE
- ▁TUNNEL
- ▁UNREASONABL
- ▁WIGWAM
- ▁BUTTERFLY
- ▁COLLINS
- ▁DISPATCH
- ▁EDITOR
- ▁CONTINENT
- ▁DIMINISH
- ▁HORRID
- ▁KEATS
- ▁PROVIDENCE
- ▁BEHALF
- ▁CHARLEY
- ▁DRAKE
- ▁LAUNCH
- ▁SALOON
- ▁GIGANT
- ▁DISPUTE
- ▁HYSTERI
- ▁DEFENCE
- ▁SCREEN
- ▁VAULT
- ▁NINTH
- ▁HARBOR
- ▁FLANK
- ▁SPECK
- ▁UPRIGHT
- ▁KEMP
- ▁CANADA
- ▁STALK
- ▁OWL
- ▁BRUTE
- ▁FERRIS
- ▁DECREE
- ▁HABITUAL
- ▁BRISK
- ▁INSPIRE
- ▁HUSH
- ▁CROUCH
- ▁FRIDAY
- ▁MOUNTAINEER
- ▁HISTORIC
- ▁BATES
- ▁RUSK
- ▁SEMI
- DICTION
- ▁BUSI
- ▁REMOV
- MMI
- ▁SUFFIC
- ▁FLEE
- ▁LOUIS
- NLEA
- ▁IMPORT
- OLOGY
- ▁CLERGY
- ▁ADVERTISEMENT
- ▁BENEVOLEN
- ▁BORODINO
- ▁CATHOLIC
- ▁COMMERCIAL
- ▁CONJECTURE
- ▁CURTAIN
- ▁CUTHBERT
- ▁DEMOCRACY
- ▁GUARANTEE
- ▁HYPNOSIS
- ▁INDEFINITE
- ▁INVESTIGATION
- ▁IRREGULAR
- ▁KOYO
- ▁MERRIWIG
- ▁MIRANDA
- ▁NICHOLL
- ▁ONLOOKER
- ▁PERSECUT
- ▁RECOGNITION
- ▁REJOICE
- ▁REMEMBRANCE
- ▁REVELATION
- ▁SCOLD
- ▁SENIOR
- ▁SQUIRREL
- ▁SYMPATHETIC
- ▁TEMPEST
- ▁TREACHER
- ▁UNDERNEATH
- ▁UNEASINESS
- ▁UNNECESSARY
- ▁UPSTAIRS
- ▁VEXATION
- ▁ACCESS
- ▁CHEAP
- ▁ESTIMATE
- ▁HAZARD
- ▁HORSEBACK
- ▁PLUNDER
- ▁RASCAL
- ▁ROSTOV
- ▁ACCUR
- ▁GRAVITY
- ▁SITUATED
- ▁INVARIABL
- ▁PLENTIFUL
- ▁SPENCER
- ▁WALLACE
- ▁POLICY
- ▁WARRANT
- ▁ENVY
- ▁LAMB
- ▁EXTRACT
- ▁CORRAL
- ▁PANEL
- ▁LINK
- ▁LILIES
- ▁BECKON
- ▁SENOR
- ▁BORG
- ▁DEBATE
- ▁STEER
- COGNI
- COMB
- ▁SETTL
- ▁VENERA
- ▁FEATURE
- ▁TERRIBL
- CAPABLE
- OLOGICAL
- ▁INCESSANT
- ▁RESOLUTE
- SHAUGHNESSY
- ▁ABOLITION
- ▁ASSASSIN
- ▁BEHAVIOUR
- ▁BLUNT
- ▁COMMERCE
- ▁CONSTANTINOPLE
- ▁CRICKET
- ▁DISCIPLINE
- ▁DROUET
- ▁DWARF
- ▁INJUSTICE
- ▁LUXURY
- ▁MANUSCRIPT
- ▁MISUNDERSTAND
- ▁POLITICIAN
- ▁REDOUBT
- ▁SALVATION
- ▁SERMON
- ▁STRUGGLING
- ▁SURPRISING
- ▁TRIGGER
- ▁TUESDAY
- ▁TWILIGHT
- ▁UNDOUBTEDLY
- ▁VEGETABLE
- ▁VULGAR
- ▁WAISTCOAT
- ▁WRINKLE
- ▁ALEXANDER
- ▁CEILING
- ▁ECONOMIC
- ▁EVERLASTING
- ▁INFLICT
- ▁LEVISON
- ▁LOBSTER
- ▁OVERFLOW
- ▁SNATCH
- ▁TRAGEDY
- ▁DEASEY
- ▁ENLIGHTEN
- ▁FRIGATE
- ▁INSPECT
- ▁MARVELLOUS
- ▁ATLANTIC
- ▁LUFTON
- ▁BLADE
- ▁CRASH
- ▁SLAUGHTER
- ▁ANNUAL
- ▁CONFERENCE
- ▁TWIG
- ▁REASSUR
- ▁UNIQUE
- ▁WRATH
- ▁CRADLE
- ▁HULLO
- ▁LIQUID
- ▁MIRTH
- ▁EXPERT
- ▁HARVEY
- ▁RESTORATION
- ▁PRETTI
- ▁APOLOGY
- ▁SLAIN
- ▁BARBER
- ▁UPROAR
- ▁SCANT
- ▁BADGER
- ▁GROCER
- ▁ACRES
- ▁BRIDLE
- ▁SPECIFI
- ▁TANGLE
- ▁FERTIL
- ▁PATRON
- WIXT
- LAMOUR
- ▁DARN
- ▁POPE
- ▁PERCEIV
- ▁CONCLUDE
- ▁SIMPL
- ▁GUILT
- ▁CARRIE
- EFFICIENT
- SGIVING
- ▁APPOINTMENT
- ▁APPRECIATION
- ▁CARTRIDGE
- ▁CHALLENGE
- ▁CRAYFISH
- ▁CRIMSON
- ▁CUCUMETTO
- ▁ENERGETIC
- ▁EPOCH
- ▁EXAMINING
- ▁EXTENSIVE
- ▁EXTINGUISH
- ▁GLOODY
- ▁INSIGNIFICANT
- ▁LANDLORD
- ▁LANGUID
- ▁LEGISLATURE
- ▁MAJESTIC
- ▁PACIFIC
- ▁PASTRINI
- ▁PHRONSIE
- ▁RECONCIL
- ▁SIMULTANEOUS
- ▁SKELETON
- ▁SKETCH
- ▁TRANSFORM
- ▁UNJUST
- ▁VEXED
- ▁ASYLUM
- ▁CLUSTER
- ▁ERRAND
- ▁EXPEND
- ▁NEGATIVE
- ▁NORHALA
- ▁SCANDAL
- ▁STIMULAT
- ▁SWEAT
- ▁COMPOUND
- ▁DECEMBER
- ▁EXPAND
- ▁PROLONG
- ▁PURITAN
- ▁CONQUEST
- ▁MAGUA
- ▁SANCHO
- ▁TRENCH
- ▁ENTITLE
- ▁PEPPER
- ▁DISASTER
- ▁REGAIN
- ▁SHREWD
- ▁SULLEN
- ▁CLAVIER
- ▁COLOSS
- ▁SHILLING
- ▁ETHEL
- ▁MYSTERIES
- ▁BULK
- ▁GRANDEUR
- ▁AGNES
- ▁CONVERT
- ▁WRIST
- ▁GLID
- ▁TERRACE
- ▁SONYA
- ▁DANTES
- ▁MOULD
- ▁MAGNET
- ▁PLOT
- RANK
- ▁CAVIT
- ▁SUBSID
- ▁SLAP
- TURNED
- ▁THREAT
- BREAK
- ▁ANCESTORS
- ▁ANTICIPATED
- ▁APPLAUSE
- ▁ASSAULT
- ▁ATTORNEY
- ▁AUTOMATIC
- ▁CARAVAN
- ▁CATASTROPHE
- ▁CAVALCANTI
- ▁CROMWELL
- ▁ENVOY
- ▁EXHAUSTION
- ▁FIEND
- ▁GENEROSITY
- ▁GIMBLET
- ▁HARDQUANONNE
- ▁HOUARN
- ▁INJURY
- ▁MACKINSON
- ▁OGLETHORPE
- ▁PETTICOAT
- ▁RASPBERR
- ▁REHNHJELM
- ▁REJOICING
- ▁REMNANT
- ▁SCOTLAND
- ▁SHRINK
- ▁STANDPOINT
- ▁TESTIMONY
- ▁THEREAFTER
- ▁THIRTIETH
- ▁TWENTIETH
- ▁TYRANT
- ▁VENTNOR
- ▁VETERAN
- ▁WHITTAKER
- ▁ZVERKOV
- ▁ARCHITECTUR
- ▁BLUNDER
- ▁DENSHER
- ▁FORTNIGHT
- ▁JUDITH
- ▁MARIANNE
- ▁MEMORABLE
- ▁REFINED
- ▁REVOLV
- ▁UNDERTAKING
- ▁CLUMP
- ▁GRUMBLE
- ▁SYMPATHI
- ▁TICKET
- ▁TWITCH
- ▁EDITION
- ▁FALANDER
- ▁CARTHAGE
- ▁ORLEANS
- ▁POSSUM
- ▁SWITCH
- ▁CLUNG
- ▁CARDINAL
- ▁GNAW
- ▁LOCATED
- ▁HARROW
- ▁RASH
- ▁SIEGE
- ▁LOAF
- ▁BRUISE
- ▁REGULAT
- ▁RESORT
- ▁SARAH
- ▁LEVIN
- ▁NAVY
- ▁MOOSE
- ▁STOOL
- ▁CHANCELLOR
- ▁INGENIOUS
- ▁CHALK
- ▁PRETENCE
- ▁REPAY
- ▁ROAST
- ▁PLUTO
- ▁BAFFL
- ▁STUMBL
- ▁SPHERE
- ▁PLEDGE
- ▁SPRAWL
- ▁WRAP
- ▁FRINGE
- ▁DREAR
- ARRINGTON
- ▁FEDERA
- KEEPER
- ▁PHYSIC
- ▁ADVENT
- HUMAN
- OLOGIST
- ▁ALEXANDR
- ▁APPARITION
- ▁BARTHOLEMY
- ▁CITOYEN
- ▁CLIMATE
- ▁CONTEMPORAR
- ▁DESOLATE
- ▁DISCONTENT
- ▁ELEPHANT
- ▁FERNANDO
- ▁FERRALTI
- ▁FOLIAGE
- ▁FUGITIVE
- ▁GAMBLING
- ▁INVOLUNTARILY
- ▁LABYRINTH
- ▁LEGITIMATE
- ▁MILLIONAIRE
- ▁PERCEPTION
- ▁PROPRIETY
- ▁REBELLION
- ▁REFRAIN
- ▁RUGGLES
- ▁SCRIPTURE
- ▁SPLENDOR
- ▁SQUADRON
- ▁STRICKEN
- ▁SWARM
- ▁THEODORA
- ▁TOMORROW
- ▁VELVET
- ▁WOLVES
- ▁DISREGARD
- ▁GLIMMER
- ▁SHROUD
- ▁TWINKLING
- ▁UNEQUAL
- ▁CHANNING
- ▁CLUMS
- ▁ENIGMA
- ▁NAVIGAT
- ▁TARKAS
- ▁TEMPERATURE
- ▁DIVISION
- ▁GRATIFICATION
- ▁MONUMENT
- ▁SQUEAK
- ▁KAVIN
- ▁INTERPOSE
- ▁THORNTON
- ▁SOLUTION
- ▁STREAK
- ▁SHRILL
- ▁APRON
- ▁PITEOUS
- ▁HAUGHTY
- ▁RECKLESS
- ▁EMPTI
- ▁WADMAN
- ▁BONNET
- ▁MARTHA
- ▁DUMB
- ▁SHATTER
- ▁ACUTE
- ▁BRINK
- ▁CAPRICE
- ▁HURON
- ▁INFERN
- ▁FOWL
- ▁ENRAGE
- ▁ADORN
- ▁CRUIS
- ▁PROBABILIT
- ▁EXPIR
- ▁IMPETU
- ▁OVERHEAR
- BURTON
- ▁TRANSLAT
- ▁ENGAGE
- ▁CONVINCE
- ▁ABNORMAL
- ▁GESTICULAT
- ▁ABOMINABL
- ▁ADVERSARY
- ▁ADVERTISER
- ▁ADVERTISING
- ▁ANNIHILAT
- ▁ARTILLERY
- ▁CATHEDRAL
- ▁COMPETITOR
- ▁COULSON
- ▁CREVICE
- ▁CUSHION
- ▁DEBRAY
- ▁DEJECT
- ▁DIETRICH
- ▁DISADVANTAGE
- ▁ELLISON
- ▁EMPHASIS
- ▁EXCURSION
- ▁FANTASTIC
- ▁HYPOTHES
- ▁INCONVENIENCE
- ▁INDESCRIBABLE
- ▁INDUSTRI
- ▁INVALID
- ▁MERCILESS
- ▁MESOPOTAMIA
- ▁MOSQUITO
- ▁NARRATIVE
- ▁NOWADAYS
- ▁OPPORTUNITIES
- ▁PROMISING
- ▁RECTANGLE
- ▁REMONSTRANCE
- ▁RESTAURANT
- ▁RIBBON
- ▁SCIENTIST
- ▁SHALMANESER
- ▁SKULL
- ▁SPRUCE
- ▁SUBSTANTIAL
- ▁SYMBOL
- ▁TEAPOT
- ▁TERRITORY
- ▁TRAFFIC
- ▁TREASON
- ▁TRUMPET
- ▁TYRANN
- ▁UNANIMOUS
- ▁UNAWARE
- ▁VICINITY
- ▁WREATH
- ▁ZADIG
- ▁CHATEAU
- ▁CONFRONT
- ▁DUCHESS
- ▁EMBODI
- ▁FEMININ
- ▁FURNACE
- ▁MONTONI
- ▁RENOWN
- ▁SMASH
- ▁HARVARD
- ▁NEWBERRY
- ▁PERFUME
- ▁SIGNATURE
- ▁SPLASH
- ▁SUPPOSITION
- ▁HARBOUR
- ▁ASSURANCE
- ▁BRISTOL
- ▁BUCKINGHAM
- ▁DUDLEY
- ▁INTENSITY
- ▁CHOPIN
- ▁ENLIST
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ncats/EpiExtract4GARD-v2
|
ncats
| 2022-02-16T00:08:16Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ncats",
"en",
"dataset:ncats/EpiSet4NER",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
widget:
- text: "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
example_title: "Named Entity Recognition Ex. 1"
- text: "A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births)"
example_title: "Named Entity Recognition Ex. 2"
- text: "A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence."
example_title: "Named Entity Recognition Ex. 3"
tags:
- token-classification
- ncats
model-index:
- name: EpiExtract4GARD-v2
results:
- task:
name: NER
type: token-classification
metrics:
- name: Token-Level Precision
type: precision
value:
- name: Token-Level Recall
type: recall
value:
- name: Token-Level F1 Score
type: f_score
value:
- name: Token-Level Precision
type: precision
value:
- name: Token-Level Recall
type: recall
value:
- name: Token-Level F1 Score
type: f_score
value:
datasets:
- ncats/EpiSet4NER
license: other
---
## DOCUMENTATION UPDATES IN PROGRESS
## Model description
**EpiExtract4GARD-v2** is a fine-tuned [BioBERT-base-cased](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) model that is ready to use for **Named Entity Recognition** of locations (LOC), epidemiologic types (EPI), and epidemiologic rates (STAT). This model was fine-tuned on EpiSet4NER-v2 for epidemiological information from rare disease abstracts. See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. See [EpiExtract4GARD on GitHub](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard) for details on the entire pipeline.
#### How to use
You can use this model with the Hosted inference API to the right with this [test sentence](https://pubmed.ncbi.nlm.nih.gov/21659675/): "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
See the code below for using the Transformers *pipeline* for NER:
~~~
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ncats/EpiExtract4GARD")
tokenizer = AutoTokenizer.from_pretrained("ncats/EpiExtract4GARD")
NER_pipeline = pipeline('ner', model=model, tokenizer=tokenizer,aggregation_strategy='simple')
sample = "The live-birth prevalence of mucopolysaccharidoses in Estonia. Previous studies on the prevalence of mucopolysaccharidoses (MPS) in different populations have shown considerable variations. There are, however, few data with regard to the prevalence of MPSs in Fenno-Ugric populations or in north-eastern Europe, except for a report about Scandinavian countries. A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births), forming 53% of all diagnosed MPS cases, and was twice as high as in other studied European populations. The second most common subtype was MPS IIIA, with a live-birth prevalence of 1.62 in 100,000 live births. With 0.27 out of 100,000 live births, MPS VI had the third-highest live-birth prevalence. No cases of MPS I were diagnosed in Estonia, making the prevalence of MPS I in Estonia much lower than in other European populations. MPSs are the third most frequent inborn error of metabolism in Estonia after phenylketonuria and galactosemia."
sample2 = "Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Kuwait is a small Arabian Gulf country with a high rate of consanguinity and where a national newborn screening program was expanded in October 2014 to include a wide range of endocrine and metabolic disorders. A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence. Molecular testing for five of them has revealed three previously reported pathogenic variants in the <i>CBS</i> gene, c.969G>A, p.(Trp323Ter); c.982G>A, p.(Asp328Asn); and the Qatari founder variant c.1006C>T, p.(Arg336Cys). This is the first study to review the screening of newborns in Kuwait for classic homocystinuria, starting with the detection of elevated blood methionine and providing a follow-up strategy for positive results, including plasma total homocysteine and amino acid analyses. Further, we have demonstrated an increase in the specificity of the current newborn screening test for classic homocystinuria by including the methionine to phenylalanine ratio along with the elevated methionine blood levels in first-tier testing. Here, we provide evidence that the newborn screening in Kuwait has led to the early detection of classic homocystinuria cases and enabled the affected individuals to lead active and productive lives."
#Sample 1 is from: Krabbi K, Joost K, Zordania R, Talvik I, Rein R, Huijmans JG, Verheijen FV, Õunap K. The live-birth prevalence of mucopolysaccharidoses in Estonia. Genet Test Mol Biomarkers. 2012 Aug;16(8):846-9. doi: 10.1089/gtmb.2011.0307. Epub 2012 Apr 5. PMID: 22480138; PMCID: PMC3422553.
#Sample 2 is from: Alsharhan H, Ahmed AA, Ali NM, Alahmad A, Albash B, Elshafie RM, Alkanderi S, Elkazzaz UM, Cyril PX, Abdelrahman RM, Elmonairy AA, Ibrahim SM, Elfeky YME, Sadik DI, Al-Enezi SD, Salloum AM, Girish Y, Al-Ali M, Ramadan DG, Alsafi R, Al-Rushood M, Bastaki L. Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Int J Neonatal Screen. 2021 Aug 17;7(3):56. doi: 10.3390/ijns7030056. PMID: 34449519; PMCID: PMC8395821.
NER_pipeline(sample)
NER_pipeline(sample2)
~~~
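For reference, here is a minimal sketch (not part of the original card) of iterating over the pipeline output; the dictionary keys follow the standard Transformers output format for `aggregation_strategy='simple'`:
~~~
for ent in NER_pipeline(sample):
    # Each entry holds the aggregated entity span, its label, and the confidence score
    print(f"{ent['entity_group']:>6}  {ent['score']:.2f}  {ent['word']}")
~~~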
Or if you download [*classify_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/classify_abs.py), [*extract_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/extract_abs.py), and [*gard-id-name-synonyms.json*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/gard-id-name-synonyms.json) from GitHub then you can test with this [*additional* code](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/Case%20Study.ipynb):
~~~
import pandas as pd
import extract_abs
import classify_abs
pd.set_option('display.max_colwidth', None)
NER_pipeline = extract_abs.init_NER_pipeline()
GARD_dict, max_length = extract_abs.load_GARD_diseases()
nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer = classify_abs.init_classify_model()
def search(term,num_results = 50):
return extract_abs.search_term_extraction(term, num_results, NER_pipeline, GARD_dict, max_length,nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer)
a = search(7058)
a
b = search('Santos Mateus Leal syndrome')
b
c = search('Fellman syndrome')
c
d = search('GARD:0009941')
d
e = search('Homocystinuria')
e
~~~
#### Limitations and bias
## Training data
It was trained on [EpiSet4NER](https://huggingface.co/datasets/ncats/EpiSet4NER). See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
---------|--------------
O |Outside of a named entity
B-LOC | Beginning of a location
I-LOC | Inside of a location
B-EPI | Beginning of an epidemiologic type (e.g. "incidence", "prevalence", "occurrence")
I-EPI | Inside of an epidemiologic type
B-STAT | Beginning of an epidemiologic rate
I-STAT | Inside of an epidemiologic rate
+More | Description pending
### EpiSet Statistics
Beyond any limitations of the EpiSet4NER dataset, this model is limited in numeracy because of the BERT-based model's use of subword embeddings. Numeracy is crucial for epidemiologic rate identification, so this limits the entity-level results. Recent techniques in numeracy could be used to improve the performance of the model without improving the underlying dataset.
## Training procedure
This model was trained on an [AWS EC2 p3.2xlarge](https://aws.amazon.com/ec2/instance-types/) instance, which uses a single Tesla V100 GPU, with these hyperparameters:
4 training epochs (AdamW, weight decay = 0.05) with a batch size of 16 and a maximum sequence length of 192. The model was fed one sentence at a time.
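A sketch of how these settings could map onto Hugging Face `TrainingArguments`; the values named above come from the card, while `output_dir` and anything else is an assumption:
~~~
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="epiextract4gard-v2",   # assumed name
    num_train_epochs=4,
    per_device_train_batch_size=16,
    weight_decay=0.05,                 # AdamW weight decay from the card
)
# The 192-token maximum sequence length is applied at tokenization time,
# e.g. tokenizer(sentence, truncation=True, max_length=192).
~~~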
<!--- Full config [here](https://wandb.ai/wzkariampuzha/huggingface/runs/353prhts/files/config.yaml). --->
<!--- THIS IS NOT THE UPDATED RESULTS --->
<!--- ## Hold-out validation results --->
<!--- metric| entity-level result --->
<!--- -|- --->
<!--- f1 | 83.8 --->
<!--- precision | 83.2 --->
<!--- recall | 84.5 --->
<!--- ## Test results --->
<!--- | Dataset for Model Training | Evaluation Level | Entity | Precision | Recall | F1 | --->
<!--- |:--------------------------:|:----------------:|:------------------:|:---------:|:------:|:-----:| --->
<!--- | EpiSet | Entity-Level | Overall | 0.556 | 0.662 | 0.605 | --->
<!--- | | | Location | 0.661 | 0.696 | 0.678 | --->
<!--- | | | Epidemiologic Type | 0.854 | 0.911 | 0.882 | --->
<!--- | | | Epidemiologic Rate | 0.143 | 0.218 | 0.173 | --->
<!--- | | Token-Level | Overall | 0.811 | 0.713 | 0.759 | --->
<!--- | | | Location | 0.949 | 0.742 | 0.833 | --->
<!--- | | | Epidemiologic Type | 0.9 | 0.917 | 0.908 | --->
<!--- | | | Epidemiologic Rate | 0.724 | 0.636 | 0.677 | --->
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at Axle Informatics/NCATS for contributing this model.
|
explosion/en_healthsea
|
explosion
| 2022-02-15T23:40:53Z | 14 | 5 |
spacy
|
[
"spacy",
"token-classification",
"text-classification",
"en",
"model-index",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
- text-classification
language:
- en
model-index:
- name: en_healthsea
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 80.77
- name: NER Recall
type: recall
value: 79.92
- name: NER F Score
type: f_score
value: 80.34
---
# Welcome to Healthsea ✨
Create better access to health with machine learning and natural language processing. This is the trained healthsea pipeline for analyzing user reviews of supplements by extracting their effects on health. This pipeline features a trained NER model and a custom Text Classification model with Clause Segmentation and Blinding capabilities.
> Read more in the [blog post](https://explosion.ai/blog/healthsea) and visit the [healthsea repository](https://github.com/explosion/healthsea) for all training workflows, custom components and training data.
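A minimal usage sketch (not part of the original card), assuming the pipeline has been installed as a spaCy package from this repository:
```python
import spacy

nlp = spacy.load("en_healthsea")
doc = nlp("This supplement really helped my joint pain.")

# Entities found by the NER component (BENEFIT / CONDITION)
print([(ent.text, ent.label_) for ent in doc.ents])
```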
| Feature | Description |
| --- | --- |
| **Name** | `en_healthsea` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.0,<3.3.0` |
| **Default Pipeline** | `sentencizer`, `tok2vec`, `ner`, `benepar`, `segmentation`, `clausecat`, `aggregation` |
| **Components** | `sentencizer`, `tok2vec`, `ner`, `benepar`, `segmentation`, `clausecat`, `aggregation` |
| **Vectors** | 684830 keys, 684830 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | MIT |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (6 labels for 2 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `BENEFIT`, `CONDITION` |
| **`clausecat`** | `POSITIVE`, `NEUTRAL`, `NEGATIVE`, `ANAMNESIS` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 80.34 |
| `ENTS_P` | 80.77 |
| `ENTS_R` | 79.92 |
| `CATS_SCORE` | 74.87 |
| `CATS_MICRO_P` | 82.17 |
| `CATS_MICRO_R` | 80.85 |
| `CATS_MICRO_F` | 81.51 |
| `CATS_MACRO_P` | 78.01 |
| `CATS_MACRO_R` | 72.41 |
| `CATS_MACRO_F` | 74.87 |
| `CATS_MACRO_AUC` | 92.76 |
| `CATS_LOSS` | 297.22 |
|
premrawat/en_model_ner_skills
|
premrawat
| 2022-02-15T19:50:15Z | 6 | 4 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_model_ner_skills
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3125
- name: NER Recall
type: recall
value: 0.243902439
- name: NER F Score
type: f_score
value: 0.2739726027
---
| Feature | Description |
| --- | --- |
| **Name** | `en_model_ner_skills` |
| **Version** | `0.0.2` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 27.40 |
| `ENTS_P` | 31.25 |
| `ENTS_R` | 24.39 |
| `TOK2VEC_LOSS` | 129837.25 |
| `NER_LOSS` | 1056832.41 |
|
AI-Nordics/bert-large-swedish-cased
|
AI-Nordics
| 2022-02-15T16:52:53Z | 162 | 11 |
transformers
|
[
"transformers",
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: sv
---
# A Swedish Bert model
## Model description
This model follows the Bert Large model architecture as implemented in the [Megatron-LM framework](https://github.com/NVIDIA/Megatron-LM). It was trained with a batch size of 512 in 600k steps. The model has the following parameters:
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 340M |
| \\(n_{layers}\\) | 24 |
| \\(n_{heads}\\) | 16 |
| \\(n_{ctx}\\) | 1024 |
| \\(n_{vocab}\\) | 30592 |
## Training data
The model is pretrained on a Swedish text corpus of around 85 GB from a variety of sources as shown below.
| Dataset | Genre | Size(GB)|
|----------------------|------|------|
| Anföranden | Politics |0.9|
|DCEP|Politics|0.6|
|DGT|Politics|0.7|
|Fass|Medical|0.6|
|Författningar|Legal|0.1|
|Web data|Misc|45.0|
|JRC|Legal|0.4|
|Litteraturbanken|Books|0.3|
|SCAR|Misc|28.0|
|SOU|Politics|5.3|
|Subtitles|Drama|1.3|
|Wikipedia|Facts|1.8|
## Intended uses & limitations
The raw model can be used for the usual tasks of masked language modeling or next sentence prediction. It is also often fine-tuned on a downstream task to improve its performance in a specific domain/task.
<br>
<br>
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("AI-Nordics/bert-large-swedish-cased")
model = AutoModelForMaskedLM.from_pretrained("AI-Nordics/bert-large-swedish-cased")
```
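A minimal sketch (not in the original card) of querying the model through the fill-mask pipeline; the example sentence is an assumption:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="AI-Nordics/bert-large-swedish-cased")
# "The capital of Sweden is [MASK]."
print(unmasker("Huvudstaden i Sverige är [MASK]."))
```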
|
Xibanya/sunset_city
|
Xibanya
| 2022-02-15T16:31:37Z | 0 | 3 | null |
[
"PyTorch",
"Transformers",
"text-to-image",
"ru",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-to-image
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
language:
- ru
- en
pipeline_tag: text-to-image
tags:
- PyTorch
- Transformers
---
# Sunset Cities
This is the [Malevich](https://huggingface.co/sberbank-ai/rudalle-Malevich) ruDALL-E model finetuned on anime screenshots of big cities at sunset.
<img style="text-align:center; display:block;" src="https://huggingface.co/Xibanya/sunset_city/resolve/main/citysunset.png" width="256">
### Installation
```
pip install rudalle
```
### How to use
Basic implementation to get a list of image data objects.
```python
import torch
from translate import Translator
from rudalle import get_rudalle_model, get_tokenizer, get_vae
from rudalle.pipelines import generate_images

model = get_rudalle_model('Malevich', pretrained=True, fp16=True, device='cuda')
# CHECKPOINT_PATH: path to the finetuned checkpoint downloaded from this repository
model.load_state_dict(torch.load(CHECKPOINT_PATH))
vae = get_vae().to('cuda')
tokenizer = get_tokenizer()
input_text = Translator(to_lang='ru').translate('city at sunset')
images, _ = generate_images(
text=input_text,
tokenizer=tokenizer, dalle=model, vae=vae,
images_num=1,
top_k=2048,
top_p=0.95,
temperature=1.0
)
```
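To keep the results, the returned `images` can be written to disk; this sketch assumes they are PIL images, which is the usual output of the rudalle pipeline:
```python
# Save each generated image to disk
for i, img in enumerate(images):
    img.save(f'sunset_city_{i}.png')
```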
The Malevich model only recognizes input in Russian. If you're going to paste Cyrillic directly into the code rather than passing an English prompt through the translate API, you will need to put this at the top of the file:
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
```
|
xxr/bert-base-uncased-issues-128
|
xxr
| 2022-02-15T14:09:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bert-base-uncased-issues-128
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9845 | 1.0 | 1163 | 1.6403 |
| 1.5695 | 2.0 | 2326 | 1.4212 |
| 1.4221 | 3.0 | 3489 | 1.3714 |
| 1.3302 | 4.0 | 4652 | 1.3592 |
| 1.2734 | 5.0 | 5815 | 1.2781 |
| 1.2143 | 6.0 | 6978 | 1.2286 |
| 1.1704 | 7.0 | 8141 | 1.2492 |
| 1.1261 | 8.0 | 9304 | 1.2044 |
| 1.0812 | 9.0 | 10467 | 1.1878 |
| 1.0657 | 10.0 | 11630 | 1.2177 |
| 1.0319 | 11.0 | 12793 | 1.1428 |
| 1.0063 | 12.0 | 13956 | 1.0910 |
| 0.9731 | 13.0 | 15119 | 1.1111 |
| 0.9674 | 14.0 | 16282 | 1.1699 |
| 0.9391 | 15.0 | 17445 | 1.0805 |
| 0.9381 | 16.0 | 18608 | 1.2109 |
### Framework versions
- Transformers 4.8.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ali2066/finetuned-token-argumentative
|
ali2066
| 2022-02-15T13:46:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned-token-argumentative
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-token-argumentative
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1573
- Precision: 0.3777
- Recall: 0.3919
- F1: 0.3847
- Accuracy: 0.9497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.3241 | 0.1109 | 0.2178 | 0.1470 | 0.8488 |
| No log | 2.0 | 150 | 0.3145 | 0.1615 | 0.2462 | 0.1950 | 0.8606 |
| No log | 3.0 | 225 | 0.3035 | 0.1913 | 0.3258 | 0.2411 | 0.8590 |
| No log | 4.0 | 300 | 0.3080 | 0.2199 | 0.3220 | 0.2613 | 0.8612 |
| No log | 5.0 | 375 | 0.3038 | 0.2209 | 0.3277 | 0.2639 | 0.8630 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
joe5campbell/BERT_Tweet_Sentiment_10k
|
joe5campbell
| 2022-02-15T12:42:41Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_10k
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_10k
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3891
- Train Accuracy: 0.8273
- Validation Loss: 0.4749
- Validation Accuracy: 0.8073
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3891 | 0.8273 | 0.4749 | 0.8073 | 0 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
CLAck/vi-en
|
CLAck
| 2022-02-15T11:33:16Z | 47 | 1 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- vi
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
This is a fine-tuned version of a MarianMT model pretrained on Chinese-English. The target language pair is Vietnamese-English.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the fine-tuned model for Vietnamese-English available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/vi-en")
tokenizer = AutoTokenizer.from_pretrained("CLAck/vi-en")
sentence = your_vietnamese_sentence
# This token is needed to identify the source language
input_sentence = "<2vi> " + sentence
translated = model.generate(**tokenizer(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 21.3180 |
| 2.0 | 26.8012 |
| 3.0 | 29.3578 |
| 4.0 | 31.5178 |
| 5.0 | 32.8740 |
|
CLAck/en-vi
|
CLAck
| 2022-02-15T11:28:50Z | 79 | 1 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- vi
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
This is a fine-tuned version of a MarianMT model pretrained on English-Chinese. The target language pair is English-Vietnamese.
The first phase of training (mixed) is performed on a dataset containing both English-Chinese and English-Vietnamese sentences.
The second phase of training (pure) is performed on a dataset containing only English-Vietnamese sentences.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Vietnamese available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/en-vi")
tokenizer = AutoTokenizer.from_pretrained("CLAck/en-vi")
# Download a tokenizer that can tokenize English, since the model's tokenizer no longer knows how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2vi>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2vi> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
MIXED
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 26.2407 |
| 2.0 | 32.6016 |
| 3.0 | 35.4060 |
| 4.0 | 36.6737 |
| 5.0 | 37.3774 |
PURE
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 37.3169 |
| 2.0 | 37.4407 |
| 3.0 | 37.6696 |
| 4.0 | 37.8765 |
| 5.0 | 38.0105 |
|
CLAck/indo-mixed
|
CLAck
| 2022-02-15T11:25:18Z | 18 | 1 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- id
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
This model is pretrained on the Chinese and Indonesian languages, and fine-tuned on the Indonesian language.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the fine-tuned model for English-Indonesian available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-mixed")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-mixed")
# Download a tokenizer that can tokenize English, since the model's tokenizer no longer knows how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
MIXED
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 24.2579 |
| 2.0 | 30.6287 |
| 3.0 | 34.4417 |
| 4.0 | 36.2577 |
| 5.0 | 37.3488 |
FINETUNING
| Epoch | Bleu |
|:-----:|:-------:|
| 6.0 | 34.1676 |
| 7.0 | 35.2320 |
| 8.0 | 36.7110 |
| 9.0 | 37.3195 |
| 10.0 | 37.9461 |
|
CLAck/indo-pure
|
CLAck
| 2022-02-15T11:24:33Z | 28 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"dataset:ALT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:04Z |
---
language:
- en
- id
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
A purely fine-tuned version of MarianMT en-zh on the Indonesian language.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the fine-tuned model for English-Indonesian available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-pure")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-pure")
# Download a tokenizer that can tokenize English, since the model's tokenizer no longer knows how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 15.9336 |
| 2.0 | 28.0175 |
| 3.0 | 31.6603 |
| 4.0 | 33.9151 |
| 5.0 | 35.0472 |
| 6.0 | 35.8469 |
| 7.0 | 36.1180 |
| 8.0 | 36.6018 |
| 9.0 | 37.1973 |
| 10.0 | 37.2738 |
|
msintaha/bert-base-uncased-finetuned-copa-data-new
|
msintaha
| 2022-02-15T08:41:46Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-copa-data-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-copa-data-new
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5995
- Accuracy: 0.7000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6564 | 0.6600 |
| No log | 2.0 | 50 | 0.5995 | 0.7000 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jatinshah/bert-finetuned-squad
|
jatinshah
| 2022-02-15T02:37:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.18.3
- Tokenizers 0.11.0
|
speech-seq2seq/wav2vec2-2-bert-large-no-adapter-frozen-enc
|
speech-seq2seq
| 2022-02-15T00:30:50Z | 15 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7664
- Wer: 2.0133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.171 | 0.28 | 500 | 8.6956 | 2.0055 |
| 5.307 | 0.56 | 1000 | 8.5958 | 2.0096 |
| 5.1449 | 0.84 | 1500 | 10.4208 | 2.0115 |
| 6.1351 | 1.12 | 2000 | 10.2950 | 2.0059 |
| 6.2997 | 1.4 | 2500 | 10.6762 | 2.0115 |
| 6.1394 | 1.68 | 3000 | 10.9190 | 2.0110 |
| 6.1868 | 1.96 | 3500 | 11.0166 | 2.0112 |
| 5.9647 | 2.24 | 4000 | 11.4154 | 2.0141 |
| 6.2202 | 2.52 | 4500 | 11.5837 | 2.0152 |
| 5.9612 | 2.8 | 5000 | 11.7664 | 2.0133 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab
|
Arnold
| 2022-02-14T23:42:35Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hausa2-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hausa2-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2993
- Wer: 0.4826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.6e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 3
- total_train_batch_size: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.1549 | 12.5 | 400 | 2.7289 | 1.0 |
| 2.0566 | 25.0 | 800 | 0.4582 | 0.6768 |
| 0.4423 | 37.5 | 1200 | 0.3037 | 0.5138 |
| 0.2991 | 50.0 | 1600 | 0.2993 | 0.4826 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
hark99/distilbert-base-uncased-finetuned-squad
|
hark99
| 2022-02-14T23:05:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2251 | 1.0 | 5533 | 1.1707 |
| 0.9554 | 2.0 | 11066 | 1.1211 |
| 0.7645 | 3.0 | 16599 | 1.1642 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_50_Epochs
|
jfarray
| 2022-02-14T21:41:05Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_50_Epochs
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_50_Epochs')
embeddings = model.encode(sentences)
print(embeddings)
```
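Since this model targets sentence similarity, here is a minimal sketch (not part of the original card) that scores two sentences with the sentence-transformers cosine-similarity utility; the example sentences are assumptions:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_50_Epochs')
# Encode two Spanish sentences and compare their embeddings
embeddings = model.encode(["El gato duerme en el sofá.", "Un felino descansa en el sillón."])
print(util.cos_sim(embeddings[0], embeddings[1]))
```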
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NicoGrageda/wav2vec2-base-timit-demo-colab
|
NicoGrageda
| 2022-02-14T21:18:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4519
- Wer: 0.3375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4351 | 4.0 | 500 | 1.2740 | 0.8259 |
| 0.5828 | 8.0 | 1000 | 0.4276 | 0.4403 |
| 0.2274 | 12.0 | 1500 | 0.4646 | 0.3739 |
| 0.135 | 16.0 | 2000 | 0.4320 | 0.3662 |
| 0.0962 | 20.0 | 2500 | 0.4831 | 0.3607 |
| 0.0719 | 24.0 | 3000 | 0.4506 | 0.3463 |
| 0.0556 | 28.0 | 3500 | 0.4519 | 0.3375 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_10_Epochs
|
jfarray
| 2022-02-14T21:06:23Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_10_Epochs
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_10_Epochs')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_bert-base-multilingual-uncased_100_Epochs
|
jfarray
| 2022-02-14T20:23:54Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# jfarray/Model_bert-base-multilingual-uncased_100_Epochs
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jfarray/Model_bert-base-multilingual-uncased_100_Epochs')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-small-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:23:08Z | 6 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -146.39734268188477
MTF T5: -72.12132263183594
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-base-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:22:41Z | 4 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -95.86687088012695
MTF T5: -67.8558578491211
|
huggingtweets/dojacat
|
huggingtweets
| 2022-02-14T15:30:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dojacat/1644852645931/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1487993727918374915/aN2YUrbc_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jean-Emmanuel De La Martinière</div>
<div style="text-align: center; font-size: 14px;">@dojacat</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jean-Emmanuel De La Martinière.
| Data | Jean-Emmanuel De La Martinière |
| --- | --- |
| Tweets downloaded | 1569 |
| Retweets | 124 |
| Short tweets | 322 |
| Tweets kept | 1123 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mc5ryte/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dojacat's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3urxj6el) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3urxj6el/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dojacat')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
leonadase/distilbert-base-uncased-finetuned-ner
|
leonadase
| 2022-02-14T13:51:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9210439378923027
- name: Recall
type: recall
value: 0.9356751314464705
- name: F1
type: f1
value: 0.9283018867924528
- name: Accuracy
type: accuracy
value: 0.983176322938345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9210
- Recall: 0.9357
- F1: 0.9283
- Accuracy: 0.9832
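As a quick sanity check, the checkpoint can be loaded with the standard 🤗 Transformers pipeline API; the snippet below is a minimal sketch assuming that generic interface, with a made-up example sentence.

```python
from transformers import pipeline

# Group sub-word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="leonadase/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```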
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2341 | 1.0 | 878 | 0.0734 | 0.9118 | 0.9206 | 0.9162 | 0.9799 |
| 0.0546 | 2.0 | 1756 | 0.0591 | 0.9210 | 0.9350 | 0.9279 | 0.9829 |
| 0.0297 | 3.0 | 2634 | 0.0611 | 0.9210 | 0.9357 | 0.9283 | 0.9832 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud1-ner
|
akshaychaudhary
| 2022-02-14T13:30:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud1-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud1-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0074
- Precision: 0.9714
- Recall: 0.9855
- F1: 0.9784
- Accuracy: 0.9972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.0160 | 0.9653 | 0.9420 | 0.9535 | 0.9945 |
| No log | 2.0 | 332 | 0.0089 | 0.9623 | 0.9855 | 0.9737 | 0.9965 |
| No log | 3.0 | 498 | 0.0074 | 0.9714 | 0.9855 | 0.9784 | 0.9972 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc
|
ASCCCCCCCC
| 2022-02-14T08:54:32Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model_index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
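A minimal usage sketch with the generic text-classification pipeline: the model name suggests CLINC-style intent labels, but the card does not confirm the label set, so inspect `model.config.id2label` on the loaded checkpoint.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc",
)
# Hypothetical utterance; the predicted label depends on the fine-tuning data
print(classifier("please set an alarm for six in the morning"))
```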
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.9.0
- Pytorch 1.7.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
|
sshasnain/wav2vec2-xls-r-300m-bangla-command-synthetic
|
sshasnain
| 2022-02-14T08:39:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-synthetic
This model is a fine-tuned version of [sshasnain/wav2vec2-xls-r-300m-bangla-command](https://huggingface.co/sshasnain/wav2vec2-xls-r-300m-bangla-command) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0254
- eval_wer: 0.4311
- eval_runtime: 2.5036
- eval_samples_per_second: 76.689
- eval_steps_per_second: 9.586
- epoch: 35.71
- step: 1000
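A minimal transcription sketch, assuming the generic automatic-speech-recognition pipeline and a local 16 kHz mono recording (the file name below is a placeholder; decoding a file path requires ffmpeg).

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sshasnain/wav2vec2-xls-r-300m-bangla-command-synthetic",
)
# "command.wav" is a placeholder for a local recording of a Bangla voice command
print(asr("command.wav"))
```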
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
reatiny/distilbert-base-uncased-finetuned-emotion
|
reatiny
| 2022-02-14T07:44:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9217811693486851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2226
- Accuracy: 0.9215
- F1: 0.9218
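A minimal usage sketch with the generic text-classification pipeline; the example sentence is made up and the label names come from the checkpoint's config.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="reatiny/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this turned out!"))
```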
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235 | 1.0 | 250 | 0.3190 | 0.901 | 0.8979 |
| 0.2497 | 2.0 | 500 | 0.2226 | 0.9215 | 0.9218 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.11.0
|
jatinshah/marian-finetuned-kde4-en-to-fr
|
jatinshah
| 2022-02-14T05:47:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8815
- Score: 52.2204
- Counts: [166010, 120787, 91973, 70929]
- Totals: [228361, 207343, 189354, 173335]
- Precisions: [72.69630103213771, 58.254679444205976, 48.57198686058916, 40.92018345977443]
- Bp: 0.9695
- Sys Len: 228361
- Ref Len: 235434
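A minimal usage sketch, assuming the generic translation pipeline; the input string is a typical KDE4-style UI message.

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="jatinshah/marian-finetuned-kde4-en-to-fr",
)
print(translator("Default to expanded threads"))
```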
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.18.3
- Tokenizers 0.11.0
|
fastai/fastbook_06_multicat_Biwi_Kinect_Head_Pose
|
fastai
| 2022-02-14T05:21:20Z | 6 | 2 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- fastai
---
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using the 🤗Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join our fastai community on the Hugging Face Discord!
Greetings fellow fastlearner 🤝!
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
jfarray/Model_bert-base-multilingual-uncased_30_Epochs
|
jfarray
| 2022-02-13T23:54:47Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_bert-base-multilingual-uncased_1_Epochs
|
jfarray
| 2022-02-13T22:49:37Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
groar/gpt-neo-1.3B-finetuned-escape2
|
groar
| 2022-02-13T20:59:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-1.3B-finetuned-escape2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-1.3B-finetuned-escape2
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
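A minimal generation sketch, assuming the generic text-generation pipeline; the prompt and sampling settings are illustrative only.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="groar/gpt-neo-1.3B-finetuned-escape2")
# Illustrative prompt; adjust generation arguments as needed
print(generator("You wake up in a locked room.", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```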
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jfarray/Model_all-distilroberta-v1_100_Epochs
|
jfarray
| 2022-02-13T20:50:24Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_50_Epochs
|
jfarray
| 2022-02-13T20:18:37Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingartists/egor-letov
|
huggingartists
| 2022-02-13T20:16:48Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/egor-letov",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/egor-letov
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/faa3dae99bf1fe365927608fd55c745a.330x330x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Егор Летов (Egor Letov)</div>
<a href="https://genius.com/artists/egor-letov">
<div style="text-align: center; font-size: 14px;">@egor-letov</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Егор Летов (Egor Letov).
Dataset is available [here](https://huggingface.co/datasets/huggingartists/egor-letov).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/egor-letov")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1omrcegx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Егор Летов (Egor Letov)'s lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3lk60u9h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3lk60u9h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/egor-letov')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/egor-letov")
model = AutoModelWithLMHead.from_pretrained("huggingartists/egor-letov")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
jfarray/Model_all-distilroberta-v1_30_Epochs
|
jfarray
| 2022-02-13T20:00:26Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_10_Epochs
|
jfarray
| 2022-02-13T19:47:38Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_1_Epochs
|
jfarray
| 2022-02-13T19:34:14Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
castorini/dkrr-dpr-nq-retriever
|
castorini
| 2022-02-13T17:46:38Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2012.04584",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini:
```
@misc{izacard2020distilling,
title={Distilling Knowledge from Reader to Retriever for Question Answering},
author={Gautier Izacard and Edouard Grave},
year={2020},
eprint={2012.04584},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
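A minimal sketch of producing dense query embeddings with the generic feature-extraction API; the [CLS] pooling used below is an assumption, so check the FiD/DKRR repository or Pyserini's own encoder classes for the exact scheme.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "castorini/dkrr-dpr-nq-retriever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

questions = ["who wrote the declaration of independence?"]
inputs = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)
# Assumption: take the [CLS] vector as the dense query embedding
embeddings = outputs.last_hidden_state[:, 0, :]
print(embeddings.shape)
```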
|
cscottp27/distilbert-base-uncased-finetuned-emotion
|
cscottp27
| 2022-02-13T13:19:16Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9232542847906783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.923
- F1: 0.9233
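A minimal usage sketch with the generic text-classification pipeline; `return_all_scores=True` exposes the full probability distribution over the six emotion labels.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cscottp27/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,  # full distribution instead of only the top label
)
print(classifier("I miss my friends from home."))
```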
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8352 | 1.0 | 250 | 0.3079 | 0.91 | 0.9086 |
| 0.247 | 2.0 | 500 | 0.2175 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
turing1729/gpt-neo-1.3B-news
|
turing1729
| 2022-02-13T10:21:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
GPT-Neo (1.3B parameters) fine-tuned on short news articles for summarization.
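A minimal prompting sketch, assuming the generic text-generation pipeline; the card does not document how articles and summaries were delimited during fine-tuning, so the "TL;DR:" suffix below is only a guess.

```python
from transformers import pipeline

summarizer = pipeline("text-generation", model="turing1729/gpt-neo-1.3B-news")
article = "The central bank raised interest rates by half a percentage point on Tuesday, citing persistent inflation."
# The "TL;DR:" delimiter is a guess at the fine-tuning prompt format
print(summarizer(article + "\nTL;DR:", max_new_tokens=60, do_sample=False)[0]["generated_text"])
```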
|
timtarusov/distilbert-base-uncased-finetuned-emotion
|
timtarusov
| 2022-02-13T08:48:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9211076096482195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2274
- Accuracy: 0.921
- F1: 0.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8308 | 1.0 | 250 | 0.3319 | 0.8955 | 0.8897 |
| 0.2516 | 2.0 | 500 | 0.2274 | 0.921 | 0.9211 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mujeensung/roberta-base_mnli_bc
|
mujeensung
| 2022-02-13T05:13:00Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base_mnli_bc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9583768461882739
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_mnli_bc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2125
- Accuracy: 0.9584
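A minimal NLI sketch using AutoModelForSequenceClassification directly; the premise/hypothesis pair is made up, and the label names should be read from the checkpoint's `id2label` mapping (the "bc" suffix suggests a reduced, binary label set, but the card does not spell this out).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mujeensung/roberta-base_mnli_bc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()
for idx, p in enumerate(probs):
    # Label names are read from the checkpoint configuration
    print(model.config.id2label[idx], round(p.item(), 3))
```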
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2015 | 1.0 | 16363 | 0.1820 | 0.9470 |
| 0.1463 | 2.0 | 32726 | 0.1909 | 0.9559 |
| 0.0768 | 3.0 | 49089 | 0.2117 | 0.9585 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_50_Epochs
|
jfarray
| 2022-02-12T23:39:31Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_30_Epochs
|
jfarray
| 2022-02-12T23:02:17Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jiobiala24/wav2vec2-base-checkpoint-12
|
jiobiala24
| 2022-02-12T23:02:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-12
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-11.1](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-11.1) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0795
- Wer: 0.3452
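A minimal transcription sketch using the processor and CTC model directly; the audio path is a placeholder for a 16 kHz mono English recording.

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "jiobiala24/wav2vec2-base-checkpoint-12"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder for a 16 kHz mono English recording
speech, sample_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```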
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2793 | 1.64 | 1000 | 0.5692 | 0.3518 |
| 0.2206 | 3.28 | 2000 | 0.6127 | 0.3460 |
| 0.1733 | 4.93 | 3000 | 0.6622 | 0.3580 |
| 0.1391 | 6.57 | 4000 | 0.6768 | 0.3519 |
| 0.1193 | 8.21 | 5000 | 0.7559 | 0.3540 |
| 0.1053 | 9.85 | 6000 | 0.7873 | 0.3562 |
| 0.093 | 11.49 | 7000 | 0.8170 | 0.3612 |
| 0.0833 | 13.14 | 8000 | 0.8682 | 0.3579 |
| 0.0753 | 14.78 | 9000 | 0.8317 | 0.3573 |
| 0.0698 | 16.42 | 10000 | 0.9213 | 0.3525 |
| 0.0623 | 18.06 | 11000 | 0.9746 | 0.3531 |
| 0.0594 | 19.7 | 12000 | 1.0027 | 0.3502 |
| 0.0538 | 21.35 | 13000 | 1.0045 | 0.3545 |
| 0.0504 | 22.99 | 14000 | 0.9821 | 0.3523 |
| 0.0461 | 24.63 | 15000 | 1.0818 | 0.3462 |
| 0.0439 | 26.27 | 16000 | 1.0995 | 0.3495 |
| 0.0421 | 27.91 | 17000 | 1.0533 | 0.3430 |
| 0.0415 | 29.56 | 18000 | 1.0795 | 0.3452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_10_Epochs
|
jfarray
| 2022-02-12T22:32:17Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
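The following is only a rough sketch of how these parameters map onto a `sentence-transformers` `fit()` call; the base checkpoint name and the sentence pairs are placeholders inferred for illustration, not the actual training setup used for this model:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, evaluation

# Placeholder training pairs; the real training data is not described in this card.
train_examples = [
    InputExample(texts=["sentence A", "sentence B"], label=0.8),
    InputExample(texts=["sentence C", "sentence D"], label=0.3),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=15)

# Assumed base checkpoint, inferred from the model name.
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
train_loss = losses.CosineSimilarityLoss(model)
evaluator = evaluation.EmbeddingSimilarityEvaluator.from_input_examples(train_examples)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=evaluator,
    epochs=10,
    evaluation_steps=1,
    warmup_steps=11,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```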
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_5_Epochs
|
jfarray
| 2022-02-12T22:09:20Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_50_Epochs
|
jfarray
| 2022-02-12T21:16:09Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_30_Epochs
|
jfarray
| 2022-02-12T21:00:41Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_10_Epochs
|
jfarray
| 2022-02-12T20:47:55Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_5_Epochs
|
jfarray
| 2022-02-12T20:37:59Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ArBert/roberta-base-finetuned-ner-kmeans
|
ArBert
| 2022-02-12T16:54:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-kmeans
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.955868544600939
- name: Recall
type: recall
value: 0.9614658103513412
- name: F1
type: f1
value: 0.9586590074394953
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-kmeans
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9559
- Recall: 0.9615
- F1: 0.9587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.0248 | 1.0 | 878 | 0.0609 | 0.9507 | 0.9561 | 0.9534 |
| 0.0163 | 2.0 | 1756 | 0.0640 | 0.9515 | 0.9578 | 0.9546 |
| 0.0089 | 3.0 | 2634 | 0.0592 | 0.9559 | 0.9615 | 0.9587 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
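As a usage illustration (a minimal sketch; the example sentence is a placeholder), the fine-tuned checkpoint can be loaded with the standard token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArBert/roberta-base-finetuned-ner-kmeans",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```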
|
jfarray/Model_distiluse-base-multilingual-cased-v1_50_Epochs
|
jfarray
| 2022-02-12T14:26:35Z | 132 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jgammack/multi-qa-MTL-distilbert-base-uncased-40k
|
jgammack
| 2022-02-12T14:14:47Z | 144 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-MTL-distilbert-base-uncased-40k
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
model = AutoModel.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased-40k')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-MTL-distilbert-base-uncased-40k)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_distiluse-base-multilingual-cased-v1_30_Epochs
|
jfarray
| 2022-02-12T14:08:36Z | 142 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 30,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_distiluse-base-multilingual-cased-v1_10_Epochs
|
jfarray
| 2022-02-12T13:53:59Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ArBert/roberta-base-finetuned-ner-agglo-twitter
|
ArBert
| 2022-02-12T11:40:08Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-agglo-twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-agglo-twitter
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Precision: 0.6885
- Recall: 0.7665
- F1: 0.7254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 |
| No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 |
| 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 |
| 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 |
| 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 |
| 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 |
| 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 |
| 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 |
| 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 |
| 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 |
| 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 |
| 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 |
| 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 |
| 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 |
| 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 |
| 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 |
| 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 |
| 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 |
| 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 |
| 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingartists/death-grips
|
huggingartists
| 2022-02-12T08:56:17Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/death-grips",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/death-grips
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/de4ca387303c4b46007ca1072c2e57d0.600x600x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Death Grips</div>
<a href="https://genius.com/artists/death-grips">
<div style="text-align: center; font-size: 14px;">@death-grips</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Death Grips.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/death-grips).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/death-grips")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2hmeenl7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Death Grips's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/226ak5bw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/226ak5bw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/death-grips')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/death-grips")
model = AutoModelWithLMHead.from_pretrained("huggingartists/death-grips")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
HHousen/household-rooms
|
HHousen
| 2022-02-12T06:21:05Z | 77 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:04Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: household-rooms
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8482142686843872
---
# household-rooms
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
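A minimal inference sketch (the image path below is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="HHousen/household-rooms")
print(classifier("path/to/room_photo.jpg"))  # placeholder path to a photo of a room
```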
## Example Images
#### bathroom

#### bedroom

#### dining room

#### kitchen

#### living room

|
jgammack/multi-qa-distilbert-base-uncased
|
jgammack
| 2022-02-11T23:40:41Z | 141 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jimypbr/bert-base-uncased-squad
|
jimypbr
| 2022-02-11T22:28:31Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
# BERT-Base Uncased SQuADv1
`bert-base-uncased` fine-tuned for question answering on the `squad` dataset.
Evaluation scores:
```
***** eval metrics *****
epoch = 3.0
eval_exact_match = 80.6906
eval_f1 = 88.1129
eval_samples = 10784
```
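A minimal usage sketch (the question and context below are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jimypbr/bert-base-uncased-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The checkpoint was fine-tuned on the SQuAD v1.1 question answering dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```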
|
speech-seq2seq/wav2vec2-2-gpt2-medium
|
speech-seq2seq
| 2022-02-11T22:26:54Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5264
- Wer: 1.7073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4032 | 0.28 | 500 | 4.6724 | 1.9406 |
| 4.6417 | 0.56 | 1000 | 4.7143 | 1.8874 |
| 4.5725 | 0.84 | 1500 | 4.6413 | 1.9451 |
| 4.0178 | 1.12 | 2000 | 4.5470 | 1.8861 |
| 3.9084 | 1.4 | 2500 | 4.4360 | 1.8881 |
| 3.9297 | 1.68 | 3000 | 4.2814 | 1.8652 |
| 3.707 | 1.96 | 3500 | 4.1035 | 1.8320 |
| 3.1373 | 2.24 | 4000 | 3.9557 | 1.7762 |
| 3.3152 | 2.52 | 4500 | 3.7737 | 1.7454 |
| 2.9501 | 2.8 | 5000 | 3.5264 | 1.7073 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
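As a usage illustration only (a minimal sketch that assumes the repository bundles the feature extractor and tokenizer; given the reported WER above 1.0, transcriptions from this checkpoint are likely to be poor):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="speech-seq2seq/wav2vec2-2-gpt2-medium",
)
print(asr("path/to/16khz_audio.wav"))  # placeholder path to a 16 kHz mono recording
```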
|
huggingtweets/sauce__world
|
huggingtweets
| 2022-02-11T22:14:53Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/sauce__world/1644617665459/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488960307305218049/nAFuBERK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">poolboy sauce world</div>
<div style="text-align: center; font-size: 14px;">@sauce__world</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from poolboy sauce world.
| Data | poolboy sauce world |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 323 |
| Short tweets | 513 |
| Tweets kept | 2356 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20dtxww4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sauce__world's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vh9fgsnx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vh9fgsnx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sauce__world')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ibombonato/swin-age-classifier
|
ibombonato
| 2022-02-11T21:42:47Z | 272 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: swin-age-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8174999952316284
---
# swin-age-classifier
Trained for 80 epochs.
Data from AIcrowd Blitz XIII, Age Prediction: https://www.aicrowd.com/challenges/ai-blitz-xiii/problems/age-prediction/
Notebook based on HuggingPics.
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
|
microsoft/codebert-base
|
microsoft
| 2022-02-11T19:59:44Z | 574,944 | 236 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"roberta",
"feature-extraction",
"arxiv:2002.08155",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
## CodeBERT-base
Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155).
### Training Data
The model is trained on bi-modal data (documents & code) of [CodeSearchNet](https://github.com/github/CodeSearchNet)
### Training Objective
This model is initialized with Roberta-base and trained with MLM+RTD objective (cf. the paper).
### Usage
Please see [the official repository](https://github.com/microsoft/CodeBERT) for scripts that support "code search" and "code-to-document generation".
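As a rough illustration (not an official example), the pretrained weights can also be loaded directly for feature extraction; the docstring/code pair below is a placeholder:
```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

# Bi-modal input: a natural-language query paired with a code snippet (placeholder example)
inputs = tokenizer("return the maximum value", "def f(a, b): return max(a, b)", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```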
### Reference
1. [CodeBERT trained with Masked LM objective](https://huggingface.co/microsoft/codebert-base-mlm) (suitable for code completion)
2. 🤗 [Hugging Face's CodeBERTa](https://huggingface.co/huggingface/CodeBERTa-small-v1) (small size, 6 layers)
### Citation
```bibtex
@misc{feng2020codebert,
title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
author={Zhangyin Feng and Daya Guo and Duyu Tang and Nan Duan and Xiaocheng Feng and Ming Gong and Linjun Shou and Bing Qin and Ting Liu and Daxin Jiang and Ming Zhou},
year={2020},
eprint={2002.08155},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|