| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-27 12:28:27) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 533 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-27 12:28:17) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
mrm8488/a2c-BreakoutNoFrameskip-v4
|
mrm8488
| 2022-02-07T20:45:11Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- ATARI
- Breakout
---
# A2C Breakout (No frame skip) v4 🤖🎮
This is a pre-trained model of an A2C agent playing Breakout (NoFrameskip-v4) using the [stable-baselines3](https://github.com/DLR-RM/stable-baselines3) library.
<video loop="" autoplay="" controls="" src="https://huggingface.co/mrm8488/a2c-BreakoutNoFrameskip-v4/resolve/main/output.mp4"></video>
### Usage (with Stable-baselines3)
Using this model becomes easy when you have stable-baselines3 and huggingface_sb3 installed:
```
pip install stable-baselines3
pip install huggingface_sb3
```
Then, you can use the model like this:
```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="mrm8488/a2c-BreakoutNoFrameskip-v4", filename="a2c-BreakoutNoFrameskip-v4.zip")
model = A2C.load(checkpoint)

# Evaluate the agent
eval_env = make_atari_env('BreakoutNoFrameskip-v4')
eval_env = VecFrameStack(eval_env, n_stack=4)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")

# Watch the agent play
obs = eval_env.reset()
for i in range(1000):
    action, _state = model.predict(obs)
    obs, reward, done, info = eval_env.step(action)
    eval_env.render()
    if done:
        obs = eval_env.reset()
eval_env.close()
```
### Evaluation Results
Mean reward: 242.40 +/- 98.97
|
LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7
|
LegolasTheElf
| 2022-02-07T19:16:59Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Wav2Vec2_xls_r_300m_hi_cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Wer: 0.6273
- Cer: 0.2093
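For reference, a minimal inference sketch (not part of the original card; the audio path is a placeholder) using the standard Transformers ASR pipeline:
```python
from transformers import pipeline

# Illustrative sketch only: load the fine-tuned checkpoint with the standard
# automatic-speech-recognition pipeline (expects 16 kHz mono audio).
asr = pipeline("automatic-speech-recognition", model="LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7")
print(asr("path/to/hindi_audio.wav"))  # placeholder audio path
```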
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.6969 | 9.52 | 400 | 3.3092 | 1.0 | 0.9800 |
| 1.7721 | 19.05 | 800 | 0.7769 | 0.7045 | 0.2367 |
| 0.6384 | 28.57 | 1200 | 0.6567 | 0.6273 | 0.2093 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
elozano/tweet_offensive_eval
|
elozano
| 2022-02-07T17:59:03Z | 10 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
datasets:
- tweet_eval
language: en
widget:
- text: "You're a complete idiot!"
example_title: "Offensive"
- text: "I am tired of studying for tomorrow's exam"
example_title: "Non-Offensive"
---
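The card ships only metadata and widget examples; a usage sketch (an assumption, not from the original card) with the Transformers text-classification pipeline, reusing the widget prompts:
```python
from transformers import pipeline

# Assumed usage: RoBERTa fine-tuned on tweet_eval (offensive) behind the
# standard text-classification pipeline.
classifier = pipeline("text-classification", model="elozano/tweet_offensive_eval")
print(classifier("You're a complete idiot!"))
print(classifier("I am tired of studying for tomorrow's exam"))
```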
|
sukhendrasingh/finetuning-sentiment-model-3000-samples
|
sukhendrasingh
| 2022-02-07T17:20:03Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.879746835443038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3323
- Accuracy: 0.8733
- F1: 0.8797
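For reference, a hedged usage sketch (not part of the auto-generated card), assuming the checkpoint exposes the standard sequence-classification head saved by the Trainer; the example sentence is made up:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch only: DistilBERT sentiment classifier fine-tuned on 3000 IMDb samples.
repo = "sukhendrasingh/finetuning-sentiment-model-3000-samples"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A surprisingly heartfelt movie with great performances.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the sentiment labels
```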
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/cu_coquin
|
huggingtweets
| 2022-02-07T16:16:12Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/cu_coquin/1644250567283/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442129295477035013/15LSPrJo_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Manu’</div>
<div style="text-align: center; font-size: 14px;">@cu_coquin</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Manu’.
| Data | Manu’ |
| --- | --- |
| Tweets downloaded | 1982 |
| Retweets | 63 |
| Short tweets | 291 |
| Tweets kept | 1628 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jyazmuh8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cu_coquin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29a5jk2r) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29a5jk2r/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cu_coquin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
shahukareem/wav2vec2-xls-r-300m-dv
|
shahukareem
| 2022-02-07T15:55:39Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 24.72
- name: Test CER
type: cer
value: 4.17
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Wer: 0.2451
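For reference, a minimal inference sketch (not part of the original card; `sample.wav` is a placeholder), assuming the repository ships a standard `Wav2Vec2Processor`:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Illustrative sketch only; the model expects 16 kHz mono input.
repo = "shahukareem/wav2vec2-xls-r-300m-dv"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

speech, sample_rate = torchaudio.load("sample.wav")  # placeholder audio file
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0).numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```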
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.9623 | 0.66 | 400 | 3.3010 | 1.0 |
| 3.2238 | 1.33 | 800 | 2.8950 | 1.0 |
| 1.1988 | 1.99 | 1200 | 0.5277 | 0.6681 |
| 0.6084 | 2.65 | 1600 | 0.4113 | 0.5831 |
| 0.4973 | 3.32 | 2000 | 0.3538 | 0.5333 |
| 0.4476 | 3.98 | 2400 | 0.3201 | 0.5081 |
| 0.3999 | 4.64 | 2800 | 0.2917 | 0.4759 |
| 0.3779 | 5.31 | 3200 | 0.2788 | 0.4672 |
| 0.3457 | 5.97 | 3600 | 0.2667 | 0.4557 |
| 0.3222 | 6.63 | 4000 | 0.2549 | 0.4452 |
| 0.3129 | 7.3 | 4400 | 0.2491 | 0.4266 |
| 0.2927 | 7.96 | 4800 | 0.2488 | 0.4246 |
| 0.2786 | 8.62 | 5200 | 0.2429 | 0.4145 |
| 0.2756 | 9.29 | 5600 | 0.2453 | 0.4150 |
| 0.258 | 9.95 | 6000 | 0.2282 | 0.4109 |
| 0.251 | 10.61 | 6400 | 0.2307 | 0.4012 |
| 0.2397 | 11.28 | 6800 | 0.2275 | 0.4 |
| 0.2312 | 11.94 | 7200 | 0.2244 | 0.3889 |
| 0.2323 | 12.6 | 7600 | 0.2247 | 0.3983 |
| 0.216 | 13.27 | 8000 | 0.2301 | 0.3863 |
| 0.2169 | 13.93 | 8400 | 0.2224 | 0.3782 |
| 0.2089 | 14.59 | 8800 | 0.2276 | 0.3771 |
| 0.2042 | 15.26 | 9200 | 0.2286 | 0.3784 |
| 0.1953 | 15.92 | 9600 | 0.2235 | 0.3822 |
| 0.1876 | 16.58 | 10000 | 0.2267 | 0.3674 |
| 0.186 | 17.25 | 10400 | 0.2295 | 0.3676 |
| 0.1847 | 17.91 | 10800 | 0.2244 | 0.3608 |
| 0.178 | 18.57 | 11200 | 0.2229 | 0.3526 |
| 0.1751 | 19.24 | 11600 | 0.2219 | 0.3483 |
| 0.17 | 19.9 | 12000 | 0.2241 | 0.3503 |
| 0.1641 | 20.56 | 12400 | 0.2187 | 0.3403 |
| 0.1629 | 21.23 | 12800 | 0.2135 | 0.3433 |
| 0.1568 | 21.89 | 13200 | 0.2117 | 0.3358 |
| 0.1585 | 22.55 | 13600 | 0.2151 | 0.3332 |
| 0.1512 | 23.22 | 14000 | 0.2097 | 0.3344 |
| 0.1427 | 23.88 | 14400 | 0.2119 | 0.3255 |
| 0.1458 | 24.54 | 14800 | 0.2209 | 0.3213 |
| 0.1413 | 25.21 | 15200 | 0.2228 | 0.3202 |
| 0.1363 | 25.87 | 15600 | 0.2071 | 0.3207 |
| 0.1302 | 26.53 | 16000 | 0.2094 | 0.3138 |
| 0.1283 | 27.2 | 16400 | 0.2193 | 0.3132 |
| 0.1278 | 27.86 | 16800 | 0.2197 | 0.3103 |
| 0.1271 | 28.52 | 17200 | 0.2133 | 0.3009 |
| 0.1243 | 29.19 | 17600 | 0.2202 | 0.3026 |
| 0.1182 | 29.85 | 18000 | 0.2092 | 0.3046 |
| 0.1171 | 30.51 | 18400 | 0.2142 | 0.2947 |
| 0.1156 | 31.18 | 18800 | 0.2219 | 0.2926 |
| 0.1129 | 31.84 | 19200 | 0.2194 | 0.2848 |
| 0.1099 | 32.5 | 19600 | 0.2218 | 0.2869 |
| 0.1045 | 33.17 | 20000 | 0.2183 | 0.2803 |
| 0.1057 | 33.83 | 20400 | 0.2242 | 0.2896 |
| 0.1056 | 34.49 | 20800 | 0.2189 | 0.2838 |
| 0.1039 | 35.16 | 21200 | 0.2256 | 0.2819 |
| 0.1007 | 35.82 | 21600 | 0.2196 | 0.2743 |
| 0.1012 | 36.48 | 22000 | 0.2218 | 0.2752 |
| 0.098 | 37.15 | 22400 | 0.2181 | 0.2721 |
| 0.0963 | 37.81 | 22800 | 0.2162 | 0.2691 |
| 0.0943 | 38.47 | 23200 | 0.2148 | 0.2686 |
| 0.0959 | 39.14 | 23600 | 0.2194 | 0.2658 |
| 0.0904 | 39.8 | 24000 | 0.2170 | 0.2641 |
| 0.0898 | 40.46 | 24400 | 0.2129 | 0.2585 |
| 0.0886 | 41.13 | 24800 | 0.2199 | 0.2606 |
| 0.088 | 41.79 | 25200 | 0.2155 | 0.2595 |
| 0.0863 | 42.45 | 25600 | 0.2169 | 0.2564 |
| 0.0876 | 43.12 | 26000 | 0.2178 | 0.2529 |
| 0.0827 | 43.78 | 26400 | 0.2171 | 0.2559 |
| 0.087 | 44.44 | 26800 | 0.2192 | 0.2530 |
| 0.0818 | 45.11 | 27200 | 0.2180 | 0.2496 |
| 0.0811 | 45.77 | 27600 | 0.2207 | 0.2502 |
| 0.0828 | 46.43 | 28000 | 0.2186 | 0.2502 |
| 0.0796 | 47.1 | 28400 | 0.2203 | 0.2468 |
| 0.0804 | 47.76 | 28800 | 0.2201 | 0.2453 |
| 0.0791 | 48.42 | 29200 | 0.2204 | 0.2477 |
| 0.0777 | 49.09 | 29600 | 0.2197 | 0.2466 |
| 0.0775 | 49.75 | 30000 | 0.2206 | 0.2451 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ahmedrachid/FinancialBERT
|
ahmedrachid
| 2022-02-07T15:00:03Z | 178 | 27 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
widget:
- text: Tesla remains one of the highest [MASK] stocks on the market. Meanwhile, Aurora Innovation is a pre-revenue upstart that shows promise.
- text: Asian stocks [MASK] from a one-year low on Wednesday as U.S. share futures and oil recovered from the previous day's selloff, but uncertainty over the impact of the Omicron
- text: U.S. stocks were set to rise on Monday, led by [MASK] in Apple which neared $3 trillion in market capitalization, while investors braced for a Federal Reserve meeting later this week.
tags:
- fill-mask
---
**FinancialBERT** is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from it without needing the significant computational resources required to train the model.
The model was trained on a large corpus of financial texts:
- *TRC2-financial*: 1.8M news articles that were published by Reuters between 2008 and 2010.
- *Bloomberg News*: 400,000 articles between 2006 and 2013.
- *Corporate Reports*: 192,000 transcripts (10-K & 10-Q)
- *Earning Calls*: 42,156 documents.
More details on `FinancialBERT` can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
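The card does not include code; a minimal fill-mask sketch (an assumption, not from the original card, reusing one of the widget prompts above):
```python
from transformers import pipeline

# Sketch only: query the pre-trained masked-language-modelling head.
fill_mask = pipeline("fill-mask", model="ahmedrachid/FinancialBERT")
for prediction in fill_mask("Tesla remains one of the highest [MASK] stocks on the market."):
    print(prediction["token_str"], prediction["score"])
```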
> Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
|
ahmedrachid/FinancialBERT-Sentiment-Analysis
|
ahmedrachid
| 2022-02-07T14:58:57Z | 45,019 | 86 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"en",
"dataset:financial_phrasebank",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- financial-sentiment-analysis
- sentiment-analysis
datasets:
- financial_phrasebank
widget:
- text: Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.
- text: Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.
- text: Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.
---
### FinancialBERT for Sentiment Analysis
[*FinancialBERT*](https://huggingface.co/ahmedrachid/FinancialBERT) is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from this model without needing the significant computational resources required to train the model.
The model was fine-tuned for the sentiment analysis task on the _Financial PhraseBank_ dataset. Experiments show that this model outperforms general BERT models and other financial domain-specific models.
More details on `FinancialBERT`'s pre-training process can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
### Training data
The FinancialBERT model was fine-tuned on [Financial PhraseBank](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10), a dataset consisting of 4,840 financial news sentences categorised by sentiment (negative, neutral, positive).
### Fine-tuning hyper-parameters
- learning_rate = 2e-5
- batch_size = 32
- max_seq_length = 512
- num_train_epochs = 5
### Evaluation metrics
The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.
| sentiment | precision | recall | f1-score | support |
| ------------- |:-------------:|:-------------:|:-------------:| -----:|
| negative | 0.96 | 0.97 | 0.97 | 58 |
| neutral | 0.98 | 0.99 | 0.98 | 279 |
| positive | 0.98 | 0.97 | 0.97 | 148 |
| macro avg | 0.97 | 0.98 | 0.98 | 485 |
| weighted avg | 0.98 | 0.98 | 0.98 | 485 |
### How to use
The model can be used with the Transformers pipeline for sentiment analysis.
```python
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline
model = BertForSequenceClassification.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis",num_labels=3)
tokenizer = BertTokenizer.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
sentences = ["Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.",
"Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.",
"Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.",
]
results = nlp(sentences)
print(results)
[{'label': 'positive', 'score': 0.9998133778572083},
{'label': 'neutral', 'score': 0.9997822642326355},
{'label': 'negative', 'score': 0.9877365231513977}]
```
> Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
|
lgris/base_10k_8khz_pt
|
lgris
| 2022-02-07T11:53:39Z | 452 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
license: apache-2.0
---
# Wav2vec 2.0 for Portuguese at 8 kHz
This is a fine-tuned version of [facebook/wav2vec2-base-10k-voxpopuli](https://huggingface.co/facebook/wav2vec2-base-10k-voxpopuli).
Datasets used to fine-tune the model:
- **CETUC**: contains approximately 145 hours of Brazilian Portuguese speech, distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the CETEN-Folha corpus.
- **Common Voice 7.0**: a project proposed by the Mozilla Foundation with the goal of creating a wide open dataset in different languages. In this project, volunteers donate and validate speech using the official site.
- **Lapsbm**: "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audio was recorded at 22.05 kHz without environmental control.
- **Multilingual Librispeech (MLS)**: a massive dataset available in many languages. MLS is based on public-domain audiobook recordings such as LibriVox. The dataset contains a total of 6k hours of transcribed data in many languages. The Portuguese set used in this work (mostly the Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- **Multilingual TEDx**: a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly the Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- **Sidney (SID)**: contains 5,777 utterances recorded by 72 speakers (20 women) aged 17 to 59, with metadata such as place of birth, age, gender, education, and occupation.
- **VoxForge**: a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16 kHz to 44.1 kHz.
- **VoxPopuli**
|
victen/distilbert-base-uncased-finetuned-emotion
|
victen
| 2022-02-07T10:42:22Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9236951195245434
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2265
- Accuracy: 0.9235
- F1: 0.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8243 | 1.0 | 250 | 0.3199 | 0.906 | 0.9025 |
| 0.2484 | 2.0 | 500 | 0.2265 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc_inference_only
|
deepdoctection
| 2022-02-07T10:33:04Z | 0 | 0 | null |
[
"Tensorflow",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- Tensorflow
license: apache-2.0
datasets:
- Pubtabnet
---
# Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50, trained on PubTabNet for semantic segmentation of tables
The model and its training code have been taken mainly from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained to detect rows and columns of tables. As row and column bounding boxes are not a priori part of the annotations, they are
calculated from the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. For general instructions, follow the [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## This is an inference model only
To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. It can therefore not be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc).
## How this model was trained
To recreate the training run on the **deep**doctection framework, run:
```python
import os

from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn

pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM": "row_col"})
pubtabnet.dataflow.categories.filter_categories(categories=["ROW", "COLUMN"])

path_config_yaml = os.path.join(get_configs_dir_path(), "tp/rows/conf_frcnn_rows.yaml")
path_weights = ""

dataset_train = pubtabnet
config_overwrite = ["TRAIN.STEPS_PER_EPOCH=500", "TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50"]
build_train_config = ["max_datapoints=500000", "rows_and_cols=True"]

dataset_val = pubtabnet
build_val_config = ["max_datapoints=2000", "rows_and_cols=True"]

coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50, 200, 600],
                       area_range=[[0, 1000000], [0, 200], [200, 800], [800, 1000000]])

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,
                  config_overwrite=config_overwrite,
                  log_dir="/path/to/dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService")
```
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc
|
deepdoctection
| 2022-02-07T10:24:03Z | 0 | 0 | null |
[
"Tensorflow",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- Tensorflow
license: apache-2.0
datasets:
- Pubtabnet
---
# Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50, trained on PubTabNet for semantic segmentation of tables
The model and its training code have been taken mainly from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained to detect rows and columns of tables. As row and column bounding boxes are not a priori part of the annotations, they are
calculated from the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. For general instructions, follow the [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## How this model was trained
To recreate the training run on the **deep**doctection framework, run:
```python
import os

from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn

pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM": "row_col"})
pubtabnet.dataflow.categories.filter_categories(categories=["ROW", "COLUMN"])

path_config_yaml = os.path.join(get_configs_dir_path(), "tp/rows/conf_frcnn_rows.yaml")
path_weights = ""

dataset_train = pubtabnet
config_overwrite = ["TRAIN.STEPS_PER_EPOCH=500", "TRAIN.STARTING_EPOCH=1", "TRAIN.CHECKPOINT_PERIOD=50"]
build_train_config = ["max_datapoints=500000", "rows_and_cols=True"]

dataset_val = pubtabnet
build_val_config = ["max_datapoints=2000", "rows_and_cols=True"]

coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50, 200, 600],
                       area_range=[[0, 1000000], [0, 200], [200, 800], [800, 1000000]])

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,
                  config_overwrite=config_overwrite,
                  log_dir="/path/to/dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService")
```
## How to fine-tune this model
To fine-tune this model, please check the [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
|
willemjan/eng
|
willemjan
| 2022-02-07T09:23:20Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:cc-by-nc-sa-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-sa-3.0
---
|
Llamacha/QuBERTa
|
Llamacha
| 2022-02-07T09:14:51Z | 52 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Llamacha",
"qu",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- qu
tags:
- Llamacha
---
# QuBERTa
QuBERTa is a RoBERTa-based language model for Quechua. Our language model was pre-trained on 5M tokens of Southern Quechua (Collao and Chanka).
The model uses a byte-level BPE tokenizer with a vocabulary of 52,000 subword tokens.
## Usage
Once the weights and the tokenizer have been downloaded, they must be placed together in a single folder, in this case `QuBERTa`.
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="./QuBERTa",
tokenizer="./QuBERTa"
)
```
Here is a test run; the model is still being improved.
```python
fill_mask("allinllachu <mask> allinlla huk wasipita.")
```
```
[{'score': 0.23992203176021576,
  'sequence': 'allinllachu nisqaqa allinlla huk wasipita.',
  'token': 334,
  'token_str': ' nisqaqa'},
 {'score': 0.061005301773548126,
  'sequence': 'allinllachu, allinlla huk wasipita.',
  'token': 16,
  'token_str': ','},
 {'score': 0.028720015659928322,
  'sequence': "allinllachu' allinlla huk wasipita.",
  'token': 11,
  'token_str': "'"},
 {'score': 0.012927944771945477,
  'sequence': 'allinllachu kay allinlla huk wasipita.',
  'token': 377,
  'token_str': ' kay'},
 {'score': 0.01230092253535986,
  'sequence': 'allinllachu. allinlla huk wasipita.',
  'token': 18,
  'token_str': '.'}]
```
|
willemjan/indo1
|
willemjan
| 2022-02-07T09:14:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-3.0
---
|
willemjan/nl2
|
willemjan
| 2022-02-07T08:52:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-3.0
---
|
willemjan/nl1
|
willemjan
| 2022-02-07T08:44:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:cc-by-nc-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: cc-by-nc-3.0
---
|
aidj/distilbert-base-uncased-finetuned-ner
|
aidj
| 2022-02-07T07:19:58Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9260322366968425
- name: Recall
type: recall
value: 0.9383599955252265
- name: F1
type: f1
value: 0.9321553592265377
- name: Accuracy
type: accuracy
value: 0.9834146186474335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9260
- Recall: 0.9384
- F1: 0.9322
- Accuracy: 0.9834
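For reference, a hedged NER usage sketch (not part of the auto-generated card; the example sentence is made up), grouping sub-word predictions into whole entities:
```python
from transformers import pipeline

# Sketch only: token-classification pipeline over the CoNLL-2003 label set.
ner = pipeline("token-classification",
               model="aidj/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```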
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2545 | 1.0 | 878 | 0.0711 | 0.9096 | 0.9214 | 0.9154 | 0.9800 |
| 0.0555 | 2.0 | 1756 | 0.0593 | 0.9185 | 0.9356 | 0.9270 | 0.9827 |
| 0.0297 | 3.0 | 2634 | 0.0607 | 0.9260 | 0.9384 | 0.9322 | 0.9834 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
bespin-global/klue-sentence-roberta-base
|
bespin-global
| 2022-02-07T07:14:05Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:klue",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- klue
license: cc-by-nc-4.0
---
# bespin-global/klue-sentence-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bespin-global/klue-sentence-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bespin-global/klue-sentence-roberta-base')
model = AutoModel.from_pretrained('bespin-global/klue-sentence-roberta-base')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bespin-global/klue-sentence-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 365 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 219,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
[Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/)
|
leeyujin/distilbert-base-uncased-finetuned-cola
|
leeyujin
| 2022-02-07T07:08:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5062132225102124
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5608
- Matthews Correlation: 0.5062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4851 | 0.4301 |
| No log | 2.0 | 268 | 0.4619 | 0.4891 |
| No log | 3.0 | 402 | 0.5447 | 0.4965 |
| 0.3828 | 4.0 | 536 | 0.5608 | 0.5062 |
| 0.3828 | 5.0 | 670 | 0.5702 | 0.5029 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
histinct7002/distilbert-base-uncased-finetuned-cola
|
histinct7002
| 2022-02-07T06:18:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5290966132843783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4600
- Matthews Correlation: 0.5291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.4715 | 0.4678 |
| 0.3493 | 2.0 | 1070 | 0.4600 | 0.5291 |
| 0.2393 | 3.0 | 1605 | 0.6018 | 0.5219 |
| 0.1714 | 4.0 | 2140 | 0.7228 | 0.5245 |
| 0.1289 | 5.0 | 2675 | 0.8154 | 0.5279 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.5.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
GleamEyeBeast/Mandarin
|
GleamEyeBeast
| 2022-02-07T04:25:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: Mandarin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mandarin
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ghofrani/common6
|
ghofrani
| 2022-02-07T02:29:26Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"fa",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- fa
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: common6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# common6
This model is a fine-tuned version of [common6/checkpoint-3500](https://huggingface.co/common6/checkpoint-3500) on the COMMON_VOICE - FA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3706
- Wer: 0.3421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0344 | 10.0 | 500 | 0.4043 | 0.4511 |
| 0.9651 | 20.0 | 1000 | 0.3793 | 0.4159 |
| 0.9125 | 30.0 | 1500 | 0.3756 | 0.4046 |
| 0.8831 | 40.0 | 2000 | 0.3650 | 0.3876 |
| 0.8399 | 50.0 | 2500 | 0.3605 | 0.3772 |
| 0.819 | 60.0 | 3000 | 0.3622 | 0.3714 |
| 0.8029 | 70.0 | 3500 | 0.3561 | 0.3664 |
| 0.8104 | 80.0 | 4000 | 0.3595 | 0.3660 |
| 0.8118 | 90.0 | 4500 | 0.3460 | 0.3592 |
| 0.7831 | 100.0 | 5000 | 0.3566 | 0.3593 |
| 0.744 | 110.0 | 5500 | 0.3578 | 0.3535 |
| 0.7388 | 120.0 | 6000 | 0.3538 | 0.3520 |
| 0.714 | 130.0 | 6500 | 0.3682 | 0.3506 |
| 0.7291 | 140.0 | 7000 | 0.3625 | 0.3505 |
| 0.697 | 150.0 | 7500 | 0.3619 | 0.3479 |
| 0.6811 | 160.0 | 8000 | 0.3631 | 0.3440 |
| 0.6841 | 170.0 | 8500 | 0.3672 | 0.3460 |
| 0.6616 | 180.0 | 9000 | 0.3677 | 0.3410 |
| 0.6471 | 190.0 | 9500 | 0.3707 | 0.3420 |
| 0.6759 | 200.0 | 10000 | 0.3706 | 0.3421 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3
|
lvargas/distilbert-base-uncased-finetuned-emotion2
|
lvargas
| 2022-02-07T01:36:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.903
- name: F1
type: f1
value: 0.9003235459489749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3623
- Accuracy: 0.903
- F1: 0.9003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5960 | 0.8025 | 0.7750 |
| 0.7853 | 2.0 | 250 | 0.3623 | 0.903 | 0.9003 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
BigSalmon/Points2
|
BigSalmon
| 2022-02-07T00:27:54Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Converting Points or Headlines to Paragraphs
Example Prompts:
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future.
###
-
```
```
Essay Intro (Sega Centers Classics): unyielding in its insistence on consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. this is a task that not even the most devoted fan could have foreseen.
***
Essay Intro (Blizzard Shows Video Games Are An Art): universally adored, video games have come to be revered not only as interactive diversions, but as artworks. a firm believer in this doctrine, blizzard actively works to further the craft of storytelling in their respective titles.
***
Essay Intro (What Happened To Linux): chancing upon a linux user is a rare occurrence in the present day. once a mainstay, the brand has come to only be seen in the hands of the most ardent of its followers.
```
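A hedged generation sketch (not part of the original card) showing how the "###"-delimited bullet-point prompt format above might be passed to the model; the exact prompt and generation settings are illustrative assumptions:
```python
from transformers import pipeline

# Sketch only: reuse the bullet-point prompt format shown above.
generator = pipeline("text-generation", model="BigSalmon/Points2")
prompt = (
    "###\n"
    "- declining viewership facing the nba.\n"
    "- does not have to be this way.\n"
    "Text:"
)
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```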
|
BigSalmon/Points
|
BigSalmon
| 2022-02-07T00:27:49Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Converting Points to Paragraphs
Example Prompts:
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future.
###
-
```
|
StevenLimcorn/wav2vec2-xls-r-300m-zh-TW
|
StevenLimcorn
| 2022-02-06T21:57:14Z | 26 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- zh-TW
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-TW dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1786
- Wer: 0.8594
- Cer: 0.2964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 64.6189 | 2.51 | 500 | 63.8077 | 1.0 | 1.0 |
| 8.0561 | 5.03 | 1000 | 6.8014 | 1.0 | 1.0 |
| 6.0427 | 7.54 | 1500 | 6.0745 | 1.0 | 1.0 |
| 5.9357 | 10.05 | 2000 | 5.8682 | 1.0 | 1.0 |
| 5.0489 | 12.56 | 2500 | 4.4032 | 0.9990 | 0.7750 |
| 4.6184 | 15.08 | 3000 | 3.8383 | 0.9983 | 0.6768 |
| 4.365 | 17.59 | 3500 | 3.4633 | 0.9959 | 0.6299 |
| 4.1026 | 20.1 | 4000 | 3.0732 | 0.9902 | 0.5814 |
| 3.8655 | 22.61 | 4500 | 2.7638 | 0.9868 | 0.5465 |
| 3.6991 | 25.13 | 5000 | 2.4759 | 0.9811 | 0.5088 |
| 3.4894 | 27.64 | 5500 | 2.2937 | 0.9746 | 0.4852 |
| 3.3983 | 30.15 | 6000 | 2.1684 | 0.9733 | 0.4674 |
| 3.2736 | 32.66 | 6500 | 2.0372 | 0.9659 | 0.4458 |
| 3.1884 | 35.18 | 7000 | 1.9267 | 0.9648 | 0.4329 |
| 3.1248 | 37.69 | 7500 | 1.8408 | 0.9591 | 0.4217 |
| 3.0381 | 40.2 | 8000 | 1.7531 | 0.9503 | 0.4074 |
| 2.9515 | 42.71 | 8500 | 1.6880 | 0.9459 | 0.3967 |
| 2.8704 | 45.23 | 9000 | 1.6264 | 0.9378 | 0.3884 |
| 2.8128 | 47.74 | 9500 | 1.5621 | 0.9341 | 0.3782 |
| 2.7386 | 50.25 | 10000 | 1.5011 | 0.9243 | 0.3664 |
| 2.6646 | 52.76 | 10500 | 1.4608 | 0.9192 | 0.3575 |
| 2.6072 | 55.28 | 11000 | 1.4251 | 0.9148 | 0.3501 |
| 2.569 | 57.79 | 11500 | 1.3837 | 0.9060 | 0.3462 |
| 2.5091 | 60.3 | 12000 | 1.3589 | 0.9070 | 0.3392 |
| 2.4588 | 62.81 | 12500 | 1.3261 | 0.8966 | 0.3284 |
| 2.4083 | 65.33 | 13000 | 1.3052 | 0.8982 | 0.3265 |
| 2.3787 | 67.84 | 13500 | 1.2997 | 0.8908 | 0.3243 |
| 2.3457 | 70.35 | 14000 | 1.2778 | 0.8898 | 0.3187 |
| 2.3099 | 72.86 | 14500 | 1.2661 | 0.8830 | 0.3172 |
| 2.2559 | 75.38 | 15000 | 1.2475 | 0.8851 | 0.3143 |
| 2.2264 | 77.89 | 15500 | 1.2319 | 0.8739 | 0.3085 |
| 2.196 | 80.4 | 16000 | 1.2218 | 0.8722 | 0.3049 |
| 2.1613 | 82.91 | 16500 | 1.2093 | 0.8719 | 0.3051 |
| 2.1455 | 85.43 | 17000 | 1.2055 | 0.8624 | 0.3005 |
| 2.1193 | 87.94 | 17500 | 1.1975 | 0.8600 | 0.2982 |
| 2.0911 | 90.45 | 18000 | 1.1960 | 0.8648 | 0.3003 |
| 2.0884 | 92.96 | 18500 | 1.1871 | 0.8638 | 0.2971 |
| 2.0766 | 95.48 | 19000 | 1.1814 | 0.8617 | 0.2967 |
| 2.0735 | 97.99 | 19500 | 1.1801 | 0.8621 | 0.2969 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
keras-io/char-lstm-seq2seq
|
keras-io
| 2022-02-06T18:12:35Z | 9 | 1 |
tf-keras
|
[
"tf-keras",
"seq2seq",
"translation",
"en",
"fr",
"license:cc0-1.0",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- en
- fr
tags:
- seq2seq
- translation
license:
- cc0-1.0
---
## Keras Implementation of Character-level recurrent sequence-to-sequence model
This repo contains the model and the notebook [to this Keras example on Character-level recurrent sequence-to-sequence model](https://keras.io/examples/nlp/lstm_seq2seq/).
Full credits to: [fchollet](https://twitter.com/fchollet)
## Background Information
This example demonstrates how to implement a basic character-level recurrent sequence-to-sequence model. We apply it to translating short English sentences into short French sentences, character-by-character. Note that it is fairly unusual to do character-level machine translation, as word-level models are more common in this domain.
## Limitations
It works on text of length <= 15 characters
## Parameters needed for using the model
```python
latent_dim = 256
num_encoder_tokens = 71
max_encoder_seq_length = 15
num_decoder_tokens = 92
max_decoder_seq_length = 59
```
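For orientation, a minimal sketch of the training-time encoder-decoder that these parameters describe (assuming one-hot character inputs, as in the original keras.io example; this is a sketch, not the exported model itself):
```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 256           # hidden size of the LSTMs
num_encoder_tokens = 71    # English character vocabulary
num_decoder_tokens = 92    # French character vocabulary

# Encoder: reads one-hot encoded English characters and keeps only its final states.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: generates French characters conditioned on the encoder states.
decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
decoder_lstm = layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(num_decoder_tokens, activation="softmax")(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
```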
|
dark-knight/wav2vec2-base-timit-demo-colab
|
dark-knight
| 2022-02-06T16:25:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
asalics/distilbert-base-uncased-finetuned-emotion
|
asalics
| 2022-02-06T14:29:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9244145121183605
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.924
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7914 | 1.0 | 250 | 0.3032 | 0.905 | 0.9030 |
| 0.2379 | 2.0 | 500 | 0.2207 | 0.924 | 0.9244 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Jeevesh8/feather_berts1
|
Jeevesh8
| 2022-02-06T04:52:40Z | 0 | 0 | null |
[
"arxiv:1911.02969",
"region:us"
] | null | 2022-03-02T23:29:04Z |
Second 50 [Feather BERT-s](https://arxiv.org/abs/1911.02969) compressed in groups of 10.
Clone this repository, decompress the compressed folders, and pass the path of the Feather BERT you want to use to ``.from_pretrained()``.
For downloading first 50 Feather BERT-s, see [here](https://huggingface.co/Jeevesh8/feather_berts/).
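A minimal sketch of that workflow (the folder name below is hypothetical; substitute the path of whichever decompressed Feather BERT you want to load):
```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical path to one decompressed Feather BERT inside the cloned repository.
model_path = "./feather_berts1/feather_bert_60"

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained(model_path)
```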
|
am-shb/bert-base-multilingual-uncased-finetuned
|
am-shb
| 2022-02-06T00:05:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: '57463134'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 57463134
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
DrishtiSharma/wav2vec2-xls-r-pa-IN-a1
|
DrishtiSharma
| 2022-02-05T21:58:25Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- pa-IN
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - PA-IN dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1508
- Wer: 0.4908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5841 | 9.26 | 500 | 3.2514 | 0.9941 |
| 0.3992 | 18.52 | 1000 | 0.8790 | 0.6107 |
| 0.2409 | 27.78 | 1500 | 1.0012 | 0.6366 |
| 0.1447 | 37.04 | 2000 | 1.0167 | 0.6276 |
| 0.1109 | 46.3 | 2500 | 1.0638 | 0.5653 |
| 0.0797 | 55.56 | 3000 | 1.1447 | 0.5715 |
| 0.0636 | 64.81 | 3500 | 1.1503 | 0.5316 |
| 0.0466 | 74.07 | 4000 | 1.2227 | 0.5386 |
| 0.0372 | 83.33 | 4500 | 1.1214 | 0.5225 |
| 0.0239 | 92.59 | 5000 | 1.1375 | 0.4998 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
pritamdeka/PubMedBert-fulltext-cord19
|
pritamdeka
| 2022-02-05T20:56:37Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:pritamdeka/cord-19-fulltext",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- pritamdeka/cord-19-fulltext
metrics:
- accuracy
model-index:
- name: pubmedbert-fulltext-cord19
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: pritamdeka/cord-19-fulltext
type: pritamdeka/cord-19-fulltext
args: fulltext
metrics:
- name: Accuracy
type: accuracy
value: 0.7175316733550737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-fulltext-cord19
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the pritamdeka/cord-19-fulltext dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2667
- Accuracy: 0.7175
## Model description
Due to GPU limitations, the model was trained on at most 300K training samples and evaluated on 25K samples.
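A minimal usage sketch (the example sentence is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pritamdeka/PubMedBert-fulltext-cord19")

# [MASK] is the BERT mask token used by this model; the sentence is a made-up example.
for pred in fill_mask("Remdesivir has been evaluated for the treatment of [MASK]."):
    print(pred["token_str"], round(pred["score"], 4))
```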
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7985 | 0.27 | 5000 | 1.2710 | 0.7176 |
| 1.7542 | 0.53 | 10000 | 1.3359 | 0.7070 |
| 1.7462 | 0.8 | 15000 | 1.3489 | 0.7034 |
| 1.8371 | 1.07 | 20000 | 1.4361 | 0.6891 |
| 1.7102 | 1.33 | 25000 | 1.3502 | 0.7039 |
| 1.6596 | 1.6 | 30000 | 1.3341 | 0.7065 |
| 1.6265 | 1.87 | 35000 | 1.3228 | 0.7087 |
| 1.605 | 2.13 | 40000 | 1.3079 | 0.7099 |
| 1.5731 | 2.4 | 45000 | 1.2986 | 0.7121 |
| 1.5602 | 2.67 | 50000 | 1.2929 | 0.7136 |
| 1.5447 | 2.93 | 55000 | 1.2875 | 0.7143 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/bouncemanautumn
|
huggingtweets
| 2022-02-05T20:35:09Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/bouncemanautumn/1644093304436/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1466500150759763979/_SP07dAh_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">autumn wants to hold ty’s hand</div>
<div style="text-align: center; font-size: 14px;">@bouncemanautumn</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from autumn wants to hold ty’s hand.
| Data | autumn wants to hold ty’s hand |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 195 |
| Short tweets | 434 |
| Tweets kept | 2616 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/16mq5may/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bouncemanautumn's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vlqrfex) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vlqrfex/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bouncemanautumn')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sunitha/Trial_3_Results
|
sunitha
| 2022-02-05T19:27:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: Trial_3_Results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial_3_Results
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
keras-io/ctc_asr
|
keras-io
| 2022-02-05T17:54:45Z | 8 | 1 |
tf-keras
|
[
"tf-keras",
"speech recognition",
"ctc",
"license:cc0-1.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- speech recognition
- ctc
dataset:
- LJSpeech dataset
license: cc0-1.0
---
## Automatic Speech Recognition using CTC model on the 🤗Hub!
Full credits go to Mohamed Reda Bouadjenek and Ngoc Dung Huynh.
This repository contains the model from [this notebook on Automatic Speech Recognition using CTC](https://keras.io/examples/audio/ctc_asr/).
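A minimal loading sketch, assuming the standard `huggingface_hub` Keras helper works for this repository:
```python
from huggingface_hub import from_pretrained_keras

# Loads the exported Keras model; inputs are spectrograms and the outputs are CTC logits
# that still need greedy/beam-search decoding, as in the original keras.io example.
model = from_pretrained_keras("keras-io/ctc_asr")
model.summary()
```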
|
transformersbook/xlm-roberta-base-finetuned-panx-de-fr
|
transformersbook
| 2022-02-05T17:08:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1616
- F1: 0.8590
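A short usage sketch (the German sentence is an illustrative example):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte gestern Paris."))
```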
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2855 | 1.0 | 715 | 0.1944 | 0.8178 |
| 0.1485 | 2.0 | 1430 | 0.1679 | 0.8469 |
| 0.0966 | 3.0 | 2145 | 0.1616 | 0.8590 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
transformersbook/xlm-roberta-base-finetuned-panx-de
|
transformersbook
| 2022-02-05T17:07:41Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8645910410381922
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1388
- F1: 0.8646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2652 | 1.0 | 525 | 0.1602 | 0.8230 |
| 0.1314 | 2.0 | 1050 | 0.1372 | 0.8527 |
| 0.0806 | 3.0 | 1575 | 0.1388 | 0.8646 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
transformersbook/xlm-roberta-base-finetuned-panx-it
|
transformersbook
| 2022-02-05T17:07:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8215158924205379
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.2445
- F1: 0.8215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7594 | 1.0 | 70 | 0.3402 | 0.7467 |
| 0.2942 | 2.0 | 140 | 0.2555 | 0.7971 |
| 0.1814 | 3.0 | 210 | 0.2445 | 0.8215 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
transformersbook/pegasus-samsum
|
transformersbook
| 2022-02-05T17:05:28Z | 75,124 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum-test
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. The model is trained in Chapter 6: Summarization in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/06_summarization.ipynb).
It achieves the following results on the evaluation set:
- Loss: 1.4875
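A minimal usage sketch (the dialogue below is a made-up example in the SAMSum style):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="transformersbook/pegasus-samsum")

dialogue = """Hannah: Hey, do you have Betty's number?
Amanda: Let me check... sorry, I can't find it.
Hannah: Ok, thanks anyway!"""

print(summarizer(dialogue)[0]["summary_text"])
```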
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7012 | 0.54 | 500 | 1.4875 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
transformersbook/bert-base-uncased-finetuned-clinc
|
transformersbook
| 2022-02-05T16:38:54Z | 922 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:1909.02027",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Intent Detection with BERT
This model was trained on the [CLINC150](https://arxiv.org/abs/1909.02027) dataset for customer intent detection. The dataset can be found on the [Hub](https://huggingface.co/datasets/clinc_oos). The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb).
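A minimal usage sketch (the query below is an illustrative example):
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="transformersbook/bert-base-uncased-finetuned-clinc",
)
print(intent_classifier("Hey, I'd like to rent a car from Nov 1st to Nov 15th in Paris."))
```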
|
transformersbook/codeparrot-small
|
transformersbook
| 2022-02-05T16:28:36Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
# CodeParrot
CodeParrot (small) is a 110M parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
|
transformersbook/codeparrot
|
transformersbook
| 2022-02-05T16:27:42Z | 18 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
# CodeParrot
CodeParrot (large) is a 1.5B parameter GPT-2 model trained on the [CodeParrot Python code dataset](https://huggingface.co/datasets/transformersbook/codeparrot). The model is trained in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
|
HarrisDePerceptron/xls-r-300m-ur-cv7
|
HarrisDePerceptron
| 2022-02-05T11:21:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2924
- Wer: 0.7201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.2783 | 4.17 | 100 | 4.6409 | 1.0 |
| 3.5578 | 8.33 | 200 | 3.1649 | 1.0 |
| 3.1279 | 12.5 | 300 | 3.0335 | 1.0 |
| 2.9944 | 16.67 | 400 | 2.9526 | 0.9983 |
| 2.9275 | 20.83 | 500 | 2.9291 | 1.0009 |
| 2.8077 | 25.0 | 600 | 2.5633 | 0.9895 |
| 2.4438 | 29.17 | 700 | 1.9045 | 0.9564 |
| 1.9659 | 33.33 | 800 | 1.4114 | 0.7960 |
| 1.7092 | 37.5 | 900 | 1.2584 | 0.7637 |
| 1.517 | 41.67 | 1000 | 1.2040 | 0.7507 |
| 1.3966 | 45.83 | 1100 | 1.1273 | 0.7463 |
| 1.3197 | 50.0 | 1200 | 1.1054 | 0.6957 |
| 1.2476 | 54.17 | 1300 | 1.1035 | 0.7001 |
| 1.1796 | 58.33 | 1400 | 1.0890 | 0.7097 |
| 1.1237 | 62.5 | 1500 | 1.0883 | 0.7167 |
| 1.0777 | 66.67 | 1600 | 1.1067 | 0.7219 |
| 1.0051 | 70.83 | 1700 | 1.1115 | 0.7236 |
| 0.9521 | 75.0 | 1800 | 1.0867 | 0.7132 |
| 0.9147 | 79.17 | 1900 | 1.0852 | 0.7210 |
| 0.8798 | 83.33 | 2000 | 1.1411 | 0.7097 |
| 0.8317 | 87.5 | 2100 | 1.1634 | 0.7018 |
| 0.7946 | 91.67 | 2200 | 1.1621 | 0.7201 |
| 0.7594 | 95.83 | 2300 | 1.1482 | 0.7036 |
| 0.729 | 100.0 | 2400 | 1.1493 | 0.7062 |
| 0.7055 | 104.17 | 2500 | 1.1726 | 0.6931 |
| 0.6622 | 108.33 | 2600 | 1.1938 | 0.7001 |
| 0.6583 | 112.5 | 2700 | 1.1832 | 0.7149 |
| 0.6299 | 116.67 | 2800 | 1.1996 | 0.7175 |
| 0.5903 | 120.83 | 2900 | 1.1986 | 0.7132 |
| 0.5816 | 125.0 | 3000 | 1.1909 | 0.7010 |
| 0.5583 | 129.17 | 3100 | 1.2079 | 0.6870 |
| 0.5392 | 133.33 | 3200 | 1.2109 | 0.7228 |
| 0.5412 | 137.5 | 3300 | 1.2353 | 0.7245 |
| 0.5136 | 141.67 | 3400 | 1.2390 | 0.7254 |
| 0.5007 | 145.83 | 3500 | 1.2273 | 0.7123 |
| 0.4883 | 150.0 | 3600 | 1.2773 | 0.7289 |
| 0.4835 | 154.17 | 3700 | 1.2678 | 0.7289 |
| 0.4568 | 158.33 | 3800 | 1.2592 | 0.7350 |
| 0.4525 | 162.5 | 3900 | 1.2705 | 0.7254 |
| 0.4379 | 166.67 | 4000 | 1.2717 | 0.7306 |
| 0.4198 | 170.83 | 4100 | 1.2618 | 0.7219 |
| 0.4216 | 175.0 | 4200 | 1.2909 | 0.7158 |
| 0.4305 | 179.17 | 4300 | 1.2808 | 0.7167 |
| 0.399 | 183.33 | 4400 | 1.2750 | 0.7193 |
| 0.3937 | 187.5 | 4500 | 1.2719 | 0.7149 |
| 0.3905 | 191.67 | 4600 | 1.2816 | 0.7158 |
| 0.3892 | 195.83 | 4700 | 1.2951 | 0.7210 |
| 0.3932 | 200.0 | 4800 | 1.2924 | 0.7201 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
omoekan/opus-tatoeba-eng-yor
|
omoekan
| 2022-02-05T10:15:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
## OPUS Tatoeba English-Yoruba
This model was obtained by running the script convert_marian_to_pytorch.py with the flag -m eng-yor. The original models were trained by Jörg Tiedemann using the MarianNMT library. See all available MarianMTModel models on the profile of the Helsinki NLP group.
---
- tags: translation
- source language: English
- target language: Yoruba
- dataset: opus+bt
- model: transformer-align
- pre-processing: normalization + SentencePiece (spm12k,spm12k)
- download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.zip)
- test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.test.txt)
- test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-yor/opus+bt-2021-04-10.eval.txt)
- Benchmarks:
|test set|BLEU|chr-F|
|:---|:---|:---|
|Tatoeba-test.eng-yor|13.0|0.333|
---
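A minimal usage sketch with the MarianMT classes from `transformers` (the English sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "omoekan/opus-tatoeba-eng-yor"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```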
|
ajitrajasekharan/biomedical
|
ajitrajasekharan
| 2022-02-05T08:44:05Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- en
license: mit
widget:
- text: "Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]"
example_title: "Test for entity type: Disease"
- text: "Overexpression of [MASK] occurs across a wide range of cancers"
example_title: "Test for entity type: Gene"
- text: "Patients treated with [MASK] are vulnerable to infectious diseases"
example_title: "Test for entity type: Drug"
- text: "A eGFR level below [MASK] indicates chronic kidney disease"
example_title: "Test for entity type: Measure "
- text: "In the [MASK], increased daily imatinib dose induced MMR"
example_title: "Test for entity type: STUDY/TRIAL"
- text: "Paul Erdos died at [MASK]"
example_title: "Test for entity type: TIME"
inference:
parameters:
top_k: 10
tags:
- fill-mask
- exbert
---
This **cased model** was pretrained from scratch using a custom vocabulary on the following corpora
- Pubmed
- Clinical trials corpus
- and a small subset of Bookcorpus
The pretrained model was used to do NER **as is, with no fine-tuning**. The approach is described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20)
[App in Spaces](https://huggingface.co/spaces/ajitrajasekharan/self-supervised-ner-biomedical) demonstrates this approach.
[Github link](https://github.com/ajitrajasekharan/unsupervised_NER) to perform NER using this model in an ensemble with bert-base cased.
The ensemble detects 69 entity subtypes (17 broad entity groups)
<img src="https://ajitrajasekharan.github.io/images/1.png" width="600">
### Ensemble model performance
<img src="https://ajitrajasekharan.github.io/images/6.png" width="600">
### Additional notes
- The model predictions on the right do not include [CLS] predictions. Hosted inference API only returns the masked position predictions. In practice, the [CLS] predictions are just as useful as the model predictions for the masked position _(if the next sentence prediction loss was low during pretraining)_ and are used for NER.
- Some of the top model predictions like "a", "the", punctuations, etc. while valid predictions, bear no entity information. These are filtered when harvesting descriptors for NER. The examples on the right are unfiltered results.
- [Use this link](https://huggingface.co/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation) to examine both fill-mask prediction and [CLS] predictions
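As a quick sketch of inspecting the raw masked-position predictions locally (using one of the widget sentences above; this does not include the [CLS] predictions discussed in the notes):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ajitrajasekharan/biomedical", top_k=10)

for pred in fill_mask("Overexpression of [MASK] occurs across a wide range of cancers"):
    print(pred["token_str"], round(pred["score"], 4))
```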
### License
MIT license
<a href="https://huggingface.co/exbert/?model=ajitrajasekharan/biomedical&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
bluebalam/paper-rec
|
bluebalam
| 2022-02-04T21:37:35Z | 0 | 3 | null |
[
"recsys",
"pytorch",
"sentence_transformers",
"en",
"arxiv:2109.03955",
"arxiv:1908.10084",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
license: mit
tags:
- recsys
- pytorch
- sentence_transformers
---
# `paper-rec` Model Card
Last updated: 2022-02-04
## Model Details
`paper-rec`'s goal is to recommend which scientific papers a user should read next, based on their preferences. This is a test model used to explore Hugging Face Hub capabilities and identify the requirements for supporting the recommendation task in the ecosystem.
### Model date
2022-02-04
### Model type
Recommender System model with support of a Language Model for feature extraction.
### Paper & samples
The overall idea for `paper-rec` test model is inspired by this work: [NU:BRIEF – A Privacy-aware Newsletter Personalization Engine for Publishers](https://arxiv.org/abs/2109.03955).
However, for `paper-rec`, we use a different language model more suitable for longer text, namely *Sentence Transformers*: [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084), in particular: [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).
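A minimal sketch of the underlying idea, embedding papers and a user profile with that sentence encoder and ranking by cosine similarity (the titles below are illustrative; this is not the exact `paper-rec` pipeline):
```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

candidate_papers = [
    "Attention Is All You Need",
    "A Privacy-aware Newsletter Personalization Engine for Publishers",
    "Deep Residual Learning for Image Recognition",
]
user_history = ["Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks"]

paper_emb = encoder.encode(candidate_papers, convert_to_tensor=True)
user_emb = encoder.encode(user_history, convert_to_tensor=True).mean(dim=0)

# Rank candidates by cosine similarity to the user's reading history.
scores = util.cos_sim(user_emb, paper_emb)[0]
for paper, score in sorted(zip(candidate_papers, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {paper}")
```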
## Model Use
The intended direct users are recommender-system practitioners and enthusiasts who would like to experiment with the task of scientific paper recommendation.
## Data, Performance, and Limitations
### Data
The data used for this model corresponds to the [RSS news feeds for arXiv updates](https://arxiv.org/help/rss) accessed on 2022-02-04. In particular to the ones related to Machine Learning and AI:
1. [Artificial Intelligence](http://arxiv.org/rss/cs.AI)
1. [Computation and Language](http://arxiv.org/rss/cs.CL)
1. [Computer Vision and Pattern Recognition](http://arxiv.org/rss/cs.CV)
1. [Information Retrieval](http://arxiv.org/rss/cs.IR)
1. [Machine Learning (cs)](http://arxiv.org/rss/cs.LG)
1. [Machine Learning (stat)](http://arxiv.org/rss/stat.ML)
### Performance
N/A
## Limitations
The model is limited to the papers fetched on 2022-02-04, that is, those papers are the only ones it can recommend.
|
banjtheman/distilbert-base-uncased-helpful-amazon
|
banjtheman
| 2022-02-04T21:22:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
## Overview
This model was trained with data from https://registry.opendata.aws/helpful-sentences-from-reviews/ to predict how "helpful" a review is.
The model was fine-tuned from the `distilbert-base-uncased` model
### Labels
LABEL_0 - Not helpful
LABEL_1 - Helpful
### How to use
The following code shows how to make a prediction with this model
```python
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification,
TextClassificationPipeline,
)
tokenizer = AutoTokenizer.from_pretrained("banjtheman/distilbert-base-uncased-helpful-amazon")
model = AutoModelForSequenceClassification.from_pretrained(
"banjtheman/distilbert-base-uncased-helpful-amazon"
)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer)
result = pipe("This was a Christmas gift for my grandson.")
print(result)
#[{'label': 'LABEL_0', 'score': 0.998775064945221}]
# This is NOT A HELPFUL comment
```
|
tesemnikov-av/NER-RUBERT-Per-Loc-Org
|
tesemnikov-av
| 2022-02-04T19:40:56Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
widget:
- text: "В город Сергиев Посад приехал Курт Кобейн."
---
This model fine-tunes [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) on sentences from Wikipedia automatically annotated with PER, LOC, and ORG tags ([corus/WiNER](https://pypi.org/project/corus/#reference)).
language: RU
NER Class:
- PER
- LOC
- ORG
license: mit
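A minimal usage sketch with the widget sentence above:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tesemnikov-av/NER-RUBERT-Per-Loc-Org",
    aggregation_strategy="simple",
)
print(ner("В город Сергиев Посад приехал Курт Кобейн."))
```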
|
LenaSchmidt/distilbert-base-uncased-finetuned-squad
|
LenaSchmidt
| 2022-02-04T19:20:11Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0325 | 1.0 | 585 | 1.7520 |
| 1.609 | 2.0 | 1170 | 1.7713 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mrm8488/roberta-base-bne-finetuned-sqac-retriever
|
mrm8488
| 2022-02-04T17:59:07Z | 4 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# mrm8488/roberta-base-bne-finetuned-sqac-retriever
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mrm8488/roberta-base-bne-finetuned-sqac-retriever')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mrm8488/roberta-base-bne-finetuned-sqac-retriever')
model = AutoModel.from_pretrained('mrm8488/roberta-base-bne-finetuned-sqac-retriever')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mrm8488/roberta-base-bne-finetuned-sqac-retriever)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 939 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 93,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
samx18/demo
|
samx18
| 2022-02-04T17:23:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Dummy
This is a dummy model for testing - do not use
|
dkurt/wav2vec2-base-ft-keyword-spotting-int8
|
dkurt
| 2022-02-04T16:40:37Z | 7 | 2 |
transformers
|
[
"transformers",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
[anton-l/wav2vec2-base-ft-keyword-spotting](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) model quantized with [Optimum OpenVINO](https://github.com/dkurt/optimum-openvino/).
| Accuracy on eval (baseline) | Accuracy on eval (quantized) |
|-----------------------------|----------------------------------------|
| 0.9828 | 0.9553 (-0.0274) |
|
Rolv-Arild/xls-r-300m-npsc-4
|
Rolv-Arild
| 2022-02-04T16:36:33Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1957
- Wer: 0.1697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4527 | 0.28 | 250 | 4.0144 | 1.0 |
| 3.1828 | 0.56 | 500 | 3.1369 | 1.0 |
| 2.9927 | 0.85 | 750 | 3.0183 | 1.0 |
| 2.9591 | 1.13 | 1000 | 2.9991 | 1.0 |
| 2.8989 | 1.41 | 1250 | 2.9000 | 1.0000 |
| 2.4286 | 1.69 | 1500 | 1.7688 | 0.9550 |
| 1.6765 | 1.98 | 1750 | 0.6842 | 0.4855 |
| 1.4521 | 2.26 | 2000 | 0.5096 | 0.3736 |
| 1.3589 | 2.54 | 2250 | 0.4479 | 0.3335 |
| 1.3136 | 2.82 | 2500 | 0.4056 | 0.3123 |
| 1.2856 | 3.11 | 2750 | 0.3870 | 0.2987 |
| 1.2283 | 3.39 | 3000 | 0.3646 | 0.2828 |
| 1.2053 | 3.67 | 3250 | 0.3499 | 0.2748 |
| 1.2087 | 3.95 | 3500 | 0.3345 | 0.2603 |
| 1.2002 | 4.24 | 3750 | 0.3320 | 0.2523 |
| 1.1383 | 4.52 | 4000 | 0.3117 | 0.2439 |
| 1.1364 | 4.8 | 4250 | 0.3198 | 0.2383 |
| 1.158 | 5.08 | 4500 | 0.3071 | 0.2342 |
| 1.108 | 5.37 | 4750 | 0.3011 | 0.2314 |
| 1.1025 | 5.65 | 5000 | 0.2875 | 0.2289 |
| 1.0697 | 5.93 | 5250 | 0.2926 | 0.2256 |
| 1.0904 | 6.21 | 5500 | 0.2695 | 0.2245 |
| 1.0802 | 6.5 | 5750 | 0.2602 | 0.2189 |
| 1.0882 | 6.78 | 6000 | 0.2603 | 0.2168 |
| 1.0881 | 7.06 | 6250 | 0.2540 | 0.2293 |
| 1.0378 | 7.34 | 6500 | 0.2614 | 0.2193 |
| 1.0397 | 7.63 | 6750 | 0.2707 | 0.2104 |
| 1.0296 | 7.91 | 7000 | 0.2483 | 0.2119 |
| 1.0249 | 8.19 | 7250 | 0.2483 | 0.2047 |
| 1.013 | 8.47 | 7500 | 0.2487 | 0.2042 |
| 1.0064 | 8.76 | 7750 | 0.2456 | 0.2016 |
| 1.0668 | 9.04 | 8000 | 0.2397 | 0.1995 |
| 1.0129 | 9.32 | 8250 | 0.2374 | 0.1994 |
| 1.0164 | 9.6 | 8500 | 0.2206 | 0.1992 |
| 0.975 | 9.89 | 8750 | 0.2247 | 0.1973 |
| 0.9849 | 10.17 | 9000 | 0.2325 | 0.1953 |
| 0.9826 | 10.45 | 9250 | 0.2301 | 0.1934 |
| 0.9835 | 10.73 | 9500 | 0.2192 | 0.1942 |
| 0.9676 | 11.02 | 9750 | 0.2266 | 0.1913 |
| 0.9627 | 11.3 | 10000 | 0.2193 | 0.1921 |
| 0.976 | 11.58 | 10250 | 0.2309 | 0.1882 |
| 0.969 | 11.86 | 10500 | 0.2268 | 0.1886 |
| 0.9611 | 12.15 | 10750 | 0.2322 | 0.1863 |
| 0.9397 | 12.43 | 11000 | 0.2197 | 0.1844 |
| 0.9601 | 12.71 | 11250 | 0.2211 | 0.1871 |
| 0.9718 | 12.99 | 11500 | 0.2079 | 0.1898 |
| 0.9347 | 13.28 | 11750 | 0.2054 | 0.1843 |
| 0.9377 | 13.56 | 12000 | 0.2031 | 0.1842 |
| 0.934 | 13.84 | 12250 | 0.2059 | 0.1806 |
| 0.9295 | 14.12 | 12500 | 0.2122 | 0.1861 |
| 0.935 | 14.41 | 12750 | 0.2072 | 0.1787 |
| 0.9021 | 14.69 | 13000 | 0.2105 | 0.1781 |
| 0.9193 | 14.97 | 13250 | 0.2035 | 0.1786 |
| 0.9214 | 15.25 | 13500 | 0.2035 | 0.1766 |
| 0.9048 | 15.54 | 13750 | 0.1964 | 0.1758 |
| 0.9006 | 15.82 | 14000 | 0.1984 | 0.1757 |
| 0.9027 | 16.1 | 14250 | 0.2022 | 0.1743 |
| 0.9083 | 16.38 | 14500 | 0.1969 | 0.1744 |
| 0.9761 | 16.67 | 14750 | 0.1963 | 0.1728 |
| 0.9311 | 16.95 | 15000 | 0.1960 | 0.1737 |
| 0.886 | 17.23 | 15250 | 0.1929 | 0.1726 |
| 0.8969 | 17.51 | 15500 | 0.1928 | 0.1734 |
| 0.9084 | 17.8 | 15750 | 0.1937 | 0.1713 |
| 0.8795 | 18.08 | 16000 | 0.1978 | 0.1709 |
| 0.8883 | 18.36 | 16250 | 0.1956 | 0.1703 |
| 0.8901 | 18.64 | 16500 | 0.1933 | 0.1705 |
| 0.8922 | 18.93 | 16750 | 0.1962 | 0.1711 |
| 0.8765 | 19.21 | 17000 | 0.1962 | 0.1711 |
| 0.8992 | 19.49 | 17250 | 0.1965 | 0.1703 |
| 0.8778 | 19.77 | 17500 | 0.1957 | 0.1699 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1
- Tokenizers 0.11.0
|
abhishek/autonlp-imdb-roberta-base-3662644
|
abhishek
| 2022-02-04T14:25:35Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"unk",
"dataset:abhishek/autonlp-data-imdb-roberta-base",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- abhishek/autonlp-data-imdb-roberta-base
co2_eq_emissions: 25.894117734124272
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 3662644
- CO2 Emissions (in grams): 25.894117734124272
## Validation Metrics
- Loss: 0.20277436077594757
- Accuracy: 0.92604
- Precision: 0.9560674830864092
- Recall: 0.89312
- AUC: 0.9814625504000001
- F1: 0.9235223559581421
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/abhishek/autonlp-imdb-roberta-base-3662644
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("abhishek/autonlp-imdb-roberta-base-3662644", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
Language-Media-Lab/mt5-small-ain-jpn-mt
|
Language-Media-Lab
| 2022-02-04T13:20:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- jpn
- ain
tags:
- translation
---
mt5-small-ain-jpn-mt is a machine translation model pretrained with [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Ainu language to Japanese.
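A minimal usage sketch (the exact input formatting expected by the model is an assumption; the sample word is a common Ainu greeting):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Language-Media-Lab/mt5-small-ain-jpn-mt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# "irankarapte" is an Ainu greeting; passing the raw sentence with no prefix is an assumption.
inputs = tokenizer("irankarapte", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```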
|
Language-Media-Lab/byt5-small-ain-jpn-mt
|
Language-Media-Lab
| 2022-02-04T13:03:14Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"ain",
"ja",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- ain
- ja
tags:
- translation
---
Byt5-small-ain-jpn-mt is a machine translation model pretrained with [Google's ByT5-small](https://huggingface.co/google/byt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Ainu language to Japanese.
|
Language-Media-Lab/byt5-small-jpn-ain-mt
|
Language-Media-Lab
| 2022-02-04T13:02:58Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- jpn
- ain
tags:
- translation
---
Byt5-small-jpn-ain-mt is a machine translation model initialized from [Google's ByT5-small](https://huggingface.co/google/byt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates Japanese into the Ainu language.
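Since ByT5 works directly on UTF-8 bytes, no subword vocabulary is involved. A minimal inference sketch (assuming the standard T5 seq2seq API; the Japanese input is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Language-Media-Lab/byt5-small-jpn-ain-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # byte-level ByT5 tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate a Japanese sentence into Ainu
inputs = tokenizer("こんにちは。", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```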
|
huggingtweets/ir_rkp
|
huggingtweets
| 2022-02-04T12:03:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ir_rkp/1643976228944/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1432037158072856578/a_Fty68E_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Riikka Purra</div>
<div style="text-align: center; font-size: 14px;">@ir_rkp</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Riikka Purra.
| Data | Riikka Purra |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 141 |
| Short tweets | 78 |
| Tweets kept | 3031 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1w0bzvgu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ir_rkp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1nj4v31w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1nj4v31w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ir_rkp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Plim/xls-r-1b-fr
|
Plim
| 2022-02-04T11:45:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"fr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2464
- Wer: 0.2220
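A minimal transcription sketch (assuming 16 kHz audio and the standard Wav2Vec2 CTC interface; `audio.wav` is a placeholder path):
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Plim/xls-r-1b-fr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a clip, downmix to mono and resample to the 16 kHz rate the model expects
speech, sr = torchaudio.load("audio.wav")  # placeholder path
speech = torchaudio.functional.resample(speech.mean(dim=0), sr, 16_000)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```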
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0326 | 0.32 | 1000 | 0.3092 | 0.2718 |
| 1.0828 | 0.65 | 2000 | 0.2843 | 0.2606 |
| 1.0771 | 0.97 | 3000 | 0.2774 | 0.2488 |
| 1.0306 | 1.3 | 4000 | 0.2588 | 0.2351 |
| 1.0052 | 1.62 | 5000 | 0.2483 | 0.2284 |
| 0.9865 | 1.94 | 6000 | 0.2464 | 0.2220 |
| 0.978 | 2.27 | 7000 | 0.2514 | 0.2172 |
| 1.7438 | 2.59 | 8000 | 0.7983 | 0.5072 |
| 2.3309 | 2.92 | 9000 | 1.8917 | 0.9416 |
| 2.1834 | 3.24 | 10000 | 1.7496 | 0.9030 |
| 2.3047 | 3.56 | 11000 | 1.5377 | 0.8747 |
| 2.1378 | 3.89 | 12000 | 1.3501 | 0.7923 |
| 1.9812 | 4.21 | 13000 | 1.2662 | 0.7697 |
| 2.6855 | 4.54 | 14000 | 2.4120 | 0.9902 |
| 2.7482 | 4.86 | 15000 | 2.5341 | 0.9874 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Subhashini17/wav2vec2-large-xls-r-300m-ta-colab-new1
|
Subhashini17
| 2022-02-04T11:14:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-ta-colab-new1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab-new1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6642
- eval_wer: 0.7611
- eval_runtime: 152.4412
- eval_samples_per_second: 11.683
- eval_steps_per_second: 1.463
- epoch: 10.11
- step: 960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ai-forever/bert-base-NER-reptile-5-datasets
|
ai-forever
| 2022-02-04T10:51:07Z | 38 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"PyTorch",
"en",
"dataset:conll2003",
"dataset:wnut_17",
"dataset:jnlpba",
"dataset:conll2012",
"dataset:BTC",
"dataset:dfki-nlp/few-nerd",
"arxiv:2010.02405",
"model-index",
"autotrain_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
inference: false
pipeline_tag: false
datasets:
- conll2003
- wnut_17
- jnlpba
- conll2012
- BTC
- dfki-nlp/few-nerd
tags:
- PyTorch
model-index:
- name: "bert-base-NER-reptile-5-datasets"
results:
- task:
name: few-shot-ner
type: named-entity-recognition
dataset:
name: few-nerd-inter
type: named-entity-recognition
metrics:
- name: 5 way 1~2 shot
type: f1
value: 56.12
- name: 5-way 5~10-shot
type: f1
value: 62.7
- name: 10-way 1~2-shot
type: f1
value: 50.3
- name: 10-way 5~10-shot
type: f1
value: 58.82
---
# BERT base uncased model pre-trained on 5 NER datasets
Model was trained by _SberIDP_. The pretraining process and technical details are described [in this article](https://habr.com/ru/company/sberbank/blog/649609/).
* Task: Named Entity Recognition
* Base model: [bert-base-uncased](https://huggingface.co/bert-base-uncased)
* Training Data is 5 datasets: [CoNLL-2003](https://aclanthology.org/W03-0419.pdf), [WNUT17](http://noisy-text.github.io/2017/emerging-rare-entities.html), [JNLPBA](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004), [CoNLL-2012 (OntoNotes)](https://aclanthology.org/W12-4501.pdf), [BTC](https://www.derczynski.com/papers/btc.pdf)
* Testing was made in Few-Shot scenario on [Few-NERD dataset](https://github.com/thunlp/Few-NERD) using the model as a backbone for [StructShot](https://arxiv.org/abs/2010.02405)
The model is pretrained for the NER task using [Reptile](https://openai.com/blog/reptile/) and can be fine-tuned for new entities with only a small number of samples.
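A minimal sketch of loading the checkpoint as a backbone for fine-tuning on a new entity set (the label list below is illustrative, not part of the released model):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "ai-forever/bert-base-NER-reptile-5-datasets"
labels = ["O", "B-ENT", "I-ENT"]  # illustrative labels for a new entity type

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
    ignore_mismatched_sizes=True,  # the stored head may not match the new label count
)
# `model` can now be fine-tuned on a handful of annotated examples, e.g. with the Trainer API.
```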
|
yohida/yoshida_gpt
|
yohida
| 2022-02-04T10:13:45Z | 4 | 0 |
transformers
|
[
"transformers",
"gpt2",
"text-generation",
"ja",
"japanese",
"gpt",
"lm",
"nlp",
"dataset:cc100",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ja
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
tags:
- ja
- japanese
- gpt
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
widget:
- text: "西田幾多郎は、"
---
# japanese-gpt-1b

This repository provides a 1.3B-parameter Japanese GPT model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/)
# How to use the model
*NOTE:* Use `T5Tokenizer` to instantiate the tokenizer.
~~~~
import torch
from transformers import T5Tokenizer, AutoModelForCausalLM
tokenizer = T5Tokenizer.from_pretrained("rinna/japanese-gpt-1b")
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b")
if torch.cuda.is_available():
    model = model.to("cuda")
text = "西田幾多郎は、"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_length=100,
        min_length=100,
        do_sample=True,
        top_k=500,
        top_p=0.95,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        bad_words_ids=[[tokenizer.unk_token_id]]
    )
output = tokenizer.decode(output_ids.tolist()[0])
print(output)
# sample output: 西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの
~~~~
# Model architecture
A 24-layer, 2048-hidden-size transformer-based language model.
# Training
The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols.
# License
[The MIT license](https://opensource.org/licenses/MIT)
|
MaggieXM/deberta-base-finetuned-squad
|
MaggieXM
| 2022-02-04T09:41:38Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: deberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.0001
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.0 | 2 | 5.3843 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/dril-heroicvillain95
|
huggingtweets
| 2022-02-04T08:49:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1402535431523217411/h07KN7VS_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & casually Jesse</div>
<div style="text-align: center; font-size: 14px;">@dril-heroicvillain95</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & casually Jesse.
| Data | wint | casually Jesse |
| --- | --- | --- |
| Tweets downloaded | 3228 | 2663 |
| Retweets | 475 | 133 |
| Short tweets | 305 | 353 |
| Tweets kept | 2448 | 2177 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3u36b2x8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-heroicvillain95's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3c8ft6vl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3c8ft6vl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-heroicvillain95')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail
|
Ayham
| 2022-02-04T06:33:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: xlnet_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
edugp/data2vec-nlp-base
|
edugp
| 2022-02-03T23:23:15Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"data2vec",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
model-index:
- name: data2vec-nlp-base
results: []
---
# Data2Vec NLP Base
This model was converted from `fairseq`.
The original weights can be found in https://dl.fbaipublicfiles.com/fairseq/data2vec/nlp_base.pt
Example usage:
```python
from transformers import RobertaTokenizer, Data2VecForSequenceClassification, Data2VecConfig
import torch
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
config = Data2VecConfig.from_pretrained("edugp/data2vec-nlp-base")
model = Data2VecForSequenceClassification.from_pretrained("edugp/data2vec-nlp-base", config=config)
# Fine-tune this model
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
|
ArBert/roberta-base-finetuned-ner
|
ArBert
| 2022-02-03T16:42:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0738
- Precision: 0.9232
- Recall: 0.9437
- F1: 0.9333
- Accuracy: 0.9825
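A minimal inference sketch with the token-classification pipeline (the entity label names depend on the unspecified fine-tuning dataset, so inspect the output rather than assuming a particular tag set):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArBert/roberta-base-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```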
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1397 | 1.0 | 1368 | 0.0957 | 0.9141 | 0.9048 | 0.9094 | 0.9753 |
| 0.0793 | 2.0 | 2736 | 0.0728 | 0.9274 | 0.9324 | 0.9299 | 0.9811 |
| 0.0499 | 3.0 | 4104 | 0.0738 | 0.9232 | 0.9437 | 0.9333 | 0.9825 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tomascufaro/wav2vec2-large-xls-r-300m-spanish-small-v3
|
tomascufaro
| 2022-02-03T15:57:54Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"robust-speech-event",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- "es"
- "robust-speech-event"
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-spanish-small-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-small-v3
This model is a fine-tuned version of [jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom](https://huggingface.co/jhonparra18/wav2vec2-large-xls-r-300m-spanish-custom) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Wer: 0.1980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2372 | 0.26 | 400 | 0.3011 | 0.2660 |
| 0.3413 | 0.53 | 800 | 0.3559 | 0.3228 |
| 0.3598 | 0.79 | 1200 | 0.3753 | 0.3400 |
| 0.3529 | 1.05 | 1600 | 0.3385 | 0.2979 |
| 0.3133 | 1.32 | 2000 | 0.3559 | 0.3056 |
| 0.3158 | 1.58 | 2400 | 0.3364 | 0.2994 |
| 0.3092 | 1.85 | 2800 | 0.3210 | 0.2876 |
| 0.2919 | 2.11 | 3200 | 0.3460 | 0.3010 |
| 0.2666 | 2.37 | 3600 | 0.3543 | 0.3036 |
| 0.2819 | 2.64 | 4000 | 0.3477 | 0.2959 |
| 0.283 | 2.9 | 4400 | 0.3492 | 0.2968 |
| 0.2484 | 3.16 | 4800 | 0.3647 | 0.2993 |
| 0.2371 | 3.43 | 5200 | 0.3601 | 0.2942 |
| 0.2382 | 3.69 | 5600 | 0.3656 | 0.3019 |
| 0.2425 | 3.96 | 6000 | 0.3379 | 0.2873 |
| 0.2092 | 4.22 | 6400 | 0.3385 | 0.2736 |
| 0.2171 | 4.48 | 6800 | 0.3503 | 0.2889 |
| 0.2185 | 4.75 | 7200 | 0.3289 | 0.2727 |
| 0.2236 | 5.01 | 7600 | 0.3447 | 0.2771 |
| 0.1882 | 5.27 | 8000 | 0.3586 | 0.2860 |
| 0.1986 | 5.54 | 8400 | 0.3404 | 0.2829 |
| 0.2055 | 5.8 | 8800 | 0.3561 | 0.2869 |
| 0.196 | 6.06 | 9200 | 0.3633 | 0.2811 |
| 0.1748 | 6.33 | 9600 | 0.3703 | 0.2818 |
| 0.1758 | 6.59 | 10000 | 0.3525 | 0.2816 |
| 0.1819 | 6.86 | 10400 | 0.3581 | 0.2765 |
| 0.1715 | 7.12 | 10800 | 0.3480 | 0.2628 |
| 0.1606 | 7.38 | 11200 | 0.3490 | 0.2703 |
| 0.1632 | 7.65 | 11600 | 0.3461 | 0.2706 |
| 0.1638 | 7.91 | 12000 | 0.3458 | 0.2673 |
| 0.1552 | 8.17 | 12400 | 0.3646 | 0.2732 |
| 0.154 | 8.44 | 12800 | 0.3706 | 0.2726 |
| 0.1512 | 8.7 | 13200 | 0.3609 | 0.2683 |
| 0.149 | 8.97 | 13600 | 0.3610 | 0.2668 |
| 0.1357 | 9.23 | 14000 | 0.3693 | 0.2740 |
| 0.1375 | 9.49 | 14400 | 0.3677 | 0.2625 |
| 0.1391 | 9.76 | 14800 | 0.3795 | 0.2762 |
| 0.1378 | 10.02 | 15200 | 0.3541 | 0.2592 |
| 0.1197 | 10.28 | 15600 | 0.3562 | 0.2507 |
| 0.1259 | 10.55 | 16000 | 0.3612 | 0.2584 |
| 0.1266 | 10.81 | 16400 | 0.3470 | 0.2527 |
| 0.1199 | 11.07 | 16800 | 0.3721 | 0.2571 |
| 0.1157 | 11.34 | 17200 | 0.3734 | 0.2571 |
| 0.1107 | 11.6 | 17600 | 0.3730 | 0.2589 |
| 0.1148 | 11.87 | 18000 | 0.3648 | 0.2536 |
| 0.1095 | 12.13 | 18400 | 0.3746 | 0.2521 |
| 0.1047 | 12.39 | 18800 | 0.3566 | 0.2530 |
| 0.1043 | 12.66 | 19200 | 0.3794 | 0.2545 |
| 0.1066 | 12.92 | 19600 | 0.3548 | 0.2439 |
| 0.0974 | 13.18 | 20000 | 0.3702 | 0.2461 |
| 0.0978 | 13.45 | 20400 | 0.3721 | 0.2492 |
| 0.095 | 13.71 | 20800 | 0.3599 | 0.2467 |
| 0.0963 | 13.97 | 21200 | 0.3650 | 0.2402 |
| 0.0902 | 14.24 | 21600 | 0.3689 | 0.2459 |
| 0.0898 | 14.5 | 22000 | 0.3832 | 0.2452 |
| 0.0865 | 14.77 | 22400 | 0.3982 | 0.2436 |
| 0.0911 | 15.03 | 22800 | 0.3785 | 0.2398 |
| 0.0793 | 15.29 | 23200 | 0.3731 | 0.2396 |
| 0.0806 | 15.56 | 23600 | 0.3626 | 0.2372 |
| 0.0789 | 15.82 | 24000 | 0.3707 | 0.2356 |
| 0.0779 | 16.08 | 24400 | 0.3850 | 0.2368 |
| 0.078 | 16.35 | 24800 | 0.3831 | 0.2363 |
| 0.0732 | 16.61 | 25200 | 0.3947 | 0.2287 |
| 0.0733 | 16.88 | 25600 | 0.3928 | 0.2374 |
| 0.0721 | 17.14 | 26000 | 0.3943 | 0.2324 |
| 0.0676 | 17.4 | 26400 | 0.3793 | 0.2311 |
| 0.0682 | 17.67 | 26800 | 0.3958 | 0.2257 |
| 0.0714 | 17.93 | 27200 | 0.3890 | 0.2322 |
| 0.0673 | 18.19 | 27600 | 0.3872 | 0.2229 |
| 0.0613 | 18.46 | 28000 | 0.3828 | 0.2226 |
| 0.0621 | 18.72 | 28400 | 0.3812 | 0.2214 |
| 0.0622 | 18.98 | 28800 | 0.3919 | 0.2212 |
| 0.0576 | 19.25 | 29200 | 0.4000 | 0.2205 |
| 0.0581 | 19.51 | 29600 | 0.3953 | 0.2203 |
| 0.0573 | 19.78 | 30000 | 0.3947 | 0.2190 |
| 0.0576 | 20.04 | 30400 | 0.3909 | 0.2156 |
| 0.0551 | 20.3 | 30800 | 0.4178 | 0.2153 |
| 0.0525 | 20.57 | 31200 | 0.3935 | 0.2152 |
| 0.0522 | 20.83 | 31600 | 0.4054 | 0.2151 |
| 0.0519 | 21.09 | 32000 | 0.3877 | 0.2135 |
| 0.0479 | 21.36 | 32400 | 0.4119 | 0.2107 |
| 0.0472 | 21.62 | 32800 | 0.3967 | 0.2091 |
| 0.048 | 21.89 | 33200 | 0.3812 | 0.2057 |
| 0.0458 | 22.15 | 33600 | 0.3931 | 0.2043 |
| 0.0459 | 22.41 | 34000 | 0.3937 | 0.2049 |
| 0.0448 | 22.68 | 34400 | 0.3900 | 0.2056 |
| 0.0432 | 22.94 | 34800 | 0.4050 | 0.2049 |
| 0.0425 | 23.2 | 35200 | 0.3985 | 0.2014 |
| 0.0415 | 23.47 | 35600 | 0.3976 | 0.2013 |
| 0.0403 | 23.73 | 36000 | 0.4031 | 0.2018 |
| 0.04 | 23.99 | 36400 | 0.3996 | 0.2000 |
| 0.039 | 24.26 | 36800 | 0.3977 | 0.1993 |
| 0.0406 | 24.52 | 37200 | 0.3967 | 0.2000 |
| 0.0391 | 24.79 | 37600 | 0.3986 | 0.1980 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
ArBert/albert-base-v2-finetuned-ner
|
ArBert
| 2022-02-03T14:26:33Z | 22 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: albert-base-v2-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9301181102362205
- name: Recall
type: recall
value: 0.9376033513394334
- name: F1
type: f1
value: 0.9338457315399397
- name: Accuracy
type: accuracy
value: 0.9851613086447802
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-ner
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
- Precision: 0.9301
- Recall: 0.9376
- F1: 0.9338
- Accuracy: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.096 | 1.0 | 1756 | 0.0752 | 0.9163 | 0.9201 | 0.9182 | 0.9811 |
| 0.0481 | 2.0 | 3512 | 0.0761 | 0.9169 | 0.9293 | 0.9231 | 0.9830 |
| 0.0251 | 3.0 | 5268 | 0.0700 | 0.9301 | 0.9376 | 0.9338 | 0.9852 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Baybars/wav2vec2-xls-r-1b-turkish
|
Baybars
| 2022-02-03T10:09:31Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of the local checkpoint `./checkpoint-10500` on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7540
- Wer: 0.4647
- Cer: 0.1318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.999,0.9999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 120.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:------:|:---------------:|:------:|
| 1.0779 | 4.59 | 500 | 0.2354 | 0.8260 | 0.7395 |
| 0.7573 | 9.17 | 1000 | 0.2100 | 0.7544 | 0.6960 |
| 0.8225 | 13.76 | 1500 | 0.2021 | 0.6867 | 0.6672 |
| 0.621 | 18.35 | 2000 | 0.1874 | 0.6824 | 0.6209 |
| 0.6362 | 22.94 | 2500 | 0.1904 | 0.6712 | 0.6286 |
| 0.624 | 27.52 | 3000 | 0.1820 | 0.6940 | 0.6116 |
| 0.4781 | 32.11 | 3500 | 0.1735 | 0.6966 | 0.5989 |
| 0.5685 | 36.7 | 4000 | 0.1769 | 0.6742 | 0.5971 |
| 0.4384 | 41.28 | 4500 | 0.1767 | 0.6904 | 0.5999 |
| 0.5509 | 45.87 | 5000 | 0.1692 | 0.6734 | 0.5641 |
| 0.3665 | 50.46 | 5500 | 0.1680 | 0.7018 | 0.5662 |
| 0.3914 | 55.05 | 6000 | 0.1631 | 0.7121 | 0.5552 |
| 0.2467 | 59.63 | 6500 | 0.1563 | 0.6657 | 0.5374 |
| 0.2576 | 64.22 | 7000 | 0.1554 | 0.6920 | 0.5316 |
| 0.2711 | 68.81 | 7500 | 0.1495 | 0.6900 | 0.5176 |
| 0.2626 | 73.39 | 8000 | 0.1454 | 0.6843 | 0.5043 |
| 0.1377 | 77.98 | 8500 | 0.1470 | 0.7383 | 0.5101 |
| 0.2005 | 82.57 | 9000 | 0.1430 | 0.7228 | 0.5045 |
| 0.1355 | 87.16 | 9500 | 0.1375 | 0.7231 | 0.4869 |
| 0.0431 | 91.74 | 10000 | 0.1350 | 0.7397 | 0.4749 |
| 0.0586 | 96.33 | 10500 | 0.1339 | 0.7360 | 0.4754 |
| 0.0896 | 100.92 | 11000 | 0.1398 | 0.7187 | 0.4885 |
| 0.183 | 105.5 | 11500 | 0.1392 | 0.7310 | 0.4838 |
| 0.0963 | 110.09 | 12000 | 0.1362 | 0.7643 | 0.4759 |
| 0.0437 | 114.68 | 12500 | 0.1328 | 0.7525 | 0.4641 |
| 0.1122 | 119.27 | 13000 | 0.1317 | 0.7535 | 0.4651 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
Rajan/Nepali_Word2Vec
|
Rajan
| 2022-02-03T08:32:41Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: mit
---
https://github.com/R4j4n/Nepali-Word2Vec-from-scratch
How to clone:
```
git lfs install
git clone https://huggingface.co/Rajan/Nepali_Word2Vec
```
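Once cloned, the vectors can be loaded with gensim — a sketch assuming the repository ships a gensim-saved Word2Vec model (the filename below is hypothetical; check the repository for the actual artifact name):
```python
from gensim.models import Word2Vec

# Hypothetical filename -- replace with the file actually present in the cloned repo
model = Word2Vec.load("Nepali_Word2Vec/nepali_word2vec.model")
print(model.wv.most_similar("नेपाल", topn=5))
```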
|
versae/kenlm-5gram-ncc
|
versae
| 2022-02-03T08:16:51Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
Atiqah/Atiqah
|
Atiqah
| 2022-02-03T07:04:44Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: artistic-2.0
---
|
pritoms/distilroberta-base-YTTranscript23
|
pritoms
| 2022-02-03T05:52:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-YTTranscript23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-YTTranscript23
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9258
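A minimal fill-mask sketch (RoBERTa-style checkpoints use `<mask>` as the mask token):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="pritoms/distilroberta-base-YTTranscript23")
for pred in fill("The video explains how to <mask> a neural network."):
    print(pred["token_str"], round(pred["score"], 3))
```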
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 70 | 2.9007 |
| No log | 2.0 | 140 | 2.9651 |
| No log | 3.0 | 210 | 2.9374 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sunitha/distilbert-base-uncased-3feb-2022-finetuned-squad
|
sunitha
| 2022-02-03T05:06:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-3feb-2022-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-3feb-2022-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1470
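A minimal extractive question-answering sketch with the pipeline API (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="sunitha/distilbert-base-uncased-3feb-2022-finetuned-squad",
)
result = qa(
    question="What was the model fine-tuned on?",
    context="This checkpoint is a DistilBERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```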
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2276 | 1.0 | 5533 | 1.1641 |
| 0.9614 | 2.0 | 11066 | 1.1225 |
| 0.7769 | 3.0 | 16599 | 1.1470 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Ayham/albert_distilgpt2_summarization_cnn_dailymail
|
Ayham
| 2022-02-02T23:15:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: albert_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kmfoda/staging-pegasus-gmeetsamsum
|
kmfoda
| 2022-02-02T14:34:58Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"feature-extraction",
"summarization",
"en",
"arxiv:1912.08777",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled using 20% uniform noise added to the importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newline and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleanings were also changed, please refer to the change in TFDS.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
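A minimal summarization sketch for this checkpoint (assuming it exposes the standard Pegasus seq2seq interface; as a staging model, output quality is not guaranteed):
```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_id = "kmfoda/staging-pegasus-gmeetsamsum"
tokenizer = PegasusTokenizer.from_pretrained(model_id)
model = PegasusForConditionalGeneration.from_pretrained(model_id)

dialogue = "Alice: Shall we meet at 10? Bob: Works for me. Alice: Great, see you then."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```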
|
shaina/covid_qa_mpnet
|
shaina
| 2022-02-02T14:33:18Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mpnet",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
widget:
- text: "What is COVID-19?"
context: "Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first known case was identified in Wuhan, China, in December 2019.[7] The disease has since spread worldwide, leading to an ongoing pandemic."
- text: "Where was COVID-19 first discovered?"
context: "The first known infections from SARS-CoV-2 were discovered in Wuhan, China. The original source of viral transmission to humans remains unclear, as does whether the virus became pathogenic before or after the spillover event."
- text: "What is Post-COVID syndrome?"
context: "Long COVID, also known as post-COVID-19 syndrome, post-acute sequelae of COVID-19 (PASC), or chronic COVID syndrome (CCS) is a condition characterized by long-term sequelae appearing or persisting after the typical convalescence period of COVID-19. Long COVID can affect nearly every organ system, with sequelae including respiratory system disorders, nervous system and neurocognitive disorders, mental health disorders, metabolic disorders, cardiovascular disorders, gastrointestinal disorders, malaise, fatigue, musculoskeletal pain, and anemia. A wide range of symptoms are commonly reported, including fatigue, headaches, shortness of breath, anosmia (loss of smell), parosmia (distorted smell), muscle weakness, low fever and cognitive dysfunction."
---
# covid_qa_mpnet
This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on our COVID-19 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2477 | 1.0 | 3895 | 0.1869 |
| 0.1838 | 2.0 | 7790 | 0.1352 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Ayham/roberta_distilgpt2_summarization_cnn_dailymail
|
Ayham
| 2022-02-02T12:46:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: roberta_distilgpt2_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_distilgpt2_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
diwank/dyda-deberta-pair
|
diwank
| 2022-02-02T10:48:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
---
# diwank/dyda-deberta-pair
Deberta-based classification model for Daily Dialog style dialog-act annotations. It takes two sentences as input (the previous and the current utterance of a dialog); the previous sentence can be an empty string if this is the speaker's first utterance in the dialog. It outputs one of the following labels (exactly as in the [daily-dialog dataset](https://huggingface.co/datasets/daily_dialog)): *__dummy__ (0), inform (1), question (2), directive (3), commissive (4)*
## Usage
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/dyda-deberta-pair")
convert_to_label = lambda n: ["__dummy__ (0), inform (1), question (2), directive (3), commissive (4)".split(', ')[i] for i in n]
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label(predictions) # inform (1)
```
|
mbateman/mt5-small-finetuned-amazon-en-es
|
mbateman
| 2022-02-02T10:07:07Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0393
- Rouge1: 17.3313
- Rouge2: 8.1251
- Rougel: 17.0359
- Rougelsum: 16.9503
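A minimal inference sketch (assuming the standard mT5 text2text API; the prompt format used during fine-tuning is not documented here, so treat the input as illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mbateman/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "I bought this coffee grinder last month and it still works perfectly. Easy to clean and very quiet."
inputs = tokenizer(review, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=30, num_beams=4, no_repeat_ngram_size=2)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```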
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.6665 | 1.0 | 1209 | 3.2917 | 13.908 | 5.5316 | 13.4368 | 13.4302 |
| 3.8961 | 2.0 | 2418 | 3.1711 | 16.247 | 8.7234 | 15.7703 | 15.6964 |
| 3.5801 | 3.0 | 3627 | 3.0917 | 17.3455 | 8.2467 | 16.8631 | 16.8147 |
| 3.4258 | 4.0 | 4836 | 3.0583 | 16.0978 | 7.83 | 15.8065 | 15.7725 |
| 3.3154 | 5.0 | 6045 | 3.0573 | 17.5531 | 8.7811 | 17.2252 | 17.2055 |
| 3.2438 | 6.0 | 7254 | 3.0479 | 17.2072 | 8.0951 | 17.025 | 16.9644 |
| 3.2024 | 7.0 | 8463 | 3.0377 | 17.3692 | 8.1843 | 17.019 | 17.0006 |
| 3.1745 | 8.0 | 9672 | 3.0393 | 17.3313 | 8.1251 | 17.0359 | 16.9503 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
beomus/layoutxlm
|
beomus
| 2022-02-02T08:21:14Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv2",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# LayoutXLM finetuned on XFUN.ja
```python
import torch
import numpy as np
from PIL import Image, ImageDraw, ImageFont
from pathlib import Path
from itertools import chain
from tqdm.notebook import tqdm
from pdf2image import convert_from_path
from transformers import LayoutXLMProcessor, LayoutLMv2ForTokenClassification
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
labels = [
'O',
'B-QUESTION',
'B-ANSWER',
'B-HEADER',
'I-ANSWER',
'I-QUESTION',
'I-HEADER'
]
id2label = {v: k for v, k in enumerate(labels)}
label2id = {k: v for v, k in enumerate(labels)}
def unnormalize_box(bbox, width, height):
    return [
        width * (bbox[0] / 1000),
        height * (bbox[1] / 1000),
        width * (bbox[2] / 1000),
        height * (bbox[3] / 1000),
    ]

def iob_to_label(label):
    label = label[2:]
    if not label:
        return 'other'
    return label

label2color = {'question':'blue', 'answer':'green', 'header':'orange', 'other':'violet'}

def infer(image, processor, model, label2color):
    # Use this if you're loading images
    # image = Image.open(img_path).convert("RGB")
    image = image.convert("RGB") # loading PDFs
    encoding = processor(image, return_offsets_mapping=True, return_tensors="pt", truncation=True, max_length=514)
    offset_mapping = encoding.pop('offset_mapping')
    outputs = model(**encoding)
    predictions = outputs.logits.argmax(-1).squeeze().tolist()
    token_boxes = encoding.bbox.squeeze().tolist()
    width, height = image.size
    is_subword = np.array(offset_mapping.squeeze().tolist())[:,0] != 0
    true_predictions = [id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]]
    true_boxes = [unnormalize_box(box, width, height) for idx, box in enumerate(token_boxes) if not is_subword[idx]]
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()
    for prediction, box in zip(true_predictions, true_boxes):
        predicted_label = iob_to_label(prediction).lower()
        draw.rectangle(box, outline=label2color[predicted_label])
        draw.text((box[0]+10, box[1]-10), text=predicted_label, fill=label2color[predicted_label], font=font)
    return image
processor = LayoutXLMProcessor.from_pretrained('beomus/layoutxlm')
model = LayoutLMv2ForTokenClassification.from_pretrained("beomus/layoutxlm")
# imgs = [img_path for img_path in Path('/your/path/imgs/').glob('*.jpg')]
imgs = [convert_from_path(img_path) for img_path in Path('/your/path/pdfs/').glob('*.pdf')]
imgs = list(chain.from_iterable(imgs))
outputs = [infer(img_path, processor, model, label2color) for img_path in tqdm(imgs)]
# type(outputs[0]) -> PIL.Image.Image
```
|
NbAiLab/wav2vec2-xlsr-300M-NPSC-OH
|
NbAiLab
| 2022-02-02T06:10:42Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"NbAiLab/NPSC",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- NbAiLab/NPSC
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-300M-NPSC-OH
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-300M-NPSC-OH
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 16K_MP3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1692
- Wer: 0.1663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.1638 | 0.66 | 500 | 3.0686 | 1.0 |
| 2.9311 | 1.31 | 1000 | 2.9208 | 1.0 |
| 2.4175 | 1.97 | 1500 | 1.5009 | 0.9049 |
| 1.4442 | 2.63 | 2000 | 0.4426 | 0.3783 |
| 1.2624 | 3.28 | 2500 | 0.3193 | 0.2998 |
| 1.1889 | 3.94 | 3000 | 0.2867 | 0.2630 |
| 1.1315 | 4.6 | 3500 | 0.2566 | 0.2444 |
| 1.0864 | 5.26 | 4000 | 0.2368 | 0.2294 |
| 1.093 | 5.91 | 4500 | 0.2240 | 0.2151 |
| 1.0368 | 6.57 | 5000 | 0.2117 | 0.2056 |
| 1.0178 | 7.23 | 5500 | 0.2020 | 0.1954 |
| 1.0035 | 7.88 | 6000 | 0.2005 | 0.1924 |
| 0.9759 | 8.54 | 6500 | 0.1971 | 0.1863 |
| 0.9795 | 9.2 | 7000 | 0.1892 | 0.1812 |
| 0.9601 | 9.85 | 7500 | 0.1863 | 0.1795 |
| 0.9673 | 10.51 | 8000 | 0.1809 | 0.1761 |
| 0.9233 | 11.17 | 8500 | 0.1818 | 0.1755 |
| 0.9382 | 11.83 | 9000 | 0.1767 | 0.1741 |
| 0.9242 | 12.48 | 9500 | 0.1743 | 0.1703 |
| 0.9703 | 13.14 | 10000 | 0.1711 | 0.1711 |
| 0.9139 | 13.8 | 10500 | 0.1718 | 0.1672 |
| 0.9073 | 14.45 | 11000 | 0.1700 | 0.1665 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
CalvinHuang/mt5-small-finetuned-amazon-en-es
|
CalvinHuang
| 2022-02-02T03:50:37Z | 18 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0393
- Rouge1: 17.2936
- Rouge2: 8.0678
- Rougel: 16.8129
- Rougelsum: 16.9991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 6.6665 | 1.0 | 1209 | 3.2917 | 13.912 | 5.595 | 13.2984 | 13.4171 |
| 3.8961 | 2.0 | 2418 | 3.1711 | 16.2845 | 8.6033 | 15.5509 | 15.7383 |
| 3.5801 | 3.0 | 3627 | 3.0917 | 17.316 | 8.122 | 16.697 | 16.773 |
| 3.4258 | 4.0 | 4836 | 3.0583 | 16.1347 | 7.7829 | 15.6475 | 15.7804 |
| 3.3154 | 5.0 | 6045 | 3.0573 | 17.5918 | 8.7349 | 17.0537 | 17.2216 |
| 3.2438 | 6.0 | 7254 | 3.0479 | 17.2294 | 8.0383 | 16.8141 | 16.9858 |
| 3.2024 | 7.0 | 8463 | 3.0377 | 17.2918 | 8.139 | 16.8178 | 16.9671 |
| 3.1745 | 8.0 | 9672 | 3.0393 | 17.2936 | 8.0678 | 16.8129 | 16.9991 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
BigSalmon/InfillFormalLincoln
|
BigSalmon
| 2022-02-02T03:45:03Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InfillFormalLincoln")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: increasing the number of sidewalks in suburban areas will [MASK].
Translated into the Style of Abraham Lincoln: increasing the number of sidewalks in suburban areas will ( ( enhance / maximize ) community cohesion / facilitate ( communal ties / the formation of neighborhood camaraderie ) / forge neighborly relations / lend themselves to the advancement of neighborly ties / plant the seeds of community building / flower anew the bonds of friendship / invite the budding of neighborhood rapport / enrich neighborhood life ).
infill: corn fields [MASK], [MASK] visibly as one ventures beyond chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), ( manifesting themselves ) visibly as one ventures beyond chicago.
infill: the [MASK] the SAT will soon be [MASK]. [MASK] an examination undertaken on one's laptop. [MASK] will allow students to retrieve test results promptly.
Translated into the Style of Abraham Lincoln: the ( conventional form of ) the SAT will soon be ( consigned to history ). ( replacing it will be ) an examination undertaken on one's laptop. ( so doing ) will allow students to retrieve test results promptly.
infill:
```
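As a hedged sketch of how one of the infill prompts above can be fed to the model, the snippet below loads the checkpoint and samples a completion; the generation settings are arbitrary examples rather than the author's recommended values, and `AutoModelForCausalLM` is used as the non-deprecated equivalent of `AutoModelWithLMHead`.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InfillFormalLincoln")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InfillFormalLincoln")

# Prompt built in the infill format shown above; a completion is sampled.
prompt = (
    "infill: increasing the number of sidewalks in suburban areas will [MASK].\n"
    "Translated into the Style of Abraham Lincoln: increasing the number of "
    "sidewalks in suburban areas will ("
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```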
|
huggingtweets/badbunnytwitch
|
huggingtweets
| 2022-02-02T00:35:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/badbunnytwitch/1643762099951/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1313658315767910400/bCaV9qVB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BadBunny 💢</div>
<div style="text-align: center; font-size: 14px;">@badbunnytwitch</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BadBunny 💢.
| Data | BadBunny 💢 |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 177 |
| Short tweets | 1018 |
| Tweets kept | 2051 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jrtmk7ym/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @badbunnytwitch's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/r2t5349l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/r2t5349l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/badbunnytwitch')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mattmcclean/distilbert-base-uncased-finetuned-emotion
|
mattmcclean
| 2022-02-01T19:48:01Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9252235175634111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2173
- Accuracy: 0.925
- F1: 0.9252
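A minimal inference sketch with the `transformers` text-classification pipeline follows; the input sentence is invented, and the label names depend on the `id2label` mapping stored in the checkpoint (they may appear as generic `LABEL_n` ids).
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mattmcclean/distilbert-base-uncased-finetuned-emotion",
)

# Example input; the emotion dataset distinguishes sadness, joy, love, anger, fear and surprise.
print(classifier("I can't wait to see my friends this weekend!"))
```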
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.825 | 1.0 | 250 | 0.2925 | 0.915 | 0.9134 |
| 0.2444 | 2.0 | 500 | 0.2173 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
naleraphael/rasr_sample
|
naleraphael
| 2022-02-01T18:18:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: rasr_sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rasr_sample
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3147
- Wer: 0.2676
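As a hedged illustration, the sketch below runs the checkpoint through the `transformers` ASR pipeline; the audio file name is only an example and 16 kHz mono input is assumed.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="naleraphael/rasr_sample")

# Swedish speech, 16 kHz mono; the file name is a placeholder.
print(asr("swedish_sample.wav")["text"])
```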
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3332 | 1.45 | 500 | 3.3031 | 1.0 |
| 2.9272 | 2.91 | 1000 | 2.9353 | 0.9970 |
| 2.0736 | 4.36 | 1500 | 1.1565 | 0.8714 |
| 1.7339 | 5.81 | 2000 | 0.7156 | 0.6688 |
| 1.5989 | 7.27 | 2500 | 0.5791 | 0.5519 |
| 1.4916 | 8.72 | 3000 | 0.5038 | 0.5169 |
| 1.4562 | 10.17 | 3500 | 0.4861 | 0.4805 |
| 1.3893 | 11.63 | 4000 | 0.4584 | 0.4761 |
| 1.3797 | 13.08 | 4500 | 0.4298 | 0.4686 |
| 1.3508 | 14.53 | 5000 | 0.4138 | 0.3744 |
| 1.3165 | 15.99 | 5500 | 0.4015 | 0.3578 |
| 1.281 | 17.44 | 6000 | 0.3883 | 0.3472 |
| 1.2682 | 18.89 | 6500 | 0.3904 | 0.3434 |
| 1.2477 | 20.35 | 7000 | 0.3726 | 0.3321 |
| 1.2364 | 21.8 | 7500 | 0.3685 | 0.3281 |
| 1.2041 | 23.26 | 8000 | 0.3597 | 0.3194 |
| 1.1901 | 24.71 | 8500 | 0.3542 | 0.3203 |
| 1.1903 | 26.16 | 9000 | 0.3500 | 0.3138 |
| 1.1677 | 27.61 | 9500 | 0.3458 | 0.3067 |
| 1.1718 | 29.07 | 10000 | 0.3595 | 0.3112 |
| 1.1562 | 30.52 | 10500 | 0.3433 | 0.3022 |
| 1.1392 | 31.97 | 11000 | 0.3440 | 0.2936 |
| 1.1258 | 33.43 | 11500 | 0.3396 | 0.2950 |
| 1.1067 | 34.88 | 12000 | 0.3379 | 0.2939 |
| 1.0953 | 36.34 | 12500 | 0.3370 | 0.2868 |
| 1.0835 | 37.79 | 13000 | 0.3317 | 0.2860 |
| 1.0772 | 39.24 | 13500 | 0.3302 | 0.2854 |
| 1.0853 | 40.7 | 14000 | 0.3265 | 0.2783 |
| 1.0689 | 42.15 | 14500 | 0.3306 | 0.2770 |
| 1.0394 | 43.6 | 15000 | 0.3233 | 0.2757 |
| 1.0581 | 45.06 | 15500 | 0.3199 | 0.2713 |
| 1.0362 | 46.51 | 16000 | 0.3154 | 0.2683 |
| 1.0406 | 47.96 | 16500 | 0.3176 | 0.2688 |
| 1.0082 | 49.42 | 17000 | 0.3149 | 0.2679 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
patrickvonplaten/wav2vec2-common_voice-tamil
|
patrickvonplaten
| 2022-02-01T14:17:40Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ta
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tamil
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1172
- Wer: 1.0070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.84 | 100 | 4.0148 | 1.0 |
| No log | 1.69 | 200 | 3.1738 | 1.0 |
| No log | 2.54 | 300 | 2.5980 | 1.0236 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.10.3
|
moussaKam/frugalscore_medium_roberta_bert-score
|
moussaKam
| 2022-02-01T10:51:17Z | 28 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# FrugalScore
FrugalScore is an approach for learning a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
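As a rough sketch of how one of these checkpoints can be used to score a candidate sentence against a reference, assuming each checkpoint is a single-output regression head over an encoded sentence pair (as in the FrugalScore setup), the snippet below runs the medium RoBERTa-distilled model; the example sentences are invented.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "moussaKam/frugalscore_medium_roberta_bert-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

reference = "The cat sat on the mat."
candidate = "A cat was sitting on the mat."

# The pair is encoded together; the single logit approximates the teacher metric
# (BERTScore computed with RoBERTa-Large for this checkpoint).
inputs = tokenizer(reference, candidate, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```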
|
moussaKam/frugalscore_medium_bert-base_bert-score
|
moussaKam
| 2022-02-01T10:50:43Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# FrugalScore
FrugalScore is an approach for learning a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
|
moussaKam/frugalscore_small_bert-base_bert-score
|
moussaKam
| 2022-02-01T10:50:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2110.08559",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# FrugalScore
FrugalScore is an approach for learning a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Project GitHub: https://github.com/moussaKam/FrugalScore
The pretrained checkpoints presented in the paper:
| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
|
MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
|
MaryaAI
| 2022-02-01T08:51:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0589
- Validation Loss: 5.3227
- Epoch: 0
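A minimal inference sketch follows, assuming the repository ships TensorFlow weights (as the Keras callback above suggests); the input sentence is invented.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

name = "MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(name)

# Translate an English STEM-style sentence into Arabic.
inputs = tokenizer("The mitochondria is the powerhouse of the cell.", return_tensors="tf")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```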
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0589 | 5.3227 | 0 |
### Framework versions
- Transformers 4.17.0.dev0
- TensorFlow 2.7.0
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3
|
AndrewMcDowell/wav2vec2-xls-r-1b-arabic
|
AndrewMcDowell
| 2022-02-01T08:13:55Z | 20 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ar",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ar
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-arabic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1373
- Wer: 0.8607
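Since WER is the headline metric here, the short sketch below shows how such a number can be computed from reference and predicted transcripts with the `jiwer` package; the sentences are invented examples, not model output.
```python
import jiwer

references = ["مرحبا بكم في المؤتمر"]   # ground-truth transcript (example)
hypotheses = ["مرحبا بكم المؤتمر"]      # model prediction (example)

# Word error rate over the whole batch, as reported in the tables above.
print(jiwer.wer(references, hypotheses))
```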
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.2416 | 0.84 | 500 | 1.2867 | 0.8875 |
| 2.3089 | 1.67 | 1000 | 1.8336 | 0.9548 |
| 2.3614 | 2.51 | 1500 | 1.5937 | 0.9469 |
| 2.5234 | 3.35 | 2000 | 1.9765 | 0.9867 |
| 2.5373 | 4.19 | 2500 | 1.9062 | 0.9916 |
| 2.5703 | 5.03 | 3000 | 1.9772 | 0.9915 |
| 2.4656 | 5.86 | 3500 | 1.8083 | 0.9829 |
| 2.4339 | 6.7 | 4000 | 1.7548 | 0.9752 |
| 2.344 | 7.54 | 4500 | 1.6146 | 0.9638 |
| 2.2677 | 8.38 | 5000 | 1.5105 | 0.9499 |
| 2.2074 | 9.21 | 5500 | 1.4191 | 0.9357 |
| 2.3768 | 10.05 | 6000 | 1.6663 | 0.9665 |
| 2.3804 | 10.89 | 6500 | 1.6571 | 0.9720 |
| 2.3237 | 11.72 | 7000 | 1.6049 | 0.9637 |
| 2.317 | 12.56 | 7500 | 1.5875 | 0.9655 |
| 2.2988 | 13.4 | 8000 | 1.5357 | 0.9603 |
| 2.2906 | 14.24 | 8500 | 1.5637 | 0.9592 |
| 2.2848 | 15.08 | 9000 | 1.5326 | 0.9537 |
| 2.2381 | 15.91 | 9500 | 1.5631 | 0.9508 |
| 2.2072 | 16.75 | 10000 | 1.4565 | 0.9395 |
| 2.197 | 17.59 | 10500 | 1.4304 | 0.9406 |
| 2.198 | 18.43 | 11000 | 1.4230 | 0.9382 |
| 2.1668 | 19.26 | 11500 | 1.3998 | 0.9315 |
| 2.1498 | 20.1 | 12000 | 1.3920 | 0.9258 |
| 2.1244 | 20.94 | 12500 | 1.3584 | 0.9153 |
| 2.0953 | 21.78 | 13000 | 1.3274 | 0.9054 |
| 2.0762 | 22.61 | 13500 | 1.2933 | 0.9073 |
| 2.0587 | 23.45 | 14000 | 1.2516 | 0.8944 |
| 2.0363 | 24.29 | 14500 | 1.2214 | 0.8902 |
| 2.0302 | 25.13 | 15000 | 1.2087 | 0.8871 |
| 2.0071 | 25.96 | 15500 | 1.1953 | 0.8786 |
| 1.9882 | 26.8 | 16000 | 1.1738 | 0.8712 |
| 1.9772 | 27.64 | 16500 | 1.1647 | 0.8672 |
| 1.9585 | 28.48 | 17000 | 1.1459 | 0.8635 |
| 1.944 | 29.31 | 17500 | 1.1414 | 0.8616 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
mikeee/model_s
|
mikeee
| 2022-02-01T07:41:39Z | 0 | 0 |
transformers
|
[
"transformers",
"zh",
"en",
"etc",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- zh
- en
- etc
tags:
- transformers
---
|
huggingtweets/clamtime-madramami
|
huggingtweets
| 2022-02-01T07:09:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/clamtime-madramami/1643699341002/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486460616927858690/H_L_HiW-_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486839044906618880/x1Q9ED9b_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">clementine!!!! & riley, twink eliminator 🐾🏳️⚧️</div>
<div style="text-align: center; font-size: 14px;">@clamtime-madramami</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from clementine!!!! & riley, twink eliminator 🐾🏳️⚧️.
| Data | clementine!!!! | riley, twink eliminator 🐾🏳️⚧️ |
| --- | --- | --- |
| Tweets downloaded | 3239 | 3247 |
| Retweets | 340 | 114 |
| Short tweets | 872 | 607 |
| Tweets kept | 2027 | 2526 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lh3p7v6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clamtime-madramami's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gman3fy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gman3fy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/clamtime-madramami')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hady/wav2vec2-base-timit-demo-colab
|
hady
| 2022-02-01T07:01:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
BigSalmon/InformalToFormalLincoln19
|
BigSalmon
| 2022-02-01T04:56:29Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
Informal to Formal:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln19")
model = AutoModelWithLMHead.from_pretrained("BigSalmon/InformalToFormalLincoln19")
```
```
https://huggingface.co/spaces/BigSalmon/GPT2 (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2_Most_Probable (The model for this space changes over time)
```
```
https://huggingface.co/spaces/BigSalmon/GPT2Space (The model for this space changes over time)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
###
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
###
- with 2,000,000 individual articles on everything
- wikipedia is the #8 site on the world wide web
- created by anyone with access to a computer
- growing at fast rate
- proof that collaborative community-based projects are the future
Text: encompassing a staggering 2,000,000 articles on every subject conceivable, wikipedia is the 8th most visited website in the world. borne of the collective efforts of anyone with an internet connection, its contents are increasing exponentially. most compellingly, however, this effort is an affirmation that community-based initiatives is the future.
###
-
```
|