| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
huggingtweets/ezeojeda_97
|
huggingtweets
| 2022-02-11T18:26:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ezeojeda_97/1644604009323/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1491399079779352581/L0_MeHf1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Easy</div>
<div style="text-align: center; font-size: 14px;">@ezeojeda_97</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Easy.
| Data | Easy |
| --- | --- |
| Tweets downloaded | 348 |
| Retweets | 25 |
| Short tweets | 58 |
| Tweets kept | 265 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2mcrv516/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ezeojeda_97's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12ymakai) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12ymakai/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/ezeojeda_97')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nguyenvulebinh/spoken-norm
|
nguyenvulebinh
| 2022-02-11T17:21:36Z | 7 | 5 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Transforming spoken text to written text
This model formats raw ASR output from spoken form into written form (e.g. dates, numbers, IDs, ...). It also supports formatting out-of-vocabulary terms by using an external vocabulary.
Some examples:
```text
input : tám giờ chín phút ngày mười tám tháng năm năm hai nghìn không trăm hai mươi hai
output : 8h9 18/5/2022
input : mã số quy đê tê tê đê hai tám chéo hai không không ba
output : mã số qdttd28/2003
input : thể tích tám mét khối trọng lượng năm mươi ki lô gam
output : thể tích 8 m3 trọng lượng 50 kg
input : ngày hai tám tháng tư cô vít bùng phát ở sờ cốt lờn chiếm tám mươi phần trăm là biến chủng đen ta và bê ta
ex_vocab : ['scotland', 'covid', 'delta', 'beta']
output : 28/4 covid bùng phát ở scotland chiếm 80 % là biến chủng delta và beta
```
## Model architecture

# Inference
- Play around with it at the [Hugging Face Space](https://huggingface.co/spaces/nguyenvulebinh/spoken-norm)
```python
import torch
# model_handling and data_handling are Python modules provided in the model repository
import model_handling
from data_handling import DataCollatorForNormSeq2Seq
from model_handling import EncoderDecoderSpokenNorm
import os

# run on CPU
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
## Init tokenizer and model
```python
tokenizer = model_handling.init_tokenizer()
model = EncoderDecoderSpokenNorm.from_pretrained('nguyenvulebinh/spoken-norm', cache_dir=model_handling.cache_dir)
data_collator = DataCollatorForNormSeq2Seq(tokenizer)
```
## Inference example
```python
bias_list = ['scotland', 'covid', 'delta', 'beta']
input_str = 'ngày hai tám tháng tư cô vít bùng phát ở sờ cốt lờn chiếm tám mươi phần trăm là biến chủng đen ta và bê ta'
```
```python
inputs = tokenizer([input_str])
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
if len(bias_list) > 0:
    bias = data_collator.encode_list_string(bias_list)
    bias_input_ids = bias['input_ids']
    bias_attention_mask = bias['attention_mask']
else:
    bias_input_ids = None
    bias_attention_mask = None

inputs = {
    "input_ids": torch.tensor(input_ids),
    "attention_mask": torch.tensor(attention_mask),
    "bias_input_ids": bias_input_ids,
    "bias_attention_mask": bias_attention_mask,
}
```
## Format input text **with** bias phrases
```python
outputs = model.generate(**inputs, output_attentions=True, num_beams=1, num_return_sequences=1)
for output in outputs.cpu().detach().numpy().tolist():
    # print('\n', tokenizer.decode(output, skip_special_tokens=True).split(), '\n')
    print(tokenizer.sp_model.DecodePieces(tokenizer.decode(output, skip_special_tokens=True).split()))
```
28/4 covid bùng phát ở scotland chiếm 80 % là biến chủng delta và beta
## Format input text **without** bias phrases
```python
outputs = model.generate(**{
    "input_ids": torch.tensor(input_ids),
    "attention_mask": torch.tensor(attention_mask),
    "bias_input_ids": None,
    "bias_attention_mask": None,
}, output_attentions=True, num_beams=1, num_return_sequences=1)

for output in outputs.cpu().detach().numpy().tolist():
    # print('\n', tokenizer.decode(output, skip_special_tokens=True).split(), '\n')
    print(tokenizer.sp_model.DecodePieces(tokenizer.decode(output, skip_special_tokens=True).split()))
```
28/4 cô vít bùng phát ở sờ cốt lờn chiếm 80 % là biến chủng đen ta và bê ta
## Contact
[email protected]
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh)
|
sshasnain/wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
|
sshasnain
| 2022-02-11T13:25:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Wer: 0.4111
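As a quick check, the model can be tried with the transformers ASR pipeline (a minimal sketch; the audio path is illustrative, and audio should be sampled at 16 kHz):
```python
from transformers import pipeline

# "command.wav" is an illustrative path to a 16 kHz audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="sshasnain/wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic",
)
print(asr("command.wav"))
```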
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
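For reference, these settings map roughly onto the transformers `TrainingArguments` below (a sketch under that assumption; the original training script is not part of this card):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=100,
    fp16=True,  # Native AMP mixed precision
)
```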
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2982 | 17.86 | 500 | 2.4580 | 1.1089 |
| 0.9644 | 35.71 | 1000 | 0.1250 | 0.5156 |
| 0.1767 | 53.57 | 1500 | 0.0310 | 0.4267 |
| 0.0912 | 71.43 | 2000 | 0.0149 | 0.4178 |
| 0.0505 | 89.29 | 2500 | 0.0068 | 0.4111 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mvip/wav2vec2-large-xls-r-300m-tr
|
mvip
| 2022-02-11T10:58:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4074
- Wer: 0.4227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9399 | 4.21 | 400 | 0.7252 | 0.7387 |
| 0.4147 | 8.42 | 800 | 0.4693 | 0.5201 |
| 0.1855 | 12.63 | 1200 | 0.4584 | 0.4848 |
| 0.1256 | 16.84 | 1600 | 0.4464 | 0.4708 |
| 0.0948 | 21.05 | 2000 | 0.4261 | 0.4389 |
| 0.0714 | 25.26 | 2400 | 0.4331 | 0.4349 |
| 0.0532 | 29.47 | 2800 | 0.4074 | 0.4227 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
edbeeching/test-trainer-to-hub
|
edbeeching
| 2022-02-11T10:36:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: test-trainer-to-hub
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8455882352941176
- name: F1
type: f1
value: 0.893760539629005
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer-to-hub
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7352
- Accuracy: 0.8456
- F1: 0.8938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.4489 | 0.8235 | 0.8792 |
| 0.5651 | 2.0 | 918 | 0.4885 | 0.8260 | 0.8811 |
| 0.3525 | 3.0 | 1377 | 0.7352 | 0.8456 | 0.8938 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
mbateman/distilbert-base-uncased-finetuned-squad-d5716d28
|
mbateman
| 2022-02-11T09:26:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
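Schematically, the second distillation step optimizes a weighted sum of the hard-target loss and a temperature-scaled KL term against the teacher's logits (a generic sketch, not the exact training code; `alpha` and `T` are illustrative hyperparameters):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    # Hard-target loss against the gold labels (e.g. SQuAD answer span positions)
    hard = F.cross_entropy(student_logits, labels)
    # Soft-target loss against the teacher's temperature-smoothed distribution
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1 - alpha) * soft
```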
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
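For example, the metric can be computed like this (a minimal sketch with toy predictions):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "1", "prediction_text": "Paris"}]
references = [{"id": "1", "answers": {"text": ["Paris"], "answer_start": [0]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```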
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
shahukareem/wav2vec2-xls-r-1b-dv
|
shahukareem
| 2022-02-11T08:15:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"dv",
"robust-speech-event",
"model_for_talk",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- dv
- robust-speech-event
- model_for_talk
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: dv
metrics:
- name: Test WER
type: wer
value: 21.32
- name: Test CER
type: cer
value: 3.43
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-dv
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
- Wer: 0.2123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.8412 | 0.66 | 400 | 0.7160 | 0.7913 |
| 0.6832 | 1.33 | 800 | 0.3401 | 0.5268 |
| 0.4624 | 1.99 | 1200 | 0.2671 | 0.4683 |
| 0.3832 | 2.65 | 1600 | 0.2395 | 0.4410 |
| 0.3443 | 3.32 | 2000 | 0.2410 | 0.4296 |
| 0.324 | 3.98 | 2400 | 0.2302 | 0.4143 |
| 0.2934 | 4.64 | 2800 | 0.2402 | 0.4136 |
| 0.2773 | 5.31 | 3200 | 0.2134 | 0.4088 |
| 0.2638 | 5.97 | 3600 | 0.2072 | 0.4037 |
| 0.2479 | 6.63 | 4000 | 0.2036 | 0.3876 |
| 0.2424 | 7.3 | 4400 | 0.2037 | 0.3767 |
| 0.2249 | 7.96 | 4800 | 0.1959 | 0.3802 |
| 0.2169 | 8.62 | 5200 | 0.1943 | 0.3813 |
| 0.2109 | 9.29 | 5600 | 0.1944 | 0.3691 |
| 0.1991 | 9.95 | 6000 | 0.1870 | 0.3589 |
| 0.1917 | 10.61 | 6400 | 0.1834 | 0.3485 |
| 0.1862 | 11.28 | 6800 | 0.1857 | 0.3486 |
| 0.1744 | 11.94 | 7200 | 0.1812 | 0.3330 |
| 0.171 | 12.6 | 7600 | 0.1797 | 0.3436 |
| 0.1599 | 13.27 | 8000 | 0.1839 | 0.3319 |
| 0.1597 | 13.93 | 8400 | 0.1737 | 0.3385 |
| 0.1494 | 14.59 | 8800 | 0.1807 | 0.3239 |
| 0.1444 | 15.26 | 9200 | 0.1750 | 0.3155 |
| 0.1382 | 15.92 | 9600 | 0.1705 | 0.3084 |
| 0.1299 | 16.58 | 10000 | 0.1777 | 0.2999 |
| 0.1306 | 17.25 | 10400 | 0.1765 | 0.3056 |
| 0.1239 | 17.91 | 10800 | 0.1676 | 0.2864 |
| 0.1149 | 18.57 | 11200 | 0.1774 | 0.2861 |
| 0.1134 | 19.24 | 11600 | 0.1654 | 0.2699 |
| 0.1101 | 19.9 | 12000 | 0.1621 | 0.2651 |
| 0.1038 | 20.56 | 12400 | 0.1686 | 0.2610 |
| 0.1038 | 21.23 | 12800 | 0.1722 | 0.2559 |
| 0.0988 | 21.89 | 13200 | 0.1708 | 0.2486 |
| 0.0949 | 22.55 | 13600 | 0.1696 | 0.2453 |
| 0.0913 | 23.22 | 14000 | 0.1677 | 0.2424 |
| 0.0879 | 23.88 | 14400 | 0.1640 | 0.2359 |
| 0.0888 | 24.54 | 14800 | 0.1697 | 0.2347 |
| 0.0826 | 25.21 | 15200 | 0.1709 | 0.2314 |
| 0.0819 | 25.87 | 15600 | 0.1679 | 0.2256 |
| 0.0793 | 26.53 | 16000 | 0.1701 | 0.2214 |
| 0.0773 | 27.2 | 16400 | 0.1682 | 0.2176 |
| 0.0783 | 27.86 | 16800 | 0.1685 | 0.2165 |
| 0.074 | 28.52 | 17200 | 0.1688 | 0.2155 |
| 0.0753 | 29.19 | 17600 | 0.1695 | 0.2110 |
| 0.0699 | 29.85 | 18000 | 0.1702 | 0.2123 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
lgris/WavLM-large-CORAA-pt
|
lgris
| 2022-02-10T23:21:45Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"generated_from_trainer",
"pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- pt
license: apache-2.0
tags:
- generated_from_trainer
- pt
model-index:
- name: WavLM-large-CORAA-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WavLM-large-CORAA-pt
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on [CORAA dataset](https://github.com/nilc-nlp/CORAA).
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Wer: 0.3840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.04 | 1000 | 1.9230 | 0.9960 |
| 5.153 | 0.08 | 2000 | 1.3733 | 0.8444 |
| 5.153 | 0.13 | 3000 | 1.1992 | 0.7362 |
| 1.367 | 0.17 | 4000 | 1.1289 | 0.6957 |
| 1.367 | 0.21 | 5000 | 1.0357 | 0.6470 |
| 1.1824 | 0.25 | 6000 | 1.0216 | 0.6201 |
| 1.1824 | 0.29 | 7000 | 0.9338 | 0.6036 |
| 1.097 | 0.33 | 8000 | 0.9149 | 0.5760 |
| 1.097 | 0.38 | 9000 | 0.8885 | 0.5541 |
| 1.0254 | 0.42 | 10000 | 0.8678 | 0.5366 |
| 1.0254 | 0.46 | 11000 | 0.8349 | 0.5323 |
| 0.9782 | 0.5 | 12000 | 0.8230 | 0.5155 |
| 0.9782 | 0.54 | 13000 | 0.8245 | 0.5049 |
| 0.9448 | 0.59 | 14000 | 0.7802 | 0.4990 |
| 0.9448 | 0.63 | 15000 | 0.7650 | 0.4900 |
| 0.9092 | 0.67 | 16000 | 0.7665 | 0.4796 |
| 0.9092 | 0.71 | 17000 | 0.7568 | 0.4795 |
| 0.8764 | 0.75 | 18000 | 0.7403 | 0.4615 |
| 0.8764 | 0.8 | 19000 | 0.7219 | 0.4644 |
| 0.8498 | 0.84 | 20000 | 0.7180 | 0.4502 |
| 0.8498 | 0.88 | 21000 | 0.7017 | 0.4436 |
| 0.8278 | 0.92 | 22000 | 0.6992 | 0.4395 |
| 0.8278 | 0.96 | 23000 | 0.7021 | 0.4329 |
| 0.8077 | 1.0 | 24000 | 0.6892 | 0.4265 |
| 0.8077 | 1.05 | 25000 | 0.6940 | 0.4248 |
| 0.7486 | 1.09 | 26000 | 0.6767 | 0.4202 |
| 0.7486 | 1.13 | 27000 | 0.6734 | 0.4150 |
| 0.7459 | 1.17 | 28000 | 0.6650 | 0.4152 |
| 0.7459 | 1.21 | 29000 | 0.6559 | 0.4078 |
| 0.7304 | 1.26 | 30000 | 0.6536 | 0.4088 |
| 0.7304 | 1.3 | 31000 | 0.6537 | 0.4025 |
| 0.7183 | 1.34 | 32000 | 0.6462 | 0.4008 |
| 0.7183 | 1.38 | 33000 | 0.6381 | 0.3973 |
| 0.7059 | 1.42 | 34000 | 0.6266 | 0.3930 |
| 0.7059 | 1.46 | 35000 | 0.6280 | 0.3921 |
| 0.6983 | 1.51 | 36000 | 0.6248 | 0.3897 |
| 0.6983 | 1.55 | 37000 | 0.6275 | 0.3872 |
| 0.6892 | 1.59 | 38000 | 0.6199 | 0.3852 |
| 0.6892 | 1.63 | 39000 | 0.6180 | 0.3842 |
| 0.691 | 1.67 | 40000 | 0.6144 | 0.3840 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
|
emre
| 2022-02-10T22:57:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
This model is a fine-tuned version of [emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8](https://huggingface.co/emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Wer: 0.5010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0402 | 0.67 | 500 | 0.3354 | 0.5681 |
| 0.7265 | 1.33 | 1000 | 0.3181 | 0.5444 |
| 0.6858 | 2.0 | 1500 | 0.3044 | 0.5322 |
| 0.6537 | 2.66 | 2000 | 0.2911 | 0.5217 |
| 0.6337 | 3.33 | 2500 | 0.2874 | 0.5164 |
| 0.6111 | 3.99 | 3000 | 0.2758 | 0.5059 |
| 0.5815 | 4.66 | 3500 | 0.2708 | 0.5010 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
|
emre
| 2022-02-10T22:57:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4813
- Wer: 0.7207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2 | 0.53 | 400 | 3.1949 | 0.9964 |
| 2.9387 | 1.07 | 800 | 2.5015 | 1.0337 |
| 1.5975 | 1.6 | 1200 | 1.0928 | 0.9945 |
| 1.0688 | 2.13 | 1600 | 0.8388 | 0.9390 |
| 0.8977 | 2.66 | 2000 | 0.7106 | 0.8889 |
| 0.789 | 3.2 | 2400 | 0.6051 | 0.8273 |
| 0.7116 | 3.73 | 2800 | 0.5580 | 0.7855 |
| 0.6576 | 4.26 | 3200 | 0.5033 | 0.7433 |
| 0.6002 | 4.79 | 3600 | 0.4813 | 0.7207 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-med
|
emre
| 2022-02-10T22:56:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Turkish-Tr-med
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-med
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4727
- Wer: 0.4677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8093 | 4.21 | 400 | 2.7831 | 1.0 |
| 0.9881 | 8.42 | 800 | 0.5088 | 0.6681 |
| 0.3519 | 12.63 | 1200 | 0.4496 | 0.6007 |
| 0.2436 | 16.84 | 1600 | 0.4993 | 0.5654 |
| 0.1874 | 21.05 | 2000 | 0.4793 | 0.5530 |
| 0.1561 | 25.26 | 2400 | 0.5187 | 0.5589 |
| 0.1336 | 29.47 | 2800 | 0.5135 | 0.5311 |
| 0.1163 | 33.68 | 3200 | 0.4960 | 0.5143 |
| 0.1056 | 37.89 | 3600 | 0.4795 | 0.5045 |
| 0.0959 | 42.11 | 4000 | 0.4883 | 0.4987 |
| 0.0819 | 46.32 | 4400 | 0.4799 | 0.4903 |
| 0.0756 | 50.53 | 4800 | 0.4822 | 0.4831 |
| 0.0692 | 54.74 | 5200 | 0.4621 | 0.4762 |
| 0.062 | 58.95 | 5600 | 0.4727 | 0.4677 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
ibombonato/vit-age-classifier
|
ibombonato
| 2022-02-10T22:06:51Z | 76 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vit-age-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8364999890327454
---
# vit-age-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
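A minimal usage sketch with the transformers image-classification pipeline (the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ibombonato/vit-age-classifier")
print(classifier("face.jpg"))  # illustrative path to a portrait image
```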
|
squish/BertHarmon
|
squish
| 2022-02-10T21:28:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
thumbnail: "https://en.memesrandom.com/wp-content/uploads/2020/11/juega-ajedrez.jpeg"
widget:
- text: "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]"
- example_title: Empty Board
- text: "6Q1/5k2/3P4/1R3p2/P4P2/7Q/6RK/8 b - - 2 60 Black <MOVE_SEP> [MASK]"
- example_title: Late Game Board
---
# BertHarmon
Research done at Johns Hopkins University by Michael DeLeo
Contact: [email protected]

## Introduction
BertHarmon is a BERT model trained for the task of Chess.

## Sample Usage
```python
from transformers import pipeline
task = pipeline('fill-mask', model='squish/BertHarmon')
task("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]")
```
The input string consists of the FEN position, followed by the player color and a move separator, and finally the [MASK] token. The mask token stands for the chess move, in algebraic notation, to be played given the current board state in FEN notation.
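For convenience, the prompt can be assembled from a FEN string and the side to move; `make_prompt` below is an illustrative helper, not part of the model's API:
```python
def make_prompt(fen: str, color: str) -> str:
    # FEN position, side to move, move separator, then the mask token
    return f"{fen} {color} <MOVE_SEP> [MASK]"

task(make_prompt("6Q1/5k2/3P4/1R3p2/P4P2/7Q/6RK/8 b - - 2 60", "Black"))
```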
## Links
[Github](https://github.com/deleomike/NLP-Chess)
[HuggingFace](https://huggingface.co/squish/BertHarmon)
|
huggingtweets/realsophiarobot
|
huggingtweets
| 2022-02-10T20:03:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/realsophiarobot/1644523350998/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1489664916508524545/ePAeH8lT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sophia the Robot</div>
<div style="text-align: center; font-size: 14px;">@realsophiarobot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sophia the Robot.
| Data | Sophia the Robot |
| --- | --- |
| Tweets downloaded | 2341 |
| Retweets | 313 |
| Short tweets | 99 |
| Tweets kept | 1929 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rfk5yso3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @realsophiarobot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32n5oiz0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32n5oiz0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/realsophiarobot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jpbrammer
|
huggingtweets
| 2022-02-10T15:50:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jpbrammer/1644508224660/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1190049285842329600/qwCL5mdU_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JP</div>
<div style="text-align: center; font-size: 14px;">@jpbrammer</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JP.
| Data | JP |
| --- | --- |
| Tweets downloaded | 3206 |
| Retweets | 938 |
| Short tweets | 345 |
| Tweets kept | 1923 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13lk57y6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jpbrammer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3umvc7qg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3umvc7qg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/jpbrammer')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
satyaalmasian/temporal_tagger_German_GELECTRA
|
satyaalmasian
| 2022-02-10T15:23:51Z | 61 | 1 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# BERT-based temporal tagger
Token classifier for temporal tagging of plain text using the German GELECTRA model.
# Model description
GELECTRA is a transformer (ELECTRA) model pretrained on a large corpus of German data in a self-supervised fashion. We use GELECTRA for token classification to tag the tokens in text with the following classes (the tags follow the English TIMEX3 format):
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
# Intended uses & limitations
This model is best used together with the code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output can be noisy and hard to decipher; the repository provides alignment functions and voting strategies for the final output. The repository demonstrates the English models, but the German model can be used the same way.
# How to use
You can load the model as follows:
```python
from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA")
```
For inference, use:
```python
processed_text = tokenizer(input_text, return_tensors="pt")
result = model(**processed_text)
classification = result[0]
```
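To inspect the raw predictions, the logits can be mapped to tag labels directly (a minimal sketch, assuming the label names are stored in `model.config.id2label`; the repository's `merge_tokens` gives cleaner output):
```python
import torch

predictions = torch.argmax(classification, dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(processed_text["input_ids"][0].tolist())
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```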
For an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a function `merge_tokens` to decipher the output.
To fine-tune further, use the `Trainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
For pre-training, we use a large corpus of news articles automatically annotated with HeidelTime.
We use 2 data sources for fine-tuning:
[TempEval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), automatically translated to German, and the
[KRAUTS dataset](https://github.com/JannikStroetgen/KRAUTS).
# Training procedure
The model is trained from publicly available checkpoints on huggingface (`deepset/gelectra-large`), with a batch size of 192. We use a learning rate of 1e-07 with an Adam optimizer and linear weight decay for pretraining.
For fine-tuning we use a batch size of 16. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 3 different random seeds; this version of the model uses seed=7.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
|
junnyu/roformer_base_wwm_cluecorpussmall
|
junnyu
| 2022-02-10T12:26:39Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roformer",
"fill-mask",
"tf2.0",
"paddlepaddle",
"zh",
"arxiv:2104.09864",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: zh
tags:
- roformer
- pytorch
- tf2.0
- paddlepaddle
widget:
- text: "今天[MASK]很好,我想去公园玩!"
---
## Introduction
Pretrained model on a 13G Chinese corpus (CLUECorpusSmall). Whole-word masked language modeling (MLM) and sentence order prediction (SOP) are used as the training tasks.
The training logic follows https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/language_model/ernie-1.0
## Training details
- paddlepaddle+paddlenlp
- V100 x 4
- batch size 256
- max_seq_len 512
- max_lr 0.0001
- min_lr 0.00001
- weight_decay 0.01
- grad_clip 1.0
- total sentences seen during training: ```128*30w + 256*15w + 256*14.5w + 256*46.5w + 256*17w = 27648w``` (w = 10,000)
- roughly 54% of a 100w-step run at batch size 512
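Spelled out, the total checks out as follows (w = 10,000):
```python
w = 10_000
total = 128*30*w + 256*15*w + 256*14.5*w + 256*46.5*w + 256*17*w
print(total)                    # 276,480,000 sentences = 27648w
print(total / (512 * 100 * w))  # ≈ 0.54 of a 100w-step run at batch size 512
```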
Final loss:
```python
[2022-02-05 16:05:59,067] [ INFO] - global step 170100, loss: 2.651634932, lm_loss: 2.603405, sop_loss: 0.048229, speed: 1.06 steps/s, ips: 271.68 seqs/s, learning rate: 6.66465e-05, loss_scaling: 137438.96875, num_good_steps: 356, num_bad_steps: 0
[2022-02-05 16:07:28,227] [ INFO] - global step 170200, loss: 2.822231531, lm_loss: 2.662831, sop_loss: 0.159401, speed: 1.12 steps/s, ips: 287.13 seqs/s, learning rate: 6.66263e-05, loss_scaling: 137438.96875, num_good_steps: 59, num_bad_steps: 0
[2022-02-05 16:08:57,346] [ INFO] - global step 170300, loss: 2.710968971, lm_loss: 2.673646, sop_loss: 0.037323, speed: 1.12 steps/s, ips: 287.26 seqs/s, learning rate: 6.66061e-05, loss_scaling: 137438.96875, num_good_steps: 159, num_bad_steps: 0
[2022-02-05 16:10:26,698] [ INFO] - global step 170400, loss: 2.867662907, lm_loss: 2.619032, sop_loss: 0.248631, speed: 1.12 steps/s, ips: 286.51 seqs/s, learning rate: 6.65859e-05, loss_scaling: 137438.96875, num_good_steps: 259, num_bad_steps: 0
[2022-02-05 16:11:55,714] [ INFO] - global step 170500, loss: 3.158756495, lm_loss: 2.953678, sop_loss: 0.205079, speed: 1.12 steps/s, ips: 287.59 seqs/s, learning rate: 6.65657e-05, loss_scaling: 137438.96875, num_good_steps: 359, num_bad_steps: 0
[2022-02-05 16:13:24,869] [ INFO] - global step 170600, loss: 2.860815048, lm_loss: 2.754750, sop_loss: 0.106064, speed: 1.12 steps/s, ips: 287.14 seqs/s, learning rate: 6.65455e-05, loss_scaling: 137438.96875, num_good_steps: 33, num_bad_steps: 0
```
### TensorFlow version
https://github.com/ZhuiyiTechnology/roformer
### PyTorch + TF 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## Usage with PyTorch
```python
import torch
from transformers import RoFormerForMaskedLM, BertTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = BertTokenizer.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_base_wwm_cluecorpussmall")
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
        pt_outputs_sentence += "[" + "||".join(tokens) + "]"
    else:
        pt_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True))
print(pt_outputs_sentence)
# pytorch: 今天[天||人||气||阳||雨]很好,我[想||就||要||也||还]去公园玩。
```
## Citation
BibTeX:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
SetFit/deberta-v3-large__sst2__train-16-8
|
SetFit
| 2022-02-10T11:15:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-16-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-16-8
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6915
- Accuracy: 0.6579
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7129 | 1.0 | 7 | 0.7309 | 0.2857 |
| 0.6549 | 2.0 | 14 | 0.7316 | 0.4286 |
| 0.621 | 3.0 | 21 | 0.7131 | 0.5714 |
| 0.3472 | 4.0 | 28 | 0.5703 | 0.4286 |
| 0.2041 | 5.0 | 35 | 0.6675 | 0.5714 |
| 0.031 | 6.0 | 42 | 1.6750 | 0.5714 |
| 0.0141 | 7.0 | 49 | 1.8743 | 0.5714 |
| 0.0055 | 8.0 | 56 | 1.1778 | 0.5714 |
| 0.0024 | 9.0 | 63 | 1.0699 | 0.5714 |
| 0.0019 | 10.0 | 70 | 1.0933 | 0.5714 |
| 0.0012 | 11.0 | 77 | 1.1218 | 0.7143 |
| 0.0007 | 12.0 | 84 | 1.1468 | 0.7143 |
| 0.0006 | 13.0 | 91 | 1.1584 | 0.7143 |
| 0.0006 | 14.0 | 98 | 1.3092 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-8
|
SetFit
| 2022-02-10T09:59:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-8
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7414
- Accuracy: 0.5623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6597 | 1.0 | 3 | 0.7716 | 0.25 |
| 0.6376 | 2.0 | 6 | 0.7802 | 0.25 |
| 0.5857 | 3.0 | 9 | 0.6625 | 0.75 |
| 0.4024 | 4.0 | 12 | 0.5195 | 0.75 |
| 0.2635 | 5.0 | 15 | 0.4222 | 1.0 |
| 0.1714 | 6.0 | 18 | 0.4410 | 0.5 |
| 0.1267 | 7.0 | 21 | 0.7773 | 0.75 |
| 0.0582 | 8.0 | 24 | 0.9070 | 0.75 |
| 0.0374 | 9.0 | 27 | 0.9539 | 0.75 |
| 0.0204 | 10.0 | 30 | 1.0507 | 0.75 |
| 0.012 | 11.0 | 33 | 1.2802 | 0.5 |
| 0.0086 | 12.0 | 36 | 1.4272 | 0.5 |
| 0.0049 | 13.0 | 39 | 1.4803 | 0.5 |
| 0.0039 | 14.0 | 42 | 1.4912 | 0.5 |
| 0.0031 | 15.0 | 45 | 1.5231 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-7
|
SetFit
| 2022-02-10T09:52:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-7
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7037
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6864 | 1.0 | 3 | 0.7800 | 0.25 |
| 0.6483 | 2.0 | 6 | 0.8067 | 0.25 |
| 0.6028 | 3.0 | 9 | 0.8500 | 0.25 |
| 0.4086 | 4.0 | 12 | 1.0661 | 0.25 |
| 0.2923 | 5.0 | 15 | 1.2302 | 0.25 |
| 0.2059 | 6.0 | 18 | 1.0312 | 0.5 |
| 0.1238 | 7.0 | 21 | 1.1271 | 0.5 |
| 0.0711 | 8.0 | 24 | 1.3100 | 0.5 |
| 0.0453 | 9.0 | 27 | 1.4208 | 0.5 |
| 0.0198 | 10.0 | 30 | 1.5988 | 0.5 |
| 0.0135 | 11.0 | 33 | 1.9174 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-2
|
SetFit
| 2022-02-10T08:35:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-2
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6794
- Accuracy: 0.6063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6942 | 1.0 | 3 | 0.7940 | 0.25 |
| 0.6068 | 2.0 | 6 | 0.9326 | 0.25 |
| 0.6553 | 3.0 | 9 | 0.7979 | 0.25 |
| 0.475 | 4.0 | 12 | 0.7775 | 0.25 |
| 0.377 | 5.0 | 15 | 0.7477 | 0.25 |
| 0.3176 | 6.0 | 18 | 0.6856 | 0.75 |
| 0.2708 | 7.0 | 21 | 0.6554 | 0.75 |
| 0.2855 | 8.0 | 24 | 0.8129 | 0.5 |
| 0.148 | 9.0 | 27 | 0.7074 | 0.75 |
| 0.0947 | 10.0 | 30 | 0.7090 | 0.75 |
| 0.049 | 11.0 | 33 | 0.7885 | 0.75 |
| 0.0252 | 12.0 | 36 | 0.9203 | 0.75 |
| 0.0165 | 13.0 | 39 | 1.0937 | 0.75 |
| 0.0084 | 14.0 | 42 | 1.2502 | 0.75 |
| 0.0059 | 15.0 | 45 | 1.3726 | 0.75 |
| 0.0037 | 16.0 | 48 | 1.4784 | 0.75 |
| 0.003 | 17.0 | 51 | 1.5615 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-1
|
SetFit
| 2022-02-10T08:28:12Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-1
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7020
- Accuracy: 0.5008
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6773 | 1.0 | 3 | 0.7822 | 0.25 |
| 0.6587 | 2.0 | 6 | 0.8033 | 0.25 |
| 0.693 | 3.0 | 9 | 0.8101 | 0.25 |
| 0.5979 | 4.0 | 12 | 1.1235 | 0.25 |
| 0.4095 | 5.0 | 15 | 1.3563 | 0.25 |
| 0.2836 | 6.0 | 18 | 1.5325 | 0.5 |
| 0.1627 | 7.0 | 21 | 1.7786 | 0.25 |
| 0.0956 | 8.0 | 24 | 2.0067 | 0.5 |
| 0.0535 | 9.0 | 27 | 2.3351 | 0.5 |
| 0.0315 | 10.0 | 30 | 2.6204 | 0.5 |
| 0.0182 | 11.0 | 33 | 2.8483 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-9
|
SetFit
| 2022-02-10T08:11:34Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-9
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-32-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7075
- Accuracy: 0.692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1054 | 1.0 | 19 | 1.0938 | 0.35 |
| 1.0338 | 2.0 | 38 | 1.0563 | 0.65 |
| 0.8622 | 3.0 | 57 | 0.9372 | 0.6 |
| 0.5919 | 4.0 | 76 | 0.8461 | 0.6 |
| 0.3357 | 5.0 | 95 | 1.0206 | 0.45 |
| 0.1621 | 6.0 | 114 | 0.9802 | 0.7 |
| 0.0637 | 7.0 | 133 | 1.2434 | 0.65 |
| 0.0261 | 8.0 | 152 | 1.3865 | 0.65 |
| 0.0156 | 9.0 | 171 | 1.4414 | 0.7 |
| 0.01 | 10.0 | 190 | 1.5502 | 0.7 |
| 0.0079 | 11.0 | 209 | 1.6102 | 0.7 |
| 0.0062 | 12.0 | 228 | 1.6525 | 0.7 |
| 0.0058 | 13.0 | 247 | 1.6884 | 0.7 |
| 0.0046 | 14.0 | 266 | 1.7479 | 0.7 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-7
|
SetFit
| 2022-02-10T08:09:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-7
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-32-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8210
- Accuracy: 0.6305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0989 | 1.0 | 19 | 1.0655 | 0.4 |
| 1.0102 | 2.0 | 38 | 0.9927 | 0.6 |
| 0.8063 | 3.0 | 57 | 0.9117 | 0.5 |
| 0.5284 | 4.0 | 76 | 0.8058 | 0.55 |
| 0.2447 | 5.0 | 95 | 0.8393 | 0.45 |
| 0.098 | 6.0 | 114 | 0.8438 | 0.6 |
| 0.0388 | 7.0 | 133 | 1.1901 | 0.45 |
| 0.0188 | 8.0 | 152 | 1.4429 | 0.45 |
| 0.0121 | 9.0 | 171 | 1.3648 | 0.4 |
| 0.0082 | 10.0 | 190 | 1.4768 | 0.4 |
| 0.0066 | 11.0 | 209 | 1.4830 | 0.45 |
| 0.0057 | 12.0 | 228 | 1.4936 | 0.45 |
| 0.0053 | 13.0 | 247 | 1.5649 | 0.4 |
| 0.0041 | 14.0 | 266 | 1.6306 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-4
|
SetFit
| 2022-02-10T08:05:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-4
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-32-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7384
- Accuracy: 0.724
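As a rough illustration of inference with this checkpoint, here is a hedged sketch using the text-classification pipeline; the card does not document the label names, so the generic `LABEL_0`/`LABEL_1`/`LABEL_2` ids returned below are assumptions about the three classes:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-4",
)
# return_all_scores=True (Transformers 4.15) yields one score per class;
# which of LABEL_0/1/2 means hate/offensive/neither is not documented here.
print(classifier("You are a wonderful person!", return_all_scores=True))
```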
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1013 | 1.0 | 19 | 1.0733 | 0.55 |
| 1.0226 | 2.0 | 38 | 1.0064 | 0.65 |
| 0.8539 | 3.0 | 57 | 0.8758 | 0.75 |
| 0.584 | 4.0 | 76 | 0.6941 | 0.7 |
| 0.2813 | 5.0 | 95 | 0.5151 | 0.7 |
| 0.1122 | 6.0 | 114 | 0.4351 | 0.8 |
| 0.0432 | 7.0 | 133 | 0.4896 | 0.85 |
| 0.0199 | 8.0 | 152 | 0.5391 | 0.85 |
| 0.0126 | 9.0 | 171 | 0.5200 | 0.85 |
| 0.0085 | 10.0 | 190 | 0.5622 | 0.85 |
| 0.0069 | 11.0 | 209 | 0.5950 | 0.85 |
| 0.0058 | 12.0 | 228 | 0.6015 | 0.85 |
| 0.0053 | 13.0 | 247 | 0.6120 | 0.85 |
| 0.0042 | 14.0 | 266 | 0.6347 | 0.85 |
| 0.0039 | 15.0 | 285 | 0.6453 | 0.85 |
| 0.0034 | 16.0 | 304 | 0.6660 | 0.85 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-3
|
SetFit
| 2022-02-10T08:04:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-3
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-32-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8286
- Accuracy: 0.661
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1041 | 1.0 | 19 | 1.0658 | 0.5 |
| 1.009 | 2.0 | 38 | 0.9892 | 0.7 |
| 0.7925 | 3.0 | 57 | 0.8516 | 0.7 |
| 0.5279 | 4.0 | 76 | 0.7877 | 0.65 |
| 0.2932 | 5.0 | 95 | 0.7592 | 0.65 |
| 0.1166 | 6.0 | 114 | 0.9437 | 0.65 |
| 0.044 | 7.0 | 133 | 1.0315 | 0.75 |
| 0.0197 | 8.0 | 152 | 1.3513 | 0.55 |
| 0.0126 | 9.0 | 171 | 1.1702 | 0.7 |
| 0.0083 | 10.0 | 190 | 1.2272 | 0.7 |
| 0.0068 | 11.0 | 209 | 1.2889 | 0.7 |
| 0.0059 | 12.0 | 228 | 1.3073 | 0.7 |
| 0.0052 | 13.0 | 247 | 1.3595 | 0.7 |
| 0.0041 | 14.0 | 266 | 1.4443 | 0.7 |
| 0.0038 | 15.0 | 285 | 1.4709 | 0.7 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-2
|
SetFit
| 2022-02-10T08:02:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-2
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-32-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7136
- Accuracy: 0.679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1052 | 1.0 | 19 | 1.0726 | 0.45 |
| 1.0421 | 2.0 | 38 | 1.0225 | 0.5 |
| 0.9173 | 3.0 | 57 | 0.9164 | 0.6 |
| 0.6822 | 4.0 | 76 | 0.8251 | 0.7 |
| 0.4407 | 5.0 | 95 | 0.8908 | 0.5 |
| 0.2367 | 6.0 | 114 | 0.6772 | 0.75 |
| 0.1145 | 7.0 | 133 | 0.7792 | 0.65 |
| 0.0479 | 8.0 | 152 | 1.0657 | 0.6 |
| 0.0186 | 9.0 | 171 | 1.2228 | 0.65 |
| 0.0111 | 10.0 | 190 | 1.1100 | 0.6 |
| 0.0083 | 11.0 | 209 | 1.1991 | 0.65 |
| 0.0067 | 12.0 | 228 | 1.2654 | 0.65 |
| 0.0061 | 13.0 | 247 | 1.2837 | 0.65 |
| 0.0046 | 14.0 | 266 | 1.2860 | 0.6 |
| 0.0043 | 15.0 | 285 | 1.3160 | 0.65 |
| 0.0037 | 16.0 | 304 | 1.3323 | 0.65 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-1
|
SetFit
| 2022-02-10T08:01:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-32-1
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-32-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0606
- Accuracy: 0.4745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 19 | 1.1045 | 0.2 |
| 0.9967 | 2.0 | 38 | 1.1164 | 0.35 |
| 0.8164 | 3.0 | 57 | 1.1570 | 0.4 |
| 0.5884 | 4.0 | 76 | 1.2403 | 0.35 |
| 0.3322 | 5.0 | 95 | 1.3815 | 0.35 |
| 0.156 | 6.0 | 114 | 1.8102 | 0.3 |
| 0.0576 | 7.0 | 133 | 2.1439 | 0.4 |
| 0.0227 | 8.0 | 152 | 2.4368 | 0.3 |
| 0.0133 | 9.0 | 171 | 2.5994 | 0.4 |
| 0.009 | 10.0 | 190 | 2.7388 | 0.35 |
| 0.0072 | 11.0 | 209 | 2.8287 | 0.35 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-8
|
SetFit
| 2022-02-10T07:58:12Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-8
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-16-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0704
- Accuracy: 0.394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1031 | 1.0 | 10 | 1.1286 | 0.1 |
| 1.0648 | 2.0 | 20 | 1.1157 | 0.3 |
| 0.9982 | 3.0 | 30 | 1.1412 | 0.2 |
| 0.9283 | 4.0 | 40 | 1.2053 | 0.2 |
| 0.7958 | 5.0 | 50 | 1.1466 | 0.2 |
| 0.6668 | 6.0 | 60 | 1.1783 | 0.3 |
| 0.5068 | 7.0 | 70 | 1.2992 | 0.3 |
| 0.3741 | 8.0 | 80 | 1.3483 | 0.3 |
| 0.1653 | 9.0 | 90 | 1.4533 | 0.2 |
| 0.0946 | 10.0 | 100 | 1.6292 | 0.2 |
| 0.0569 | 11.0 | 110 | 1.8381 | 0.2 |
| 0.0346 | 12.0 | 120 | 2.0781 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-7
|
SetFit
| 2022-02-10T07:57:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-7
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-16-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9011
- Accuracy: 0.578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0968 | 1.0 | 10 | 1.1309 | 0.0 |
| 1.0709 | 2.0 | 20 | 1.1237 | 0.1 |
| 0.9929 | 3.0 | 30 | 1.1254 | 0.1 |
| 0.878 | 4.0 | 40 | 1.1206 | 0.5 |
| 0.7409 | 5.0 | 50 | 1.0831 | 0.1 |
| 0.5663 | 6.0 | 60 | 0.9830 | 0.6 |
| 0.4105 | 7.0 | 70 | 0.9919 | 0.5 |
| 0.2912 | 8.0 | 80 | 1.0472 | 0.6 |
| 0.1013 | 9.0 | 90 | 1.1617 | 0.4 |
| 0.0611 | 10.0 | 100 | 1.2789 | 0.6 |
| 0.039 | 11.0 | 110 | 1.4091 | 0.4 |
| 0.0272 | 12.0 | 120 | 1.4974 | 0.4 |
| 0.0189 | 13.0 | 130 | 1.4845 | 0.5 |
| 0.018 | 14.0 | 140 | 1.4924 | 0.5 |
| 0.0131 | 15.0 | 150 | 1.5206 | 0.6 |
| 0.0116 | 16.0 | 160 | 1.5858 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-6
|
SetFit
| 2022-02-10T07:55:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-6
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-16-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8331
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0881 | 1.0 | 10 | 1.1248 | 0.1 |
| 1.0586 | 2.0 | 20 | 1.1162 | 0.2 |
| 0.9834 | 3.0 | 30 | 1.1199 | 0.3 |
| 0.9271 | 4.0 | 40 | 1.0740 | 0.3 |
| 0.7663 | 5.0 | 50 | 1.0183 | 0.5 |
| 0.6042 | 6.0 | 60 | 1.0259 | 0.5 |
| 0.4482 | 7.0 | 70 | 0.8699 | 0.7 |
| 0.3072 | 8.0 | 80 | 1.0615 | 0.5 |
| 0.1458 | 9.0 | 90 | 1.0164 | 0.5 |
| 0.0838 | 10.0 | 100 | 1.0620 | 0.5 |
| 0.055 | 11.0 | 110 | 1.1829 | 0.5 |
| 0.0347 | 12.0 | 120 | 1.2815 | 0.4 |
| 0.0244 | 13.0 | 130 | 1.2607 | 0.6 |
| 0.0213 | 14.0 | 140 | 1.3695 | 0.5 |
| 0.0169 | 15.0 | 150 | 1.4397 | 0.5 |
| 0.0141 | 16.0 | 160 | 1.4388 | 0.6 |
| 0.0122 | 17.0 | 170 | 1.4242 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-5
|
SetFit
| 2022-02-10T07:54:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-5
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9907
- Accuracy: 0.49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0941 | 1.0 | 10 | 1.1287 | 0.2 |
| 1.0481 | 2.0 | 20 | 1.1136 | 0.2 |
| 0.9498 | 3.0 | 30 | 1.1200 | 0.2 |
| 0.8157 | 4.0 | 40 | 1.0771 | 0.2 |
| 0.65 | 5.0 | 50 | 0.9733 | 0.4 |
| 0.5021 | 6.0 | 60 | 1.0626 | 0.4 |
| 0.3358 | 7.0 | 70 | 1.0787 | 0.4 |
| 0.2017 | 8.0 | 80 | 1.3183 | 0.4 |
| 0.088 | 9.0 | 90 | 1.2204 | 0.5 |
| 0.0527 | 10.0 | 100 | 1.6892 | 0.4 |
| 0.0337 | 11.0 | 110 | 1.6967 | 0.5 |
| 0.0238 | 12.0 | 120 | 1.5436 | 0.5 |
| 0.0183 | 13.0 | 130 | 1.7447 | 0.4 |
| 0.0159 | 14.0 | 140 | 1.8999 | 0.4 |
| 0.014 | 15.0 | 150 | 1.9004 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-16-4
|
SetFit
| 2022-02-10T07:53:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-16-4
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0903
- Accuracy: 0.4805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0974 | 1.0 | 10 | 1.1139 | 0.1 |
| 1.0637 | 2.0 | 20 | 1.0988 | 0.1 |
| 0.9758 | 3.0 | 30 | 1.1013 | 0.1 |
| 0.9012 | 4.0 | 40 | 1.0769 | 0.3 |
| 0.6993 | 5.0 | 50 | 1.0484 | 0.6 |
| 0.5676 | 6.0 | 60 | 1.0223 | 0.6 |
| 0.4069 | 7.0 | 70 | 0.9190 | 0.6 |
| 0.3192 | 8.0 | 80 | 1.1370 | 0.6 |
| 0.1112 | 9.0 | 90 | 1.1728 | 0.6 |
| 0.07 | 10.0 | 100 | 1.1998 | 0.6 |
| 0.0397 | 11.0 | 110 | 1.3700 | 0.6 |
| 0.027 | 12.0 | 120 | 1.3329 | 0.6 |
| 0.021 | 13.0 | 130 | 1.2697 | 0.6 |
| 0.0177 | 14.0 | 140 | 1.4195 | 0.6 |
| 0.0142 | 15.0 | 150 | 1.5342 | 0.6 |
| 0.0118 | 16.0 | 160 | 1.5999 | 0.6 |
| 0.0108 | 17.0 | 170 | 1.6327 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
akshaychaudhary/distilbert-base-uncased-finetuned-hypertuned-ner
|
akshaychaudhary
| 2022-02-10T07:47:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-hypertuned-ner
results: []
---
# distilbert-base-uncased-finetuned-hypertuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5683
- Precision: 0.3398
- Recall: 0.6481
- F1: 0.4459
- Accuracy: 0.8762
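The card leaves usage unspecified, but a checkpoint like this can ordinarily be loaded with the standard token-classification pipeline; a minimal sketch (the example sentence is illustrative, and the entity label set depends on the unspecified training data):
```python
from transformers import pipeline

# Hypothetical usage; the label set comes from the (unspecified) training data.
ner = pipeline(
    "token-classification",
    model="akshaychaudhary/distilbert-base-uncased-finetuned-hypertuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("Hugging Face is based in New York City."))
```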
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 84 | 0.3566 | 0.2913 | 0.5556 | 0.3822 | 0.8585 |
| No log | 2.0 | 168 | 0.4698 | 0.3366 | 0.6296 | 0.4387 | 0.8730 |
| No log | 3.0 | 252 | 0.5683 | 0.3398 | 0.6481 | 0.4459 | 0.8762 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-8
|
SetFit
| 2022-02-10T07:46:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-8
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0005
- Accuracy: 0.518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
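The fixed seed above is what makes these few-shot runs comparable across cards; in Transformers it is typically applied with `set_seed`, roughly as follows (a sketch, not the original training script):
```python
from transformers import set_seed

set_seed(42)  # seeds Python's random, NumPy, and PyTorch (incl. CUDA) in one call
```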
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1029 | 1.0 | 5 | 1.1295 | 0.0 |
| 1.0472 | 2.0 | 10 | 1.1531 | 0.0 |
| 1.054 | 3.0 | 15 | 1.1475 | 0.0 |
| 0.9366 | 4.0 | 20 | 1.1515 | 0.0 |
| 0.8698 | 5.0 | 25 | 1.1236 | 0.4 |
| 0.8148 | 6.0 | 30 | 1.0716 | 0.6 |
| 0.6884 | 7.0 | 35 | 1.0662 | 0.6 |
| 0.5641 | 8.0 | 40 | 1.0671 | 0.6 |
| 0.5 | 9.0 | 45 | 1.0282 | 0.6 |
| 0.3882 | 10.0 | 50 | 1.0500 | 0.6 |
| 0.3522 | 11.0 | 55 | 1.1381 | 0.6 |
| 0.2492 | 12.0 | 60 | 1.1278 | 0.6 |
| 0.2063 | 13.0 | 65 | 1.0731 | 0.6 |
| 0.1608 | 14.0 | 70 | 1.1339 | 0.6 |
| 0.1448 | 15.0 | 75 | 1.1892 | 0.6 |
| 0.0925 | 16.0 | 80 | 1.1840 | 0.6 |
| 0.0768 | 17.0 | 85 | 1.0608 | 0.6 |
| 0.0585 | 18.0 | 90 | 1.1073 | 0.6 |
| 0.0592 | 19.0 | 95 | 1.3134 | 0.6 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-2
|
SetFit
| 2022-02-10T07:41:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-2
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1019
- Accuracy: 0.139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1082 | 1.0 | 5 | 1.1432 | 0.0 |
| 1.0524 | 2.0 | 10 | 1.1613 | 0.0 |
| 1.0641 | 3.0 | 15 | 1.1547 | 0.0 |
| 0.9592 | 4.0 | 20 | 1.1680 | 0.0 |
| 0.9085 | 5.0 | 25 | 1.1762 | 0.0 |
| 0.8508 | 6.0 | 30 | 1.1809 | 0.2 |
| 0.7263 | 7.0 | 35 | 1.1912 | 0.2 |
| 0.6448 | 8.0 | 40 | 1.2100 | 0.2 |
| 0.5378 | 9.0 | 45 | 1.2037 | 0.2 |
| 0.5031 | 10.0 | 50 | 1.2096 | 0.2 |
| 0.4041 | 11.0 | 55 | 1.2203 | 0.2 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__hate_speech_offensive__train-8-1
|
SetFit
| 2022-02-10T07:40:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__hate_speech_offensive__train-8-1
results: []
---
# distilbert-base-uncased__hate_speech_offensive__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1013
- Accuracy: 0.0915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0866 | 1.0 | 5 | 1.1363 | 0.0 |
| 1.0439 | 2.0 | 10 | 1.1803 | 0.0 |
| 1.0227 | 3.0 | 15 | 1.2162 | 0.2 |
| 0.9111 | 4.0 | 20 | 1.2619 | 0.0 |
| 0.8243 | 5.0 | 25 | 1.2929 | 0.2 |
| 0.7488 | 6.0 | 30 | 1.3010 | 0.2 |
| 0.62 | 7.0 | 35 | 1.3011 | 0.2 |
| 0.5054 | 8.0 | 40 | 1.2931 | 0.4 |
| 0.4191 | 9.0 | 45 | 1.3274 | 0.4 |
| 0.4107 | 10.0 | 50 | 1.3259 | 0.4 |
| 0.3376 | 11.0 | 55 | 1.2800 | 0.4 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-7
|
SetFit
| 2022-02-10T07:34:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-7
results: []
---
# distilbert-base-uncased__sst2__train-32-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6736
- Accuracy: 0.5931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7094 | 1.0 | 13 | 0.6887 | 0.5385 |
| 0.651 | 2.0 | 26 | 0.6682 | 0.6923 |
| 0.6084 | 3.0 | 39 | 0.6412 | 0.6923 |
| 0.4547 | 4.0 | 52 | 0.6095 | 0.6923 |
| 0.2903 | 5.0 | 65 | 0.6621 | 0.6923 |
| 0.1407 | 6.0 | 78 | 0.7130 | 0.7692 |
| 0.0444 | 7.0 | 91 | 0.9007 | 0.6923 |
| 0.0176 | 8.0 | 104 | 0.9525 | 0.7692 |
| 0.0098 | 9.0 | 117 | 1.0289 | 0.7692 |
| 0.0071 | 10.0 | 130 | 1.0876 | 0.7692 |
| 0.0052 | 11.0 | 143 | 1.1431 | 0.6923 |
| 0.0038 | 12.0 | 156 | 1.1687 | 0.7692 |
| 0.0034 | 13.0 | 169 | 1.1792 | 0.7692 |
| 0.0031 | 14.0 | 182 | 1.2033 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-5
|
SetFit
| 2022-02-10T07:32:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-5
results: []
---
# distilbert-base-uncased__sst2__train-32-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6248
- Accuracy: 0.6826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7136 | 1.0 | 13 | 0.6850 | 0.5385 |
| 0.6496 | 2.0 | 26 | 0.6670 | 0.6154 |
| 0.5895 | 3.0 | 39 | 0.6464 | 0.7692 |
| 0.4271 | 4.0 | 52 | 0.6478 | 0.7692 |
| 0.2182 | 5.0 | 65 | 0.6809 | 0.6923 |
| 0.103 | 6.0 | 78 | 0.9119 | 0.6923 |
| 0.0326 | 7.0 | 91 | 1.0718 | 0.6923 |
| 0.0154 | 8.0 | 104 | 1.0721 | 0.7692 |
| 0.0087 | 9.0 | 117 | 1.1416 | 0.7692 |
| 0.0067 | 10.0 | 130 | 1.2088 | 0.7692 |
| 0.005 | 11.0 | 143 | 1.2656 | 0.7692 |
| 0.0037 | 12.0 | 156 | 1.3104 | 0.7692 |
| 0.0032 | 13.0 | 169 | 1.3428 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-4
|
SetFit
| 2022-02-10T07:32:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-4
results: []
---
# distilbert-base-uncased__sst2__train-32-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5001
- Accuracy: 0.7650
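To make the evaluation numbers concrete, a sketch of scoring one sentence with the lower-level Auto classes follows (the example text and the meaning of each label index are assumptions, since the card does not define them):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "SetFit/distilbert-base-uncased__sst2__train-32-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("a gorgeous, witty, seductive movie", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # one probability per class
```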
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7175 | 1.0 | 13 | 0.6822 | 0.5385 |
| 0.6559 | 2.0 | 26 | 0.6533 | 0.6154 |
| 0.6052 | 3.0 | 39 | 0.5762 | 0.7692 |
| 0.4587 | 4.0 | 52 | 0.4477 | 0.8462 |
| 0.2459 | 5.0 | 65 | 0.4288 | 0.7692 |
| 0.1001 | 6.0 | 78 | 0.5219 | 0.7692 |
| 0.0308 | 7.0 | 91 | 0.8540 | 0.7692 |
| 0.014 | 8.0 | 104 | 0.7789 | 0.7692 |
| 0.0083 | 9.0 | 117 | 0.7996 | 0.7692 |
| 0.0064 | 10.0 | 130 | 0.8342 | 0.7692 |
| 0.0049 | 11.0 | 143 | 0.8612 | 0.7692 |
| 0.0036 | 12.0 | 156 | 0.8834 | 0.7692 |
| 0.0032 | 13.0 | 169 | 0.9067 | 0.7692 |
| 0.003 | 14.0 | 182 | 0.9332 | 0.7692 |
| 0.0028 | 15.0 | 195 | 0.9511 | 0.7692 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-32-2
|
SetFit
| 2022-02-10T07:30:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-32-2
results: []
---
# distilbert-base-uncased__sst2__train-32-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4805
- Accuracy: 0.7699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7124 | 1.0 | 13 | 0.6882 | 0.5385 |
| 0.6502 | 2.0 | 26 | 0.6715 | 0.5385 |
| 0.6001 | 3.0 | 39 | 0.6342 | 0.6154 |
| 0.455 | 4.0 | 52 | 0.5713 | 0.7692 |
| 0.2605 | 5.0 | 65 | 0.5562 | 0.7692 |
| 0.1258 | 6.0 | 78 | 0.6799 | 0.7692 |
| 0.0444 | 7.0 | 91 | 0.8096 | 0.7692 |
| 0.0175 | 8.0 | 104 | 0.9281 | 0.6923 |
| 0.0106 | 9.0 | 117 | 0.9826 | 0.6923 |
| 0.0077 | 10.0 | 130 | 1.0254 | 0.7692 |
| 0.0056 | 11.0 | 143 | 1.0667 | 0.7692 |
| 0.0042 | 12.0 | 156 | 1.1003 | 0.7692 |
| 0.0036 | 13.0 | 169 | 1.1299 | 0.7692 |
| 0.0034 | 14.0 | 182 | 1.1623 | 0.6923 |
| 0.003 | 15.0 | 195 | 1.1938 | 0.6923 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-9
|
SetFit
| 2022-02-10T07:27:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-9
results: []
---
# distilbert-base-uncased__sst2__train-16-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6915
- Accuracy: 0.5157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6868 | 1.0 | 7 | 0.7121 | 0.1429 |
| 0.6755 | 2.0 | 14 | 0.7234 | 0.1429 |
| 0.6389 | 3.0 | 21 | 0.7384 | 0.2857 |
| 0.5575 | 4.0 | 28 | 0.7884 | 0.2857 |
| 0.4972 | 5.0 | 35 | 0.7767 | 0.4286 |
| 0.2821 | 6.0 | 42 | 0.8275 | 0.4286 |
| 0.1859 | 7.0 | 49 | 0.9283 | 0.2857 |
| 0.1388 | 8.0 | 56 | 0.9384 | 0.4286 |
| 0.078 | 9.0 | 63 | 1.1973 | 0.4286 |
| 0.0462 | 10.0 | 70 | 1.4016 | 0.4286 |
| 0.0319 | 11.0 | 77 | 1.4087 | 0.4286 |
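The accuracy column in these tables is ordinarily computed from argmax predictions over the logits; a small sketch of the usual metric hook, assuming the `load_metric` API that was current for Datasets 1.18:
```python
import numpy as np
from datasets import load_metric

accuracy = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # predicted class per example
    return accuracy.compute(predictions=predictions, references=labels)
```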
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-8
|
SetFit
| 2022-02-10T07:26:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-8
results: []
---
# distilbert-base-uncased__sst2__train-16-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6895
- Accuracy: 0.5222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6899 | 1.0 | 7 | 0.7055 | 0.2857 |
| 0.6793 | 2.0 | 14 | 0.7205 | 0.2857 |
| 0.6291 | 3.0 | 21 | 0.7460 | 0.2857 |
| 0.5659 | 4.0 | 28 | 0.8041 | 0.2857 |
| 0.5607 | 5.0 | 35 | 0.7785 | 0.4286 |
| 0.3349 | 6.0 | 42 | 0.8163 | 0.4286 |
| 0.2436 | 7.0 | 49 | 0.9101 | 0.2857 |
| 0.1734 | 8.0 | 56 | 0.8632 | 0.5714 |
| 0.1122 | 9.0 | 63 | 0.9851 | 0.5714 |
| 0.0661 | 10.0 | 70 | 1.0835 | 0.5714 |
| 0.0407 | 11.0 | 77 | 1.1656 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-6
|
SetFit
| 2022-02-10T07:24:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-6
results: []
---
# distilbert-base-uncased__sst2__train-16-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8356
- Accuracy: 0.6480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6978 | 1.0 | 7 | 0.6807 | 0.4286 |
| 0.6482 | 2.0 | 14 | 0.6775 | 0.4286 |
| 0.6051 | 3.0 | 21 | 0.6623 | 0.5714 |
| 0.486 | 4.0 | 28 | 0.6710 | 0.5714 |
| 0.4612 | 5.0 | 35 | 0.5325 | 0.7143 |
| 0.2233 | 6.0 | 42 | 0.4992 | 0.7143 |
| 0.1328 | 7.0 | 49 | 0.4753 | 0.7143 |
| 0.0905 | 8.0 | 56 | 0.2416 | 1.0 |
| 0.0413 | 9.0 | 63 | 0.2079 | 1.0 |
| 0.0356 | 10.0 | 70 | 0.2234 | 0.8571 |
| 0.0217 | 11.0 | 77 | 0.2639 | 0.8571 |
| 0.0121 | 12.0 | 84 | 0.2977 | 0.8571 |
| 0.0105 | 13.0 | 91 | 0.3468 | 0.8571 |
| 0.0085 | 14.0 | 98 | 0.3912 | 0.8571 |
| 0.0077 | 15.0 | 105 | 0.4000 | 0.8571 |
| 0.0071 | 16.0 | 112 | 0.4015 | 0.8571 |
| 0.0078 | 17.0 | 119 | 0.3865 | 0.8571 |
| 0.0059 | 18.0 | 126 | 0.3603 | 0.8571 |
| 0.0051 | 19.0 | 133 | 0.3231 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-5
|
SetFit
| 2022-02-10T07:23:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-5
results: []
---
# distilbert-base-uncased__sst2__train-16-5
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6537
- Accuracy: 0.6332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6925 | 1.0 | 7 | 0.6966 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7045 | 0.2857 |
| 0.6404 | 3.0 | 21 | 0.7205 | 0.2857 |
| 0.555 | 4.0 | 28 | 0.7548 | 0.2857 |
| 0.5179 | 5.0 | 35 | 0.6745 | 0.5714 |
| 0.3038 | 6.0 | 42 | 0.7260 | 0.5714 |
| 0.2089 | 7.0 | 49 | 0.8016 | 0.5714 |
| 0.1303 | 8.0 | 56 | 0.8202 | 0.5714 |
| 0.0899 | 9.0 | 63 | 0.9966 | 0.5714 |
| 0.0552 | 10.0 | 70 | 1.1887 | 0.5714 |
| 0.0333 | 11.0 | 77 | 1.2163 | 0.5714 |
| 0.0169 | 12.0 | 84 | 1.2874 | 0.5714 |
| 0.0136 | 13.0 | 91 | 1.3598 | 0.5714 |
| 0.0103 | 14.0 | 98 | 1.4237 | 0.5714 |
| 0.0089 | 15.0 | 105 | 1.4758 | 0.5714 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-3
|
SetFit
| 2022-02-10T07:21:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-3
results: []
---
# distilbert-base-uncased__sst2__train-16-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7887
- Accuracy: 0.6458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6928 | 1.0 | 7 | 0.6973 | 0.4286 |
| 0.675 | 2.0 | 14 | 0.7001 | 0.4286 |
| 0.6513 | 3.0 | 21 | 0.6959 | 0.4286 |
| 0.5702 | 4.0 | 28 | 0.6993 | 0.4286 |
| 0.5389 | 5.0 | 35 | 0.6020 | 0.7143 |
| 0.3386 | 6.0 | 42 | 0.5326 | 0.5714 |
| 0.2596 | 7.0 | 49 | 0.4943 | 0.7143 |
| 0.1633 | 8.0 | 56 | 0.3589 | 0.8571 |
| 0.1086 | 9.0 | 63 | 0.2924 | 0.8571 |
| 0.0641 | 10.0 | 70 | 0.2687 | 0.8571 |
| 0.0409 | 11.0 | 77 | 0.2202 | 0.8571 |
| 0.0181 | 12.0 | 84 | 0.2445 | 0.8571 |
| 0.0141 | 13.0 | 91 | 0.2885 | 0.8571 |
| 0.0108 | 14.0 | 98 | 0.3069 | 0.8571 |
| 0.009 | 15.0 | 105 | 0.3006 | 0.8571 |
| 0.0084 | 16.0 | 112 | 0.2834 | 0.8571 |
| 0.0088 | 17.0 | 119 | 0.2736 | 0.8571 |
| 0.0062 | 18.0 | 126 | 0.2579 | 0.8571 |
| 0.0058 | 19.0 | 133 | 0.2609 | 0.8571 |
| 0.0057 | 20.0 | 140 | 0.2563 | 0.8571 |
| 0.0049 | 21.0 | 147 | 0.2582 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-16-0
|
SetFit
| 2022-02-10T07:18:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-16-0
results: []
---
# distilbert-base-uncased__sst2__train-16-0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6903
- Accuracy: 0.5091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6934 | 1.0 | 7 | 0.7142 | 0.2857 |
| 0.6703 | 2.0 | 14 | 0.7379 | 0.2857 |
| 0.6282 | 3.0 | 21 | 0.7769 | 0.2857 |
| 0.5193 | 4.0 | 28 | 0.8799 | 0.2857 |
| 0.5104 | 5.0 | 35 | 0.8380 | 0.4286 |
| 0.2504 | 6.0 | 42 | 0.8622 | 0.4286 |
| 0.1794 | 7.0 | 49 | 0.9227 | 0.4286 |
| 0.1156 | 8.0 | 56 | 0.8479 | 0.4286 |
| 0.0709 | 9.0 | 63 | 1.0929 | 0.2857 |
| 0.0471 | 10.0 | 70 | 1.2189 | 0.2857 |
| 0.0288 | 11.0 | 77 | 1.2026 | 0.4286 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-8
|
SetFit
| 2022-02-10T07:16:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-8
results: []
---
# distilbert-base-uncased__sst2__train-8-8
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7061 | 1.0 | 3 | 0.6899 | 0.75 |
| 0.6627 | 2.0 | 6 | 0.7026 | 0.25 |
| 0.644 | 3.0 | 9 | 0.7158 | 0.25 |
| 0.6087 | 4.0 | 12 | 0.7325 | 0.25 |
| 0.5602 | 5.0 | 15 | 0.7555 | 0.25 |
| 0.5034 | 6.0 | 18 | 0.7725 | 0.25 |
| 0.4672 | 7.0 | 21 | 0.7983 | 0.25 |
| 0.403 | 8.0 | 24 | 0.8314 | 0.25 |
| 0.3571 | 9.0 | 27 | 0.8555 | 0.25 |
| 0.2792 | 10.0 | 30 | 0.9065 | 0.25 |
| 0.2373 | 11.0 | 33 | 0.9286 | 0.25 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-3
|
SetFit
| 2022-02-10T07:10:59Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6914
- Accuracy: 0.5195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6931 | 1.0 | 3 | 0.7039 | 0.25 |
| 0.6615 | 2.0 | 6 | 0.7186 | 0.25 |
| 0.653 | 3.0 | 9 | 0.7334 | 0.25 |
| 0.601 | 4.0 | 12 | 0.7592 | 0.25 |
| 0.5555 | 5.0 | 15 | 0.7922 | 0.25 |
| 0.4832 | 6.0 | 18 | 0.8179 | 0.25 |
| 0.4565 | 7.0 | 21 | 0.8285 | 0.25 |
| 0.3996 | 8.0 | 24 | 0.8559 | 0.25 |
| 0.3681 | 9.0 | 27 | 0.8586 | 0.5 |
| 0.2901 | 10.0 | 30 | 0.8646 | 0.5 |
| 0.241 | 11.0 | 33 | 0.8524 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__sst2__train-8-1
|
SetFit
| 2022-02-10T07:09:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__sst2__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6930
- Accuracy: 0.5047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7082 | 1.0 | 3 | 0.7048 | 0.25 |
| 0.6761 | 2.0 | 6 | 0.7249 | 0.25 |
| 0.6653 | 3.0 | 9 | 0.7423 | 0.25 |
| 0.6212 | 4.0 | 12 | 0.7727 | 0.25 |
| 0.5932 | 5.0 | 15 | 0.8098 | 0.25 |
| 0.5427 | 6.0 | 18 | 0.8496 | 0.25 |
| 0.5146 | 7.0 | 21 | 0.8992 | 0.25 |
| 0.4356 | 8.0 | 24 | 0.9494 | 0.25 |
| 0.4275 | 9.0 | 27 | 0.9694 | 0.25 |
| 0.3351 | 10.0 | 30 | 0.9968 | 0.25 |
| 0.2812 | 11.0 | 33 | 1.0056 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
shoubhik/wav2vec2-xls-r-300m-hindi-lm
|
shoubhik
| 2022-02-10T06:24:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
# wav2vec2-xls-r-300m-hindi-lm
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the 'Openslr Multilingual and code-switching ASR challenge' dataset and the 'mozilla-foundation/common_voice_7_0' dataset. It achieves the following results on the evaluation set:

With language model:
- WER: 0.3421149821494522
- CER: 0.12281403517543969

Without language model:
- WER: 0.4642989043456851
- CER: 0.15765197064963313

Tags: robust-speech-event
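As a rough sketch of how such scores can be computed (the exact evaluation script for this model is not published; both metrics also require the `jiwer` package):
```python
from datasets import load_metric

# Hypothetical transcription pairs standing in for the real evaluation data.
predictions = ["namaste duniya"]
references = ["namaste duniya"]

wer_metric = load_metric("wer")
cer_metric = load_metric("cer")
print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```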
|
speech-seq2seq/wav2vec2-2-bert-large
|
speech-seq2seq
| 2022-02-10T06:06:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9670
- Wer: 1.9878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.7599 | 0.28 | 500 | 6.8755 | 1.2551 |
| 6.5943 | 0.56 | 1000 | 6.7702 | 1.5878 |
| 6.3146 | 0.84 | 1500 | 6.6981 | 1.6627 |
| 6.6112 | 1.12 | 2000 | 6.6760 | 1.9853 |
| 6.6894 | 1.4 | 2500 | 6.6323 | 1.9376 |
| 6.5525 | 1.68 | 3000 | 6.6185 | 1.9383 |
| 6.571 | 1.96 | 3500 | 6.6126 | 1.9580 |
| 6.3363 | 2.24 | 4000 | 6.7869 | 1.9818 |
| 6.5832 | 2.52 | 4500 | 6.9096 | 2.0025 |
| 6.3523 | 2.8 | 5000 | 6.9670 | 1.9878 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
fznmhmmd/distilbert-base-uncased-finetuned-cola
|
fznmhmmd
| 2022-02-10T04:00:35Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5543972545286807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8273
- Matthews Correlation: 0.5544
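For context, Matthews correlation ranges from -1 to 1, with 0 at chance level, and can be computed directly from label/prediction pairs, e.g. with scikit-learn:
```python
from sklearn.metrics import matthews_corrcoef

# Toy labels and predictions for illustration only.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1]
print(matthews_corrcoef(y_true, y_pred))  # 0.333...
```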
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5256 | 1.0 | 535 | 0.5419 | 0.4248 |
| 0.3486 | 2.0 | 1070 | 0.5187 | 0.4999 |
| 0.2406 | 3.0 | 1605 | 0.6580 | 0.5054 |
| 0.1692 | 4.0 | 2140 | 0.7455 | 0.5403 |
| 0.1343 | 5.0 | 2675 | 0.8273 | 0.5544 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
fznmhmmd/bert-base-cased-wikitext2
|
fznmhmmd
| 2022-02-10T00:37:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0964 | 1.0 | 2346 | 7.0532 |
| 6.9055 | 2.0 | 4692 | 6.8710 |
| 6.8574 | 3.0 | 7038 | 6.8917 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Crives/distilbert-base-uncased-finetuned-emotion
|
Crives
| 2022-02-09T22:08:11Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215538311282218
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.9215
- F1: 0.9216
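A minimal usage sketch with the `text-classification` pipeline. Note that if `id2label` was not set during training, the outputs show generic `LABEL_i` names rather than the `emotion` dataset's class names (sadness, joy, love, anger, fear, surprise):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Crives/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled the training finally converged!"))
```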
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7814 | 1.0 | 250 | 0.3105 | 0.907 | 0.9046 |
| 0.2401 | 2.0 | 500 | 0.2175 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-9
|
SetFit
| 2022-02-09T20:34:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4865
- Accuracy: 0.778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7024 | 1.0 | 3 | 0.6843 | 0.75 |
| 0.67 | 2.0 | 6 | 0.6807 | 0.5 |
| 0.6371 | 3.0 | 9 | 0.6677 | 0.5 |
| 0.585 | 4.0 | 12 | 0.6649 | 0.5 |
| 0.5122 | 5.0 | 15 | 0.6707 | 0.5 |
| 0.4379 | 6.0 | 18 | 0.6660 | 0.5 |
| 0.4035 | 7.0 | 21 | 0.6666 | 0.5 |
| 0.323 | 8.0 | 24 | 0.6672 | 0.5 |
| 0.2841 | 9.0 | 27 | 0.6534 | 0.5 |
| 0.21 | 10.0 | 30 | 0.6456 | 0.5 |
| 0.1735 | 11.0 | 33 | 0.6325 | 0.5 |
| 0.133 | 12.0 | 36 | 0.6214 | 0.5 |
| 0.0986 | 13.0 | 39 | 0.6351 | 0.5 |
| 0.081 | 14.0 | 42 | 0.6495 | 0.5 |
| 0.0638 | 15.0 | 45 | 0.6671 | 0.5 |
| 0.0449 | 16.0 | 48 | 0.7156 | 0.5 |
| 0.0399 | 17.0 | 51 | 0.7608 | 0.5 |
| 0.0314 | 18.0 | 54 | 0.7796 | 0.5 |
| 0.0243 | 19.0 | 57 | 0.7789 | 0.5 |
| 0.0227 | 20.0 | 60 | 0.7684 | 0.5 |
| 0.0221 | 21.0 | 63 | 0.7628 | 0.5 |
| 0.0192 | 22.0 | 66 | 0.7728 | 0.5 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-7
|
SetFit
| 2022-02-09T20:30:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
- Accuracy: 0.8845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7044 | 1.0 | 3 | 0.6909 | 0.5 |
| 0.6678 | 2.0 | 6 | 0.6901 | 0.5 |
| 0.6336 | 3.0 | 9 | 0.6807 | 0.5 |
| 0.5926 | 4.0 | 12 | 0.6726 | 0.5 |
| 0.5221 | 5.0 | 15 | 0.6648 | 0.5 |
| 0.4573 | 6.0 | 18 | 0.6470 | 0.5 |
| 0.4177 | 7.0 | 21 | 0.6251 | 0.5 |
| 0.3252 | 8.0 | 24 | 0.5994 | 0.5 |
| 0.2831 | 9.0 | 27 | 0.5529 | 0.5 |
| 0.213 | 10.0 | 30 | 0.5078 | 0.75 |
| 0.1808 | 11.0 | 33 | 0.4521 | 1.0 |
| 0.1355 | 12.0 | 36 | 0.3996 | 1.0 |
| 0.1027 | 13.0 | 39 | 0.3557 | 1.0 |
| 0.0862 | 14.0 | 42 | 0.3121 | 1.0 |
| 0.0682 | 15.0 | 45 | 0.2828 | 1.0 |
| 0.0517 | 16.0 | 48 | 0.2603 | 1.0 |
| 0.0466 | 17.0 | 51 | 0.2412 | 1.0 |
| 0.038 | 18.0 | 54 | 0.2241 | 1.0 |
| 0.0276 | 19.0 | 57 | 0.2096 | 1.0 |
| 0.0246 | 20.0 | 60 | 0.1969 | 1.0 |
| 0.0249 | 21.0 | 63 | 0.1859 | 1.0 |
| 0.0201 | 22.0 | 66 | 0.1770 | 1.0 |
| 0.018 | 23.0 | 69 | 0.1703 | 1.0 |
| 0.0164 | 24.0 | 72 | 0.1670 | 1.0 |
| 0.0172 | 25.0 | 75 | 0.1639 | 1.0 |
| 0.0135 | 26.0 | 78 | 0.1604 | 1.0 |
| 0.014 | 27.0 | 81 | 0.1585 | 1.0 |
| 0.0108 | 28.0 | 84 | 0.1569 | 1.0 |
| 0.0116 | 29.0 | 87 | 0.1549 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.1532 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.1513 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1503 | 1.0 |
| 0.01 | 33.0 | 99 | 0.1490 | 1.0 |
| 0.0079 | 34.0 | 102 | 0.1479 | 1.0 |
| 0.0097 | 35.0 | 105 | 0.1466 | 1.0 |
| 0.0112 | 36.0 | 108 | 0.1458 | 1.0 |
| 0.0091 | 37.0 | 111 | 0.1457 | 1.0 |
| 0.0098 | 38.0 | 114 | 0.1454 | 1.0 |
| 0.0076 | 39.0 | 117 | 0.1451 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1448 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1445 | 1.0 |
| 0.0096 | 42.0 | 126 | 0.1440 | 1.0 |
| 0.0081 | 43.0 | 129 | 0.1430 | 1.0 |
| 0.0083 | 44.0 | 132 | 0.1424 | 1.0 |
| 0.0088 | 45.0 | 135 | 0.1418 | 1.0 |
| 0.0077 | 46.0 | 138 | 0.1414 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1413 | 1.0 |
| 0.0084 | 48.0 | 144 | 0.1412 | 1.0 |
| 0.0072 | 49.0 | 147 | 0.1411 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1411 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-6
|
SetFit
| 2022-02-09T20:28:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6075
- Accuracy: 0.7485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7163 | 1.0 | 3 | 0.6923 | 0.5 |
| 0.6648 | 2.0 | 6 | 0.6838 | 0.5 |
| 0.6329 | 3.0 | 9 | 0.6747 | 0.75 |
| 0.5836 | 4.0 | 12 | 0.6693 | 0.5 |
| 0.5287 | 5.0 | 15 | 0.6670 | 0.25 |
| 0.4585 | 6.0 | 18 | 0.6517 | 0.5 |
| 0.415 | 7.0 | 21 | 0.6290 | 0.5 |
| 0.3353 | 8.0 | 24 | 0.6019 | 0.5 |
| 0.2841 | 9.0 | 27 | 0.5613 | 0.75 |
| 0.2203 | 10.0 | 30 | 0.5222 | 1.0 |
| 0.1743 | 11.0 | 33 | 0.4769 | 1.0 |
| 0.1444 | 12.0 | 36 | 0.4597 | 1.0 |
| 0.1079 | 13.0 | 39 | 0.4462 | 1.0 |
| 0.0891 | 14.0 | 42 | 0.4216 | 1.0 |
| 0.0704 | 15.0 | 45 | 0.3880 | 1.0 |
| 0.0505 | 16.0 | 48 | 0.3663 | 1.0 |
| 0.0428 | 17.0 | 51 | 0.3536 | 1.0 |
| 0.0356 | 18.0 | 54 | 0.3490 | 1.0 |
| 0.0283 | 19.0 | 57 | 0.3531 | 1.0 |
| 0.025 | 20.0 | 60 | 0.3595 | 1.0 |
| 0.0239 | 21.0 | 63 | 0.3594 | 1.0 |
| 0.0202 | 22.0 | 66 | 0.3521 | 1.0 |
| 0.0168 | 23.0 | 69 | 0.3475 | 1.0 |
| 0.0159 | 24.0 | 72 | 0.3458 | 1.0 |
| 0.0164 | 25.0 | 75 | 0.3409 | 1.0 |
| 0.0132 | 26.0 | 78 | 0.3360 | 1.0 |
| 0.0137 | 27.0 | 81 | 0.3302 | 1.0 |
| 0.0112 | 28.0 | 84 | 0.3235 | 1.0 |
| 0.0113 | 29.0 | 87 | 0.3178 | 1.0 |
| 0.0111 | 30.0 | 90 | 0.3159 | 1.0 |
| 0.0113 | 31.0 | 93 | 0.3108 | 1.0 |
| 0.0107 | 32.0 | 96 | 0.3101 | 1.0 |
| 0.0101 | 33.0 | 99 | 0.3100 | 1.0 |
| 0.0083 | 34.0 | 102 | 0.3110 | 1.0 |
| 0.0092 | 35.0 | 105 | 0.3117 | 1.0 |
| 0.0102 | 36.0 | 108 | 0.3104 | 1.0 |
| 0.0086 | 37.0 | 111 | 0.3086 | 1.0 |
| 0.0092 | 38.0 | 114 | 0.3047 | 1.0 |
| 0.0072 | 39.0 | 117 | 0.3024 | 1.0 |
| 0.0079 | 40.0 | 120 | 0.3014 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.2983 | 1.0 |
| 0.0091 | 42.0 | 126 | 0.2948 | 1.0 |
| 0.0077 | 43.0 | 129 | 0.2915 | 1.0 |
| 0.0085 | 44.0 | 132 | 0.2890 | 1.0 |
| 0.009 | 45.0 | 135 | 0.2870 | 1.0 |
| 0.0073 | 46.0 | 138 | 0.2856 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.2844 | 1.0 |
| 0.0076 | 48.0 | 144 | 0.2841 | 1.0 |
| 0.0065 | 49.0 | 147 | 0.2836 | 1.0 |
| 0.0081 | 50.0 | 150 | 0.2835 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-2
|
SetFit
| 2022-02-09T20:21:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3081
- Accuracy: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7146 | 1.0 | 3 | 0.6798 | 0.75 |
| 0.6737 | 2.0 | 6 | 0.6847 | 0.75 |
| 0.6519 | 3.0 | 9 | 0.6783 | 0.75 |
| 0.6105 | 4.0 | 12 | 0.6812 | 0.25 |
| 0.5463 | 5.0 | 15 | 0.6869 | 0.25 |
| 0.4922 | 6.0 | 18 | 0.6837 | 0.5 |
| 0.4543 | 7.0 | 21 | 0.6716 | 0.5 |
| 0.3856 | 8.0 | 24 | 0.6613 | 0.75 |
| 0.3475 | 9.0 | 27 | 0.6282 | 0.75 |
| 0.2717 | 10.0 | 30 | 0.6045 | 0.75 |
| 0.2347 | 11.0 | 33 | 0.5620 | 0.75 |
| 0.1979 | 12.0 | 36 | 0.5234 | 1.0 |
| 0.1535 | 13.0 | 39 | 0.4771 | 1.0 |
| 0.1332 | 14.0 | 42 | 0.4277 | 1.0 |
| 0.1041 | 15.0 | 45 | 0.3785 | 1.0 |
| 0.082 | 16.0 | 48 | 0.3318 | 1.0 |
| 0.0672 | 17.0 | 51 | 0.2885 | 1.0 |
| 0.0538 | 18.0 | 54 | 0.2568 | 1.0 |
| 0.0412 | 19.0 | 57 | 0.2356 | 1.0 |
| 0.0361 | 20.0 | 60 | 0.2217 | 1.0 |
| 0.0303 | 21.0 | 63 | 0.2125 | 1.0 |
| 0.0268 | 22.0 | 66 | 0.2060 | 1.0 |
| 0.0229 | 23.0 | 69 | 0.2015 | 1.0 |
| 0.0215 | 24.0 | 72 | 0.1989 | 1.0 |
| 0.0211 | 25.0 | 75 | 0.1969 | 1.0 |
| 0.0172 | 26.0 | 78 | 0.1953 | 1.0 |
| 0.0165 | 27.0 | 81 | 0.1935 | 1.0 |
| 0.0132 | 28.0 | 84 | 0.1923 | 1.0 |
| 0.0146 | 29.0 | 87 | 0.1914 | 1.0 |
| 0.0125 | 30.0 | 90 | 0.1904 | 1.0 |
| 0.0119 | 31.0 | 93 | 0.1897 | 1.0 |
| 0.0122 | 32.0 | 96 | 0.1886 | 1.0 |
| 0.0118 | 33.0 | 99 | 0.1875 | 1.0 |
| 0.0097 | 34.0 | 102 | 0.1866 | 1.0 |
| 0.0111 | 35.0 | 105 | 0.1861 | 1.0 |
| 0.0111 | 36.0 | 108 | 0.1855 | 1.0 |
| 0.0102 | 37.0 | 111 | 0.1851 | 1.0 |
| 0.0109 | 38.0 | 114 | 0.1851 | 1.0 |
| 0.0085 | 39.0 | 117 | 0.1854 | 1.0 |
| 0.0089 | 40.0 | 120 | 0.1855 | 1.0 |
| 0.0092 | 41.0 | 123 | 0.1863 | 1.0 |
| 0.0105 | 42.0 | 126 | 0.1868 | 1.0 |
| 0.0089 | 43.0 | 129 | 0.1874 | 1.0 |
| 0.0091 | 44.0 | 132 | 0.1877 | 1.0 |
| 0.0096 | 45.0 | 135 | 0.1881 | 1.0 |
| 0.0081 | 46.0 | 138 | 0.1881 | 1.0 |
| 0.0086 | 47.0 | 141 | 0.1883 | 1.0 |
| 0.009 | 48.0 | 144 | 0.1884 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
SetFit/distilbert-base-uncased__subj__train-8-1
|
SetFit
| 2022-02-09T20:19:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased__subj__train-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__subj__train-8-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5488
- Accuracy: 0.791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.703 | 1.0 | 3 | 0.6906 | 0.5 |
| 0.666 | 2.0 | 6 | 0.6945 | 0.25 |
| 0.63 | 3.0 | 9 | 0.6885 | 0.5 |
| 0.588 | 4.0 | 12 | 0.6888 | 0.25 |
| 0.5181 | 5.0 | 15 | 0.6899 | 0.25 |
| 0.4508 | 6.0 | 18 | 0.6770 | 0.5 |
| 0.4025 | 7.0 | 21 | 0.6579 | 0.5 |
| 0.3361 | 8.0 | 24 | 0.6392 | 0.5 |
| 0.2919 | 9.0 | 27 | 0.6113 | 0.5 |
| 0.2151 | 10.0 | 30 | 0.5774 | 0.75 |
| 0.1728 | 11.0 | 33 | 0.5248 | 0.75 |
| 0.1313 | 12.0 | 36 | 0.4824 | 0.75 |
| 0.1046 | 13.0 | 39 | 0.4456 | 0.75 |
| 0.0858 | 14.0 | 42 | 0.4076 | 0.75 |
| 0.0679 | 15.0 | 45 | 0.3755 | 0.75 |
| 0.0485 | 16.0 | 48 | 0.3422 | 0.75 |
| 0.0416 | 17.0 | 51 | 0.3055 | 0.75 |
| 0.0358 | 18.0 | 54 | 0.2731 | 1.0 |
| 0.0277 | 19.0 | 57 | 0.2443 | 1.0 |
| 0.0234 | 20.0 | 60 | 0.2187 | 1.0 |
| 0.0223 | 21.0 | 63 | 0.1960 | 1.0 |
| 0.0187 | 22.0 | 66 | 0.1762 | 1.0 |
| 0.017 | 23.0 | 69 | 0.1629 | 1.0 |
| 0.0154 | 24.0 | 72 | 0.1543 | 1.0 |
| 0.0164 | 25.0 | 75 | 0.1476 | 1.0 |
| 0.0131 | 26.0 | 78 | 0.1423 | 1.0 |
| 0.0139 | 27.0 | 81 | 0.1387 | 1.0 |
| 0.0107 | 28.0 | 84 | 0.1360 | 1.0 |
| 0.0108 | 29.0 | 87 | 0.1331 | 1.0 |
| 0.0105 | 30.0 | 90 | 0.1308 | 1.0 |
| 0.0106 | 31.0 | 93 | 0.1276 | 1.0 |
| 0.0104 | 32.0 | 96 | 0.1267 | 1.0 |
| 0.0095 | 33.0 | 99 | 0.1255 | 1.0 |
| 0.0076 | 34.0 | 102 | 0.1243 | 1.0 |
| 0.0094 | 35.0 | 105 | 0.1235 | 1.0 |
| 0.0103 | 36.0 | 108 | 0.1228 | 1.0 |
| 0.0086 | 37.0 | 111 | 0.1231 | 1.0 |
| 0.0094 | 38.0 | 114 | 0.1236 | 1.0 |
| 0.0074 | 39.0 | 117 | 0.1240 | 1.0 |
| 0.0085 | 40.0 | 120 | 0.1246 | 1.0 |
| 0.0079 | 41.0 | 123 | 0.1253 | 1.0 |
| 0.0088 | 42.0 | 126 | 0.1248 | 1.0 |
| 0.0082 | 43.0 | 129 | 0.1244 | 1.0 |
| 0.0082 | 44.0 | 132 | 0.1234 | 1.0 |
| 0.0082 | 45.0 | 135 | 0.1223 | 1.0 |
| 0.0071 | 46.0 | 138 | 0.1212 | 1.0 |
| 0.0073 | 47.0 | 141 | 0.1208 | 1.0 |
| 0.0081 | 48.0 | 144 | 0.1205 | 1.0 |
| 0.0067 | 49.0 | 147 | 0.1202 | 1.0 |
| 0.0077 | 50.0 | 150 | 0.1202 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
Maunish/ecomm-sbert
|
Maunish
| 2022-02-09T17:47:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
justin871030/bert-base-uncased-goemotions-group-finetuned
|
justin871030
| 2022-02-09T17:22:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"go-emotion",
"text-classification",
"en",
"dataset:go_emotions",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- go-emotion
- text-classification
- pytorch
datasets:
- go_emotions
metrics:
- f1
widget:
- text: "Thanks for giving advice to the people who need it! 👌🙏"
license: mit
---
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Applied label smoothing during training.
4. Used weighted loss and focal loss to improve performance on labels that trained poorly (a sketch of the focal-loss idea follows below).
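A minimal PyTorch sketch of the binary focal-loss idea from point 4. This is a common formulation, not necessarily the exact variant used for this model; the `alpha` label weights are an assumption standing in for the weighted loss mentioned above:
```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Binary focal loss for multi-label classification.

    Down-weights easy examples via the (1 - p_t)^gamma factor so training
    focuses on hard, poorly-fit cases; `alpha` optionally re-weights labels.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)          # probability assigned to the true label
    loss = (1 - p_t) ** gamma * bce
    if alpha is not None:          # per-label weights, e.g. inverse frequency
        loss = loss * alpha
    return loss.mean()

# Toy example: 2 samples, 4 emotion labels.
logits = torch.randn(2, 4)
targets = torch.tensor([[1., 0., 0., 1.], [0., 1., 0., 0.]])
print(focal_loss(logits, targets))
```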
## Results
Best `Macro F1` result: 70%
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions)
|
justin871030/bert-base-uncased-goemotions-original-finetuned
|
justin871030
| 2022-02-09T17:17:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"go-emotion",
"text-classification",
"en",
"dataset:go_emotions",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- go-emotion
- text-classification
- pytorch
datasets:
- go_emotions
metrics:
- f1
widget:
- text: "Thanks for giving advice to the people who need it! 👌🙏"
license: mit
---
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Applied label smoothing during training.
4. Used weighted loss and focal loss to improve performance on labels that trained poorly.
## Results
Best `Macro F1` result: 53%
## Tutorial Link
- [GitHub](https://github.com/justin871030/GoEmotions)
|
huggingtweets/man24car
|
huggingtweets
| 2022-02-09T16:06:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/man24car/1644422772686/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475950695329275905/8MOXbfHE_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">FastCarMan24</div>
<div style="text-align: center; font-size: 14px;">@man24car</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from FastCarMan24.
| Data | FastCarMan24 |
| --- | --- |
| Tweets downloaded | 860 |
| Retweets | 211 |
| Short tweets | 159 |
| Tweets kept | 490 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2oq7rh5p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @man24car's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/19d4nhfe) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/19d4nhfe/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/man24car')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
am-shb/xlm-roberta-base-pretrained
|
am-shb
| 2022-02-09T15:53:08Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
fznmhmmd/gpt2-wikitext2
|
fznmhmmd
| 2022-02-09T15:44:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5571 | 1.0 | 2249 | 6.4684 |
| 6.1921 | 2.0 | 4498 | 6.1984 |
| 6.0016 | 3.0 | 6747 | 6.1112 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/SAE-bert-base-uncased
|
jgammack
| 2022-02-09T15:33:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SAE-bert-base-uncased
results: []
widget:
- text: "Wind [MASK] was detected coming from the car door closure system."
example_title: "Closure system"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [jgammack/SAE-door-abstracts](https://huggingface.co/datasets/jgammack/SAE-door-abstracts) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1256
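A minimal usage sketch with the `fill-mask` pipeline, reusing the widget example above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="jgammack/SAE-bert-base-uncased")
for pred in fill_mask("Wind [MASK] was detected coming from the car door closure system."):
    print(pred["token_str"], pred["score"])
```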
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5967 | 1.0 | 80 | 2.3409 |
| 2.4881 | 2.0 | 160 | 2.2707 |
| 2.3567 | 3.0 | 240 | 2.3134 |
| 2.3413 | 4.0 | 320 | 2.2592 |
| 2.3006 | 5.0 | 400 | 2.2351 |
| 2.2568 | 6.0 | 480 | 2.2556 |
| 2.2303 | 7.0 | 560 | 2.2546 |
| 2.1892 | 8.0 | 640 | 2.1868 |
| 2.1851 | 9.0 | 720 | 2.2073 |
| 2.1738 | 10.0 | 800 | 2.1344 |
| 2.1673 | 11.0 | 880 | 2.1927 |
| 2.1518 | 12.0 | 960 | 2.1844 |
| 2.1142 | 13.0 | 1040 | 2.1466 |
| 2.1343 | 14.0 | 1120 | 2.2024 |
| 2.1332 | 15.0 | 1200 | 2.1035 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
victen/xlm-roberta-base-finetuned-panx-de
|
victen
| 2022-02-09T10:49:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8591260810195721
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1352
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.257 | 1.0 | 525 | 0.1512 | 0.8302 |
| 0.1305 | 2.0 | 1050 | 0.1401 | 0.8447 |
| 0.0817 | 3.0 | 1575 | 0.1352 | 0.8591 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ArBert/bert-base-uncased-finetuned-ner
|
ArBert
| 2022-02-09T10:46:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0905
- Precision: 0.9068
- Recall: 0.9200
- F1: 0.9133
- Accuracy: 0.9787
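A minimal usage sketch with the token-classification pipeline. Since the training dataset is unspecified, the entity label set is an assumption and may appear as generic names:
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="ArBert/bert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")  # merge word pieces into entity spans
print(ner("Hugging Face is based in New York City."))
```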
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1266 | 1.0 | 1123 | 0.0952 | 0.8939 | 0.8869 | 0.8904 | 0.9742 |
| 0.0741 | 2.0 | 2246 | 0.0866 | 0.8936 | 0.9247 | 0.9089 | 0.9774 |
| 0.0496 | 3.0 | 3369 | 0.0905 | 0.9068 | 0.9200 | 0.9133 | 0.9787 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
youzanai/clip-product-title-chinese
|
youzanai
| 2022-02-09T08:59:51Z | 12 | 9 |
transformers
|
[
"transformers",
"pytorch",
"clip_chinese_model",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
<br />
<p align="center">
<h1 align="center">clip-product-title-chinese</h1>
</p>
## A CLIP model trained on Youzan product images and titles.
## Usage
Before using the model, please run `git clone https://github.com/youzanai/trexpark.git` (the code below imports `src.clip` from that repo).
```python
import torch
from src.clip.clip import ClipProcesserChinese, ClipChineseModel
import requests
from PIL import Image
clip_processor = ClipProcesserChinese.from_pretrained('youzanai/clip-product-title-chinese')
model = ClipChineseModel.from_pretrained('youzanai/clip-product-title-chinese')
url = 'http://img.yzcdn.cn/upload_files/2015/04/21/0140dac4657f874f2acff9294b28088c.jpg'
img = Image.open(requests.get(url, stream=True).raw).convert('RGB')
imgs = [img]
texts = ['运动鞋', '红色连衣裙', '黑色连衣裙', '大衣', '文具']  # candidate captions: sneakers, red dress, black dress, coat, stationery
f = clip_processor(texts, imgs, return_tensors='pt', truncation=True, padding=True)
del f['token_type_ids']
with torch.no_grad():
out = model(**f)
logits_per_image, logits_per_text = out['logits_per_image'], out['logits_per_text']
print(logits_per_image.softmax(dim=-1).cpu().detach().numpy())
# Result: [[1.1700666e-07 9.9948394e-01 5.1582896e-04 4.7687358e-11 6.9604440e-08]]
```
|
Duael/RRHood
|
Duael
| 2022-02-09T04:54:18Z | 0 | 0 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
license: artistic-2.0
---
|
Sense-X/uniformer_video
|
Sense-X
| 2022-02-09T03:49:34Z | 0 | 5 | null |
[
"vision",
"video-classification",
"dataset:kinetics-400",
"dataset:kinetics-600",
"dataset:something-something-v1",
"dataset:something-something-v2",
"arxiv:2201.04676",
"license:mit",
"region:us"
] |
video-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- vision
- video-classification
datasets:
- kinetics-400
- kinetics-600
- something-something-v1
- something-something-v2
---
# UniFormer (video model)
UniFormer models are trained on [Kinetics](https://deepmind.com/research/open-source/kinetics) and [Something-Something](https://20bn.com/datasets/something-something) at resolution 224x224.
The architecture was introduced in the paper [UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning](https://arxiv.org/abs/2201.04676) by Li et al.,
and first released in [this repository](https://github.com/Sense-X/UniFormer).
## Model description
UniFormer is a type of Vision Transformer that seamlessly integrates the merits of convolution and self-attention in a concise transformer format.
It adopts local MHRA (Multi-Head Relation Aggregator) in shallow layers to greatly reduce the computation burden, and global MHRA in deep layers to learn global token relations.
Without any extra training data,
UniFormer achieves **86.3** top-1 accuracy on ImageNet-1K classification.
With only ImageNet-1K pre-training, it can simply achieve state-of-the-art performance in a broad range of downstream tasks.
UniFormer obtains **82.9/84.8** top-1 accuracy on Kinetics-400/600,
and **60.9/71.2** top-1 accuracy on Something-Something V1/V2 video classification tasks.
It also achieves **53.8** box AP and **46.4** mask AP on COCO object detection task,
**50.8** mIoU on ADE20K semantic segmentation task,
and **77.4** AP on COCO pose estimation task.

[Source](https://paperswithcode.com/paper/uniformer-unified-transformer-for-efficient)
## Intended uses & limitations
You can use the raw model for video classification.
For now, only the strongest models evaluated with a **single clip** are uploaded.
More models can be found in [the model hub](https://github.com/Sense-X/UniFormer/tree/main/video_classification).
### Kinetics
| Model | #Frame | Sampling Stride | FLOPs | K400 Top-1 | K600 Top-1 |
| ----------- | ------ | --------------- | ----- | ---------- | ---------- |
| UniFormer-S | 16x1x1 | 8 | 41.8G | 78.4 | 80.8 |
| UniFormer-B | 16x1x1 | 8 | 96.7G | 79.3 | 81.7 |
| UniFormer-B | 32x1x1 | 4 | 259G | 80.9 | 82.4 |
### Something-Something
| Model | #Frame | FLOPs | SSV1 Top-1 | SSV2 Top-1 |
| ----------- | ------ | ----- | ---------- | ---------- |
| UniFormer-S | 16x1x1 | 41.8G | 54.4 | 65.0 |
| UniFormer-B | 32x1x1 | 259G | 58.0 | 67.5 |
### How to use
You can follow our [demo](https://huggingface.co/spaces/Sense-X/uniformer_video_demo/tree/main) to use our models.
```python
import torch
from huggingface_hub import hf_hub_download

# uniformer.py and kinetics_class_index.py come from the demo Space linked above.
from uniformer import uniformer_small
from kinetics_class_index import kinetics_classnames

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = uniformer_small()
# load state
model_path = hf_hub_download(repo_id="Sense-X/uniformer_video", filename="uniformer_small_k400_16x8.pth")
state_dict = torch.load(model_path, map_location='cpu')
model.load_state_dict(state_dict)
# set to eval mode
model = model.to(device)
model = model.eval()
# please refer to the following url to process video of Kinetics
# (both `load_video` and the `video` path are defined in the demo app):
# https://huggingface.co/spaces/Sense-X/uniformer_video_demo/blob/main/app.py
vid = load_video(video)
# model predicts one of the 400 Kinetics classes
prediction = model(vid)
predicted_class_idx = prediction.flatten().argmax(-1).item()
print("Predicted class:", kinetics_classnames[str(predicted_class_idx)])
```
### BibTeX entry and citation info
```bibtex
@misc{li2022uniformer,
title={UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning},
author={Kunchang Li and Yali Wang and Peng Gao and Guanglu Song and Yu Liu and Hongsheng Li and Yu Qiao},
year={2022},
eprint={2201.04676},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
thyagosme/gpt2-wikitext2
|
thyagosme
| 2022-02-09T03:17:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5576 | 1.0 | 2249 | 6.4681 |
| 6.1905 | 2.0 | 4498 | 6.1976 |
| 6.0005 | 3.0 | 6747 | 6.1095 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
|
vuiseng9
| 2022-02-08T22:58:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimizations include:
1. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers.
2. Custom distillation with the larger teacher model ```bert-large-uncased-whole-word-masking-finetuned-squad```
```
eval_exact_match = 80.7001
eval_f1 = 87.9777
eval_samples = 10784
```
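For illustration, the custom distillation mentioned above typically blends the student's task loss with a temperature-softened KL term against the teacher's logits, weighted by the teacher ratio (0.9 in the training command below). A minimal sketch of that common formulation (not necessarily the exact loss used in this repo):
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      teacher_ratio=0.9, temperature=2.0):
    # Hard-label task loss on the student's own predictions.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label KL divergence against the frozen teacher, softened by T;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    T = temperature
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return (1 - teacher_ratio) * ce + teacher_ratio * kd

# Toy example: 4 samples, 2 classes (SQuAD start/end heads would be larger).
student, teacher = torch.randn(4, 2), torch.randn(4, 2)
labels = torch.tensor([0, 1, 0, 1])
print(distillation_loss(student, teacher, labels))
```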
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
# Train
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
BASE_MODEL=/path/to/cloned_repo_above #to-revise
wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt/raw/main/nncf_bert_squad_qat.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=5
python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--optimize_model_before_eval \
--optimized_checkpoint $BASE_MODEL \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher bert-large-uncased-whole-word-masking-finetuned-squad \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--optimize_model_before_eval \
--qat_checkpoint $MODELROOT/checkpoint-26750 \
--nncf_config $MODELROOT/nncf_bert_squad_qat.json \
--to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
### tile-alignment
To evaluate a tile-alignment checkpoint, add ```--tile_alignment``` and point ```--qat_checkpoint``` to a checkpoint with the 'tilealigned' postfix. Use branch ```tld-poc``` with commit id ```c525c52cq```.
|
jgammack/MTL-bert-base-uncased-ww-squad
|
jgammack
| 2022-02-08T22:16:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: MTL-bert-base-uncased-ww-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased-ww-squad
This model is a fine-tuned version of [jgammack/MTL-bert-base-uncased-ww](https://huggingface.co/jgammack/MTL-bert-base-uncased-ww) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kSaluja/autonlp-tele_new_5k-557515810
|
kSaluja
| 2022-02-08T20:58:51Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autonlp",
"en",
"dataset:kSaluja/autonlp-data-tele_new_5k",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- kSaluja/autonlp-data-tele_new_5k
co2_eq_emissions: 2.96638567287195
---
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 557515810
- CO2 Emissions (in grams): 2.96638567287195
## Validation Metrics
- Loss: 0.12897901237010956
- Accuracy: 0.9713212700580403
- Precision: 0.9475614228089475
- Recall: 0.96274217585693
- F1: 0.9550914803178709
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kSaluja/autonlp-tele_new_5k-557515810
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("kSaluja/autonlp-tele_new_5k-557515810", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kSaluja/autonlp-tele_new_5k-557515810", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
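To turn `outputs` into readable tags, one option is to map the per-token argmax through the model config's `id2label` (a minimal sketch; the exact label names for this checkpoint are not documented here):
```python
import torch

# most likely tag id per token, then look up its label name
pred_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[int(pred_id)])
```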
|
Mofe/speech-sprint-test
|
Mofe
| 2022-02-08T18:32:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 207.6065
- Wer: 1.5484
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
|
espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
|
espnet
| 2022-02-08T18:13:51Z | 2 | 1 |
espnet
|
[
"espnet",
"audio",
"speech-translation",
"dataset:iwslt22_dialect",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- speech-translation
language: noinfo
datasets:
- iwslt22_dialect
license: cc-by-4.0
---
## ESPnet2 ST model
### `espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug`
This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/st1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
```
<!-- Generated by scripts/utils/show_st_results.sh -->
# RESULTS
## Environments
- date: `Tue Feb 8 12:54:12 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14`
- Commit date: `Tue Feb 8 10:48:10 2022 -0500`
## st_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe_tc1000_sp
### BLEU
|dataset|bleu_score|verbose_score|
|---|---|---|
|pen2_st_model_valid.acc.ave|13.9|44.0/21.8/11.4/6.2 (BP = 0.859 ratio = 0.868 hyp_len = 36614 ref_len = 42181)|
## ST config
<details><summary>expand</summary>
```
config: conf/tuning/train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/st_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe_tc1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 80
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: true
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 25000000
valid_batch_bins: null
train_shape_file:
- exp/st_stats_raw_bpe1000_sp/train/speech_shape
- exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe
valid_shape_file:
- exp/st_stats_raw_bpe1000_sp/valid/speech_shape
- exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text.tc.en
- text
- text
- - dump/raw/train_sp/text.tc.rm.ta
- src_text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text.tc.en
- text
- text
- - dump/raw/dev/text.tc.rm.ta
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- s
- ▁
- apo
- '&'
- ;
- ▁i
- ▁you
- t
- ▁it
- ▁the
- ▁and
- ▁to
- ▁that
- ▁a
- n
- a
- ▁he
- ▁me
- m
- d
- ▁yes
- ▁she
- ▁no
- ▁in
- ▁what
- ▁for
- ▁we
- ing
- ll
- ▁they
- re
- ▁are
- ▁did
- ▁god
- ▁is
- e
- ed
- ▁so
- ▁her
- ▁do
- ▁have
- ▁of
- ▁with
- ▁go
- ▁know
- ▁not
- ▁was
- ▁on
- ▁don
- y
- ▁him
- ▁one
- ▁like
- ▁there
- '%'
- ▁pw
- ▁be
- ▁at
- ▁told
- ▁good
- ▁will
- ▁my
- ▁all
- ▁or
- c
- er
- p
- ▁how
- ▁ah
- r
- ▁but
- ▁them
- ▁see
- ▁get
- ▁can
- i
- ▁when
- ▁going
- ▁about
- ▁mean
- ▁this
- k
- ▁your
- ▁by
- ▁if
- u
- ▁come
- ▁up
- ▁tell
- g
- ▁said
- ▁then
- ▁now
- ▁yeah
- o
- ▁out
- al
- ra
- ▁because
- ▁time
- ▁well
- ▁would
- ▁p
- ▁from
- h
- ar
- f
- ▁swear
- ▁went
- b
- ▁really
- or
- ▁want
- ri
- ▁home
- ▁work
- ve
- ▁take
- ▁got
- ▁just
- l
- ▁uh
- ▁why
- en
- ▁even
- ▁am
- ▁who
- ▁make
- ▁day
- '-'
- in
- ▁something
- ▁some
- ou
- ▁us
- ▁okay
- ▁where
- ▁does
- ▁has
- ▁thank
- ▁c
- ▁his
- th
- ▁back
- ▁fine
- ▁today
- ly
- ▁b
- ▁oh
- ▁doing
- ▁everything
- ▁here
- le
- ▁thing
- ▁two
- ▁anyway
- li
- ▁had
- ▁still
- ▁say
- ro
- ▁after
- ce
- ▁hello
- ▁ma
- ▁call
- w
- ▁listen
- il
- ▁should
- ▁girl
- ▁f
- z
- ▁too
- ▁let
- ▁understand
- ▁may
- ▁much
- ▁think
- ch
- ir
- ha
- ▁other
- ▁tomorrow
- ▁were
- ▁people
- es
- ▁year
- di
- ba
- ▁right
- el
- ▁things
- ▁house
- v
- ▁actually
- un
- ▁an
- ▁give
- ▁only
- ▁better
- pe
- ▁need
- ▁buy
- ▁de
- ne
- ▁ha
- ur
- ion
- ▁made
- la
- ▁willing
- ▁nothing
- ▁called
- ▁night
- ▁yesterday
- se
- ▁came
- ▁lot
- ter
- ▁g
- po
- ▁find
- ry
- ▁car
- ▁over
- ic
- ▁stay
- ▁eat
- ent
- ▁always
- ▁very
- 'on'
- ▁put
- ▁ramadan
- ▁those
- ▁hear
- is
- ▁talk
- ▁three
- ▁anything
- ▁mo
- ▁little
- ▁been
- ▁already
- fi
- ation
- ke
- ▁first
- ▁look
- it
- ▁won
- ▁mom
- ▁way
- ▁before
- ▁ok
- ▁last
- fa
- ▁cook
- vi
- ▁hi
- ▁same
- ▁thought
- ▁also
- um
- ate
- ▁money
- ▁start
- ▁place
- us
- ▁morning
- ▁could
- ▁ask
- ▁bring
- ▁bit
- ▁lo
- ▁leave
- ▁man
- ▁left
- ine
- ▁days
- ge
- ▁la
- ▁week
- ▁friend
- ▁problem
- ▁sister
- ▁allah
- ▁feel
- ▁every
- ▁more
- fe
- ▁long
- ▁hundred
- ▁j
- ▁eh
- ho
- ca
- em
- ▁talking
- ▁exam
- ▁next
- ▁new
- ▁fun
- ▁took
- ▁alright
- co
- ▁w
- ▁um
- ▁eid
- ▁brother
- ▁our
- gh
- ow
- ▁o
- ▁four
- ni
- wa
- ▁else
- ▁finish
- bo
- ▁sleep
- ▁bless
- ▁dear
- ▁since
- ▁play
- ▁name
- hi
- ▁coming
- ▁many
- et
- ▁usual
- ▁con
- ▁maybe
- ▁off
- bi
- ▁than
- ▁any
- ▁mother
- ▁son
- om
- ▁their
- ▁keep
- ▁dinner
- ▁ten
- ▁half
- ▁help
- ▁bad
- and
- ▁pass
- ▁hot
- ▁guy
- ▁least
- ▁down
- ▁bought
- ▁dinars
- ▁working
- ▁around
- ▁normal
- ▁poor
- ▁stuff
- ▁hope
- ▁used
- ▁again
- ▁bro
- ul
- ▁phone
- ▁ex
- ▁done
- ▁six
- ▁na
- ▁month
- ▁tired
- ▁check
- ▁show
- ▁together
- oo
- ▁later
- ▁past
- ▁five
- ▁watch
- ya
- ▁coffee
- ment
- ut
- ▁plan
- ▁great
- ▁daughter
- j
- ▁another
- side
- ▁change
- ▁yet
- ting
- ▁until
- ▁honestly
- ▁whole
- ol
- ▁care
- ▁sure
- able
- id
- ▁big
- ▁spend
- ▁exactly
- ▁boy
- ▁course
- ▁end
- ▁please
- ▁started
- he
- up
- ▁found
- ▁saw
- ▁family
- ▁asked
- ▁enough
- ▁during
- ▁rest
- ▁which
- ▁gave
- ▁true
- ▁while
- ▁job
- ▁el
- ▁each
- ▁away
- ▁kids
- ▁goes
- less
- ▁twenty
- ▁eight
- ▁someone
- ▁cha
- ▁clothes
- ah
- ▁myself
- ▁nice
- ▁late
- ▁old
- ▁real
- age
- ant
- ▁fast
- ▁add
- ▁hard
- ▁these
- ful
- im
- ▁close
- ive
- ▁dad
- ▁pay
- ies
- ▁dude
- ▁alone
- ▁far
- ance
- ▁dis
- ▁seven
- ▁isn
- ▁pro
- our
- ▁thousand
- ▁break
- ▁hour
- ▁wait
- ▁brought
- ▁open
- ▁un
- ▁wedding
- ▁walk
- ▁father
- ▁ka
- ▁second
- x
- ▁saturday
- ▁salad
- ▁win
- ▁everyone
- ▁water
- ▁tunis
- ▁remember
- ity
- ▁wake
- ▁minute
- ▁school
- ▁sunday
- ▁own
- ▁shop
- ▁cold
- ▁meet
- ▁wear
- ever
- ▁send
- ▁early
- ▁gra
- tic
- ▁short
- ▁use
- ▁sometimes
- hou
- ▁love
- ▁prepare
- ▁sea
- ▁study
- ure
- ▁com
- qui
- ▁hand
- ▁both
- ja
- ▁summer
- ▁wrong
- ▁wanted
- che
- ▁miss
- ▁try
- ▁iftar
- ▁yourself
- q
- ▁live
- war
- ▁expensive
- ▁getting
- ▁waiting
- ▁once
- ▁kh
- ▁forgot
- ▁nine
- ▁anymore
- ▁soup
- ▁uncle
- ▁beach
- ▁saying
- ▁into
- ▁having
- ▁brik
- ▁room
- ▁food
- ▁visit
- ▁matter
- ▁thirty
- ▁taking
- ▁rain
- ▁aunt
- ▁never
- ▁pick
- ▁tunisia
- ▁health
- ▁head
- ▁cut
- ▁fasting
- ▁sick
- ▁friday
- ▁forget
- ▁monday
- ▁become
- ▁dress
- ated
- ▁most
- wi
- ▁hang
- ▁life
- ▁fish
- ▁happy
- ▁delicious
- ▁deal
- ▁finished
- ble
- ▁studying
- ▁weather
- ▁making
- ▁cost
- ▁bl
- ▁stayed
- ▁guess
- ▁teach
- ▁stop
- ▁near
- ▁watching
- ▁without
- ▁imagine
- ▁seriously
- fl
- ▁speak
- ▁idea
- ▁must
- ▁normally
- ▁turn
- ize
- ▁clean
- ▁tv
- ▁meat
- ▁woke
- ▁example
- ▁easy
- ▁sent
- ▁sell
- over
- ▁fifty
- ▁amazing
- ▁beautiful
- ▁whatever
- ▁enjoy
- ▁talked
- ▁believe
- ▁thinking
- ▁count
- ▁almost
- ▁longer
- ▁afternoon
- ▁hair
- ▁front
- ▁earlier
- ▁mind
- ▁kind
- ▁tea
- ▁best
- ▁rent
- ▁picture
- ▁cooked
- ▁price
- ight
- ▁soon
- ▁woman
- ▁otherwise
- ▁happened
- ▁story
- ▁luck
- ▁high
- ▁happen
- ▁arrive
- ▁paper
- ga
- ▁quickly
- ▁looking
- ub
- ▁number
- ▁staying
- ▁sit
- man
- ack
- ▁important
- ▁either
- ▁person
- ▁small
- ▁free
- ▁crazy
- ▁playing
- ▁kept
- ▁part
- ▁game
- law
- ▁till
- uck
- ▁ready
- ▁might
- ▁gone
- ▁full
- ▁fix
- ▁subject
- ▁laugh
- ▁doctor
- ▁welcome
- ▁eleven
- ▁sleeping
- ▁heat
- ▁probably
- ▁such
- ▁café
- ▁fat
- ▁sweet
- ▁married
- ▁drink
- ▁move
- ▁outside
- ▁especially
- ▁group
- ji
- ▁market
- ▁through
- ▁train
- ▁protect
- ▁turned
- ▁red
- ▁busy
- ▁light
- ▁noise
- ▁street
- ▁manage
- ▁piece
- ▁sitting
- gue
- ▁sake
- ▁party
- ish
- ▁young
- ▁case
- ▁cool
- huh
- ▁marwa
- ▁drive
- ▁pray
- clock
- ▁couscous
- ▁spent
- ▁felt
- ▁hopefully
- ▁everybody
- ▁living
- ▁pain
- line
- ▁between
- ▁match
- ▁prayer
- que
- ian
- ▁facebook
- ▁spi
- ▁eye
- ▁children
- ▁tonight
- ▁mohamed
- ▁understood
- ▁black
- ▁husband
- ▁rid
- ▁kitchen
- ▁face
- ▁swim
- ▁kid
- ▁invite
- ▁cup
- ▁grilled
- ▁wife
- ▁cousin
- ▁drop
- ▁wow
- ▁table
- ▁du
- ▁bored
- ▁neighborhood
- ▁agree
- ▁bread
- ▁hamma
- ▁straight
- ▁tuesday
- ▁anyone
- ▁lunch
- ade
- ▁himself
- ▁gather
- ▁wish
- ▁fifteen
- ▁wednesday
- ▁die
- ▁thursday
- ▁color
- ▁asleep
- ▁different
- ▁whether
- ▁ago
- ▁middle
- ▁class
- ▁cake
- shirt
- ▁fight
- ▁clear
- ▁test
- ▁plus
- ▁sousse
- ▁beginning
- ▁result
- ▁learn
- ▁crowded
- ▁slept
- ▁shoes
- ▁august
- ▁pretty
- ▁white
- ▁apparently
- ▁reach
- ▁mariem
- ▁return
- ▁road
- ▁million
- ▁stand
- ▁paid
- ▁word
- ious
- ▁few
- ▁breakfast
- ▁post
- ▁kilo
- ▁chicken
- ▁grade
- ▁read
- ▁accept
- ▁birthday
- ▁exhaust
- ▁point
- ▁july
- ▁patience
- ▁studies
- ▁trouble
- ▁along
- ▁worry
- ▁follow
- ▁hurt
- ▁afraid
- ▁trip
- ▁ahmed
- ▁remain
- ▁succeed
- ▁mercy
- ▁difficult
- ▁weekend
- ▁answer
- ▁cheap
- ▁repeat
- ▁auntie
- ▁sign
- ▁hold
- ▁under
- ▁olive
- ▁mahdi
- ▁sfax
- ▁annoy
- ▁dishes
- ▁message
- ▁business
- ▁french
- ▁serious
- ▁travel
- ▁office
- ▁wonder
- ▁student
- ▁internship
- ▁pepper
- ▁knew
- ▁kill
- ▁sauce
- ▁herself
- ▁hammamet
- ▁damn
- ▁mix
- ▁suit
- ▁medicine
- ▁remove
- ▁gonna
- ▁company
- ▁quarter
- ▁shopping
- ▁correct
- ▁throw
- ▁grow
- ▁voice
- ▁series
- gotten
- ▁taste
- ▁driving
- ▁hospital
- ▁sorry
- ▁aziz
- ▁milk
- ▁green
- ▁baccalaureate
- ▁running
- ▁lord
- ▁explain
- ▁angry
- ▁build
- ▁fruit
- ▁photo
- é
- ▁crying
- ▁baby
- ▁store
- ▁project
- ▁france
- ▁twelve
- ▁decide
- ▁swimming
- ▁world
- ▁preparing
- ▁special
- ▁session
- ▁behind
- ▁vegetable
- ▁strong
- ▁fatma
- ▁treat
- ▁cream
- ▁situation
- ▁settle
- ▁totally
- ▁stopped
- ▁book
- ▁honest
- ▁solution
- ▁vacation
- ▁cheese
- ▁ahead
- ▁sami
- ▁focus
- ▁scared
- ▁club
- ▁consider
- ▁final
- ▁naturally
- ▁barely
- ▁issue
- ▁floor
- ▁birth
- ▁almighty
- ▁engagement
- ▁blue
- ▁empty
- ▁soccer
- ▁prophet
- ▁ticket
- ▁indeed
- ▁write
- ▁present
- ▁patient
- ▁available
- ▁holiday
- ▁leaving
- ▁became
- ▁reason
- ▁apart
- ▁impossible
- ▁shame
- ▁worried
- ▁body
- ▁continue
- ▁program
- ▁stress
- ▁arabic
- ▁round
- ▁taxi
- ▁transport
- ▁third
- ▁certain
- ▁downstairs
- ▁neighbor
- ▁directly
- ▁giving
- ▁june
- ▁mini
- ▁upstairs
- ▁mistake
- ▁period
- ▁catch
- ▁buddy
- ▁success
- ▁tajine
- ▁excuse
- ▁organize
- ▁question
- ▁suffer
- ▁remind
- ▁university
- ▁downtown
- ▁sugar
- ▁twice
- ▁women
- ▁couple
- ▁everyday
- ▁condition
- ▁obvious
- ▁nobody
- ▁complete
- ▁stomach
- ▁account
- ▁september
- ▁choose
- ▁bottle
- ▁figure
- ▁instead
- ▁salary
- '0'
- '1'
- '3'
- '2'
- '5'
- '7'
- '4'
- '9'
- '8'
- /
- °
- '6'
- è
- $
- ï
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
asr_weight: 0.3
mt_weight: 0.0
mtlalpha: 1.0
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
src_token_type: bpe
bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model
src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
extra_asr_decoder: transformer
extra_asr_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
extra_mt_decoder: transformer
extra_mt_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- src_token_list
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
jgammack/MTL-bert-base-uncased-ww
|
jgammack
| 2022-02-08T17:50:13Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-bert-base-uncased-ww
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-bert-base-uncased-ww
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2964 | 1.0 | 99 | 2.9560 |
| 3.0419 | 2.0 | 198 | 2.8336 |
| 2.8979 | 3.0 | 297 | 2.8009 |
| 2.8815 | 4.0 | 396 | 2.7394 |
| 2.8373 | 5.0 | 495 | 2.6813 |
| 2.741 | 6.0 | 594 | 2.6270 |
| 2.6877 | 7.0 | 693 | 2.5216 |
| 2.6823 | 8.0 | 792 | 2.5485 |
| 2.6326 | 9.0 | 891 | 2.5690 |
| 2.5976 | 10.0 | 990 | 2.6336 |
| 2.6009 | 11.0 | 1089 | 2.5919 |
| 2.5615 | 12.0 | 1188 | 2.4264 |
| 2.5826 | 13.0 | 1287 | 2.5562 |
| 2.5693 | 14.0 | 1386 | 2.5529 |
| 2.5494 | 15.0 | 1485 | 2.5300 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
tau/tavbert-he
|
tau
| 2022-02-08T16:38:50Z | 60 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"language model",
"he",
"dataset:oscar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: he
tags:
- roberta
- language model
datasets:
- oscar
---
# TavBERT base model
A Hebrew BERT-style masked language model that operates over characters and is pre-trained by masking character spans, similar to SpanBERT (Joshi et al., 2020).
### How to use
```python
import numpy as np
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("tau/tavbert-he")
tokenizer = AutoTokenizer.from_pretrained("tau/tavbert-he")
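# mask_sentence picks a random span of `span_len` characters, masks it, and prints the model's reconstruction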
def mask_sentence(sent, span_len=5):
start_pos = np.random.randint(0, len(sent) - span_len)
masked_sent = sent[:start_pos] + '[MASK]' * span_len + sent[start_pos + span_len:]
print("Masked sentence:", masked_sent)
output = model(**tokenizer.encode_plus(masked_sent,
return_tensors='pt'))['logits'][0][1:-1]
preds = [int(x) for x in torch.argmax(torch.softmax(output, axis=1), axis=1)[start_pos:start_pos + span_len]]
pred_sent = sent[:start_pos] + ''.join(tokenizer.convert_ids_to_tokens(preds)) + sent[start_pos + span_len:]
print("Model's prediction:", pred_sent)
```
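For a quick smoke test (the Hebrew input below is only an illustrative sentence, not from the original card):
```python
mask_sentence("הקפה הזה טעים מאוד")  # masks a random 5-character span and prints the prediction
```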
## Training data
OSCAR (Ortiz, 2019) Hebrew section (10 GB text, 20 million sentences).
|
jgammack/MTL-distilbert-base-uncased-squad
|
jgammack
| 2022-02-08T15:58:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: MTL-distilbert-base-uncased-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-distilbert-base-uncased-squad
This model is a fine-tuned version of [jgammack/MTL-distilbert-base-uncased](https://huggingface.co/jgammack/MTL-distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
tesemnikov-av/rubert-ner-toxicity
|
tesemnikov-av
| 2022-02-08T12:52:32Z | 80 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
widget:
- text: "Ну ты и придурок!!"
---
NER toxicity model

Fine-tuned from the [cointegrated/rubert-tiny-toxicity](https://huggingface.co/cointegrated/rubert-tiny-toxicity) model on data from [toxic_dataset_ner](https://huggingface.co/datasets/tesemnikov-av/toxic_dataset_ner).

Language: Russian (RU)
```python
!pip install transformers > /dev/null
from transformers import (
AutoModelForTokenClassification,
AutoTokenizer,
pipeline
)
model = AutoModelForTokenClassification.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
tokenizer = AutoTokenizer.from_pretrained('tesemnikov-av/rubert-ner-toxicity')
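# aggregation_strategy='average' merges sub-token predictions into whole-entity spans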
pipe = pipeline(model=model, tokenizer=tokenizer, task='ner', aggregation_strategy='average')
text = "Они охриневшие там все придурки!!"
print(text)
print(pipe(text))
```
|
jgammack/SAE-roberta-base-squad
|
jgammack
| 2022-02-08T11:17:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: SAE-roberta-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-roberta-base-squad
This model is a fine-tuned version of [jgammack/SAE-roberta-base](https://huggingface.co/jgammack/SAE-roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
HarrisDePerceptron/xls-r-300m-ur-cv8-hi
|
HarrisDePerceptron
| 2022-02-08T10:55:05Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3](https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5443
- Wer: 0.7030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000388
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 750
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 10.7052 | 1.96 | 100 | 3.4683 | 1.0 |
| 3.2395 | 3.92 | 200 | 3.1489 | 1.0 |
| 2.9951 | 5.88 | 300 | 2.9823 | 1.0007 |
| 2.3574 | 7.84 | 400 | 1.2614 | 0.7598 |
| 1.7287 | 9.8 | 500 | 1.1817 | 0.7421 |
| 1.6144 | 11.76 | 600 | 1.1315 | 0.7321 |
| 1.5598 | 13.73 | 700 | 1.2322 | 0.7550 |
| 1.5418 | 15.69 | 800 | 1.2721 | 0.7819 |
| 1.4578 | 17.65 | 900 | 1.1710 | 0.7531 |
| 1.4311 | 19.61 | 1000 | 1.2042 | 0.7491 |
| 1.3483 | 21.57 | 1100 | 1.1702 | 0.7465 |
| 1.3078 | 23.53 | 1200 | 1.1963 | 0.7421 |
| 1.2576 | 25.49 | 1300 | 1.1501 | 0.7280 |
| 1.2173 | 27.45 | 1400 | 1.2526 | 0.7299 |
| 1.2217 | 29.41 | 1500 | 1.2479 | 0.7310 |
| 1.1536 | 31.37 | 1600 | 1.2567 | 0.7432 |
| 1.0939 | 33.33 | 1700 | 1.2801 | 0.7247 |
| 1.0745 | 35.29 | 1800 | 1.2340 | 0.7151 |
| 1.0454 | 37.25 | 1900 | 1.2372 | 0.7151 |
| 1.0101 | 39.22 | 2000 | 1.2461 | 0.7376 |
| 0.9833 | 41.18 | 2100 | 1.2553 | 0.7269 |
| 0.9314 | 43.14 | 2200 | 1.2372 | 0.7015 |
| 0.9147 | 45.1 | 2300 | 1.3035 | 0.7358 |
| 0.8758 | 47.06 | 2400 | 1.2598 | 0.7092 |
| 0.8356 | 49.02 | 2500 | 1.2557 | 0.7144 |
| 0.8105 | 50.98 | 2600 | 1.2619 | 0.7236 |
| 0.7947 | 52.94 | 2700 | 1.3994 | 0.7491 |
| 0.7623 | 54.9 | 2800 | 1.2932 | 0.7133 |
| 0.7282 | 56.86 | 2900 | 1.2799 | 0.7089 |
| 0.7108 | 58.82 | 3000 | 1.3615 | 0.7148 |
| 0.6896 | 60.78 | 3100 | 1.3129 | 0.7041 |
| 0.6496 | 62.75 | 3200 | 1.4050 | 0.6934 |
| 0.6075 | 64.71 | 3300 | 1.3571 | 0.7026 |
| 0.6242 | 66.67 | 3400 | 1.3369 | 0.7063 |
| 0.5865 | 68.63 | 3500 | 1.4368 | 0.7140 |
| 0.5721 | 70.59 | 3600 | 1.4224 | 0.7066 |
| 0.5475 | 72.55 | 3700 | 1.4798 | 0.7118 |
| 0.5086 | 74.51 | 3800 | 1.5107 | 0.7232 |
| 0.4958 | 76.47 | 3900 | 1.4849 | 0.7089 |
| 0.5046 | 78.43 | 4000 | 1.4451 | 0.7114 |
| 0.4694 | 80.39 | 4100 | 1.4674 | 0.7089 |
| 0.4386 | 82.35 | 4200 | 1.5245 | 0.7103 |
| 0.4516 | 84.31 | 4300 | 1.5032 | 0.7103 |
| 0.4113 | 86.27 | 4400 | 1.5246 | 0.7196 |
| 0.3972 | 88.24 | 4500 | 1.5318 | 0.7114 |
| 0.4006 | 90.2 | 4600 | 1.5543 | 0.6982 |
| 0.4014 | 92.16 | 4700 | 1.5442 | 0.7048 |
| 0.3672 | 94.12 | 4800 | 1.5542 | 0.7137 |
| 0.3666 | 96.08 | 4900 | 1.5414 | 0.7018 |
| 0.3574 | 98.04 | 5000 | 1.5465 | 0.7059 |
| 0.3428 | 100.0 | 5100 | 1.5443 | 0.7030 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
imfiba1991/gpt2-wikitext2
|
imfiba1991
| 2022-02-08T10:53:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 13 | 8.1476 |
| No log | 2.0 | 26 | 7.4435 |
| No log | 3.0 | 39 | 7.2082 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jkang/espnet2_mini_librispeech_diar
|
jkang
| 2022-02-08T08:33:52Z | 3 | 0 |
espnet
|
[
"espnet",
"audio",
"diarization",
"dataset:mini_librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- diarization
language: noinfo
datasets:
- mini_librispeech
license: cc-by-4.0
---
## ESPnet2 DIAR model
### `jkang/espnet2_mini_librispeech_diar`
This model was trained by jaekookang using mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e08a89e0a43db7fc12bec835c62a000ad10bd417
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_mini_librispeech_diar
```
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Tue Feb 8 16:41:16 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `e08a89e0a43db7fc12bec835c62a000ad10bd417`
- Commit date: `Sun Feb 6 18:54:20 2022 -0500`
## diar_train_diar_raw
### DER
dev_clean_2_ns2_beta2_500
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med11_collar0.0|31.39|
|result_th0.3_med1_collar0.0|31.78|
|result_th0.4_med11_collar0.0|29.99|
|result_th0.4_med1_collar0.0|30.61|
|result_th0.5_med11_collar0.0|29.28|
|result_th0.5_med1_collar0.0|30.19|
|result_th0.6_med11_collar0.0|29.50|
|result_th0.6_med1_collar0.0|30.66|
|result_th0.7_med11_collar0.0|30.90|
|result_th0.7_med1_collar0.0|32.38|
## DIAR config
<details><summary>expand</summary>
```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.01
scheduler: noamlr
scheduler_conf:
warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: linear
num_blocks: 2
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
attractor: null
attractor_conf: {}
required:
- output_dir
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
woohyun/sdssd
|
woohyun
| 2022-02-08T08:03:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
jgammack/roberta-base-squad
|
jgammack
| 2022-02-08T07:39:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
birgermoell/wav2vec2-speechdat
|
birgermoell
| 2022-02-08T06:44:20Z | 66 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-speechdat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-speechdat
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4578
- Wer: 0.2927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| No log | 0.01 | 100 | 3.6252 | 1.0 |
| No log | 0.02 | 200 | 3.1906 | 1.0 |
| No log | 0.03 | 300 | 3.1090 | 1.0 |
| No log | 0.04 | 400 | 1.8796 | 0.9955 |
| 6.2575 | 0.05 | 500 | 1.3515 | 0.9058 |
| 6.2575 | 0.06 | 600 | 1.1209 | 0.8328 |
| 6.2575 | 0.07 | 700 | 1.1404 | 0.8309 |
| 6.2575 | 0.09 | 800 | 1.0599 | 0.8021 |
| 6.2575 | 0.1 | 900 | 0.9901 | 0.8335 |
| 0.7737 | 0.11 | 1000 | 0.8846 | 0.7400 |
| 0.7737 | 0.12 | 1100 | 0.9971 | 0.7820 |
| 0.7737 | 0.13 | 1200 | 0.8665 | 0.7123 |
| 0.7737 | 0.14 | 1300 | 0.8490 | 0.7366 |
| 0.7737 | 0.15 | 1400 | 0.8250 | 0.6765 |
| 0.6183 | 0.16 | 1500 | 0.8291 | 0.6965 |
| 0.6183 | 0.17 | 1600 | 0.7946 | 0.6823 |
| 0.6183 | 0.18 | 1700 | 0.8239 | 0.6894 |
| 0.6183 | 0.19 | 1800 | 0.8282 | 0.6796 |
| 0.6183 | 0.2 | 1900 | 0.7645 | 0.6518 |
| 0.561 | 0.21 | 2000 | 0.7530 | 0.6367 |
| 0.561 | 0.22 | 2100 | 0.7296 | 0.6177 |
| 0.561 | 0.24 | 2200 | 0.7527 | 0.6498 |
| 0.561 | 0.25 | 2300 | 0.7210 | 0.6316 |
| 0.561 | 0.26 | 2400 | 0.7938 | 0.6757 |
| 0.5402 | 0.27 | 2500 | 0.7485 | 0.6372 |
| 0.5402 | 0.28 | 2600 | 0.7146 | 0.6133 |
| 0.5402 | 0.29 | 2700 | 0.7308 | 0.6626 |
| 0.5402 | 0.3 | 2800 | 0.7078 | 0.5949 |
| 0.5402 | 0.31 | 2900 | 0.7679 | 0.6373 |
| 0.5303 | 0.32 | 3000 | 0.7263 | 0.6502 |
| 0.5303 | 0.33 | 3100 | 0.6613 | 0.5846 |
| 0.5303 | 0.34 | 3200 | 0.6784 | 0.5783 |
| 0.5303 | 0.35 | 3300 | 0.6908 | 0.5833 |
| 0.5303 | 0.36 | 3400 | 0.6595 | 0.5826 |
| 0.503 | 0.37 | 3500 | 0.6717 | 0.5938 |
| 0.503 | 0.39 | 3600 | 0.6938 | 0.5791 |
| 0.503 | 0.4 | 3700 | 0.6677 | 0.6052 |
| 0.503 | 0.41 | 3800 | 0.6544 | 0.5554 |
| 0.503 | 0.42 | 3900 | 0.6514 | 0.5728 |
| 0.4959 | 0.43 | 4000 | 0.6847 | 0.6188 |
| 0.4959 | 0.44 | 4100 | 0.6626 | 0.5869 |
| 0.4959 | 0.45 | 4200 | 0.6670 | 0.5700 |
| 0.4959 | 0.46 | 4300 | 0.6596 | 0.5846 |
| 0.4959 | 0.47 | 4400 | 0.6523 | 0.5468 |
| 0.4824 | 0.48 | 4500 | 0.6392 | 0.5688 |
| 0.4824 | 0.49 | 4600 | 0.6561 | 0.5687 |
| 0.4824 | 0.5 | 4700 | 0.6697 | 0.5817 |
| 0.4824 | 0.51 | 4800 | 0.6348 | 0.5608 |
| 0.4824 | 0.52 | 4900 | 0.6561 | 0.5600 |
| 0.4714 | 0.54 | 5000 | 0.6522 | 0.6181 |
| 0.4714 | 0.55 | 5100 | 0.6858 | 0.5921 |
| 0.4714 | 0.56 | 5200 | 0.6706 | 0.5497 |
| 0.4714 | 0.57 | 5300 | 0.7123 | 0.5768 |
| 0.4714 | 0.58 | 5400 | 0.6599 | 0.6100 |
| 0.471 | 0.59 | 5500 | 0.6421 | 0.5626 |
| 0.471 | 0.6 | 5600 | 0.6395 | 0.5753 |
| 0.471 | 0.61 | 5700 | 0.6788 | 0.5481 |
| 0.471 | 0.62 | 5800 | 0.6386 | 0.5516 |
| 0.471 | 0.63 | 5900 | 0.6694 | 0.5913 |
| 0.4707 | 0.64 | 6000 | 0.6251 | 0.5699 |
| 0.4707 | 0.65 | 6100 | 0.6243 | 0.5567 |
| 0.4707 | 0.66 | 6200 | 0.6645 | 0.5629 |
| 0.4707 | 0.67 | 6300 | 0.6296 | 0.5895 |
| 0.4707 | 0.69 | 6400 | 0.6078 | 0.5183 |
| 0.4632 | 0.7 | 6500 | 0.6270 | 0.5619 |
| 0.4632 | 0.71 | 6600 | 0.6050 | 0.5336 |
| 0.4632 | 0.72 | 6700 | 0.6185 | 0.5449 |
| 0.4632 | 0.73 | 6800 | 0.6281 | 0.5645 |
| 0.4632 | 0.74 | 6900 | 0.5877 | 0.5084 |
| 0.4514 | 0.75 | 7000 | 0.6199 | 0.5403 |
| 0.4514 | 0.76 | 7100 | 0.6293 | 0.5275 |
| 0.4514 | 0.77 | 7200 | 0.6290 | 0.5447 |
| 0.4514 | 0.78 | 7300 | 0.6130 | 0.5373 |
| 0.4514 | 0.79 | 7400 | 0.6138 | 0.5285 |
| 0.4457 | 0.8 | 7500 | 0.6040 | 0.5259 |
| 0.4457 | 0.81 | 7600 | 0.6220 | 0.5686 |
| 0.4457 | 0.82 | 7700 | 0.5915 | 0.5164 |
| 0.4457 | 0.84 | 7800 | 0.6270 | 0.5289 |
| 0.4457 | 0.85 | 7900 | 0.6224 | 0.5515 |
| 0.4458 | 0.86 | 8000 | 0.6161 | 0.5323 |
| 0.4458 | 0.87 | 8100 | 0.5827 | 0.5122 |
| 0.4458 | 0.88 | 8200 | 0.6067 | 0.5202 |
| 0.4458 | 0.89 | 8300 | 0.6087 | 0.5192 |
| 0.4458 | 0.9 | 8400 | 0.6859 | 0.5796 |
| 0.4409 | 0.91 | 8500 | 0.6180 | 0.5131 |
| 0.4409 | 0.92 | 8600 | 0.5945 | 0.4948 |
| 0.4409 | 0.93 | 8700 | 0.5967 | 0.5532 |
| 0.4409 | 0.94 | 8800 | 0.5770 | 0.4961 |
| 0.4409 | 0.95 | 8900 | 0.5809 | 0.5203 |
| 0.4305 | 0.96 | 9000 | 0.5805 | 0.5039 |
| 0.4305 | 0.97 | 9100 | 0.5873 | 0.5188 |
| 0.4305 | 0.98 | 9200 | 0.6277 | 0.5516 |
| 0.4305 | 1.0 | 9300 | 0.5727 | 0.5052 |
| 0.4305 | 1.01 | 9400 | 0.5858 | 0.5123 |
| 0.4264 | 1.02 | 9500 | 0.5692 | 0.4968 |
| 0.4264 | 1.03 | 9600 | 0.5954 | 0.5117 |
| 0.4264 | 1.04 | 9700 | 0.5904 | 0.5076 |
| 0.4264 | 1.05 | 9800 | 0.6046 | 0.5101 |
| 0.4264 | 1.06 | 9900 | 0.5616 | 0.4926 |
| 0.4176 | 1.07 | 10000 | 0.5971 | 0.5368 |
| 0.4176 | 1.08 | 10100 | 0.5706 | 0.4940 |
| 0.4176 | 1.09 | 10200 | 0.5612 | 0.5032 |
| 0.4176 | 1.1 | 10300 | 0.5672 | 0.4944 |
| 0.4176 | 1.11 | 10400 | 0.5915 | 0.5218 |
| 0.4033 | 1.12 | 10500 | 0.5706 | 0.5051 |
| 0.4033 | 1.13 | 10600 | 0.5661 | 0.4934 |
| 0.4033 | 1.15 | 10700 | 0.5724 | 0.4903 |
| 0.4033 | 1.16 | 10800 | 0.5792 | 0.4940 |
| 0.4033 | 1.17 | 10900 | 0.5744 | 0.4911 |
| 0.392 | 1.18 | 11000 | 0.5767 | 0.5162 |
| 0.392 | 1.19 | 11100 | 0.5588 | 0.4835 |
| 0.392 | 1.2 | 11200 | 0.5609 | 0.4922 |
| 0.392 | 1.21 | 11300 | 0.5890 | 0.4914 |
| 0.392 | 1.22 | 11400 | 0.5525 | 0.4897 |
| 0.387 | 1.23 | 11500 | 0.5704 | 0.5051 |
| 0.387 | 1.24 | 11600 | 0.5539 | 0.5014 |
| 0.387 | 1.25 | 11700 | 0.5473 | 0.4882 |
| 0.387 | 1.26 | 11800 | 0.5662 | 0.5004 |
| 0.387 | 1.27 | 11900 | 0.5785 | 0.5220 |
| 0.3956 | 1.28 | 12000 | 0.5990 | 0.5114 |
| 0.3956 | 1.3 | 12100 | 0.5497 | 0.4895 |
| 0.3956 | 1.31 | 12200 | 0.5538 | 0.4895 |
| 0.3956 | 1.32 | 12300 | 0.5652 | 0.4913 |
| 0.3956 | 1.33 | 12400 | 0.5682 | 0.5128 |
| 0.4043 | 1.34 | 12500 | 0.5830 | 0.4999 |
| 0.4043 | 1.35 | 12600 | 0.5686 | 0.4865 |
| 0.4043 | 1.36 | 12700 | 0.5688 | 0.4937 |
| 0.4043 | 1.37 | 12800 | 0.5753 | 0.5034 |
| 0.4043 | 1.38 | 12900 | 0.5898 | 0.4865 |
| 0.3997 | 1.39 | 13000 | 0.5723 | 0.4963 |
| 0.3997 | 1.4 | 13100 | 0.5767 | 0.4986 |
| 0.3997 | 1.41 | 13200 | 0.5960 | 0.5084 |
| 0.3997 | 1.42 | 13300 | 0.5859 | 0.5096 |
| 0.3997 | 1.43 | 13400 | 0.5491 | 0.4784 |
| 0.3997 | 1.45 | 13500 | 0.5636 | 0.5049 |
| 0.3997 | 1.46 | 13600 | 0.5667 | 0.4708 |
| 0.3997 | 1.47 | 13700 | 0.5757 | 0.4862 |
| 0.3997 | 1.48 | 13800 | 0.5444 | 0.4816 |
| 0.3997 | 1.49 | 13900 | 0.5557 | 0.4792 |
| 0.3954 | 1.5 | 14000 | 0.5437 | 0.4810 |
| 0.3954 | 1.51 | 14100 | 0.5489 | 0.4674 |
| 0.3954 | 1.52 | 14200 | 0.5415 | 0.4674 |
| 0.3954 | 1.53 | 14300 | 0.5481 | 0.4902 |
| 0.3954 | 1.54 | 14400 | 0.5474 | 0.4763 |
| 0.3814 | 1.55 | 14500 | 0.5588 | 0.4731 |
| 0.3814 | 1.56 | 14600 | 0.5746 | 0.4820 |
| 0.3814 | 1.57 | 14700 | 0.5676 | 0.4884 |
| 0.3814 | 1.58 | 14800 | 0.5495 | 0.4711 |
| 0.3814 | 1.6 | 14900 | 0.5565 | 0.4782 |
| 0.3877 | 1.61 | 15000 | 0.5671 | 0.5135 |
| 0.3877 | 1.62 | 15100 | 0.5512 | 0.4868 |
| 0.3877 | 1.63 | 15200 | 0.5683 | 0.4650 |
| 0.3877 | 1.64 | 15300 | 0.5427 | 0.4717 |
| 0.3877 | 1.65 | 15400 | 0.5519 | 0.4651 |
| 0.387 | 1.66 | 15500 | 0.5327 | 0.4456 |
| 0.387 | 1.67 | 15600 | 0.5371 | 0.4673 |
| 0.387 | 1.68 | 15700 | 0.5337 | 0.4705 |
| 0.387 | 1.69 | 15800 | 0.5606 | 0.4992 |
| 0.387 | 1.7 | 15900 | 0.5254 | 0.4613 |
| 0.3877 | 1.71 | 16000 | 0.5619 | 0.4882 |
| 0.3877 | 1.72 | 16100 | 0.5212 | 0.4560 |
| 0.3877 | 1.73 | 16200 | 0.5369 | 0.4696 |
| 0.3877 | 1.75 | 16300 | 0.5392 | 0.4677 |
| 0.3877 | 1.76 | 16400 | 0.5353 | 0.4768 |
| 0.3739 | 1.77 | 16500 | 0.5435 | 0.4777 |
| 0.3739 | 1.78 | 16600 | 0.5343 | 0.4884 |
| 0.3739 | 1.79 | 16700 | 0.5309 | 0.4942 |
| 0.3739 | 1.8 | 16800 | 0.5373 | 0.4727 |
| 0.3739 | 1.81 | 16900 | 0.5550 | 0.4686 |
| 0.3884 | 1.82 | 17000 | 0.5486 | 0.4826 |
| 0.3884 | 1.83 | 17100 | 0.5508 | 0.4862 |
| 0.3884 | 1.84 | 17200 | 0.5423 | 0.4855 |
| 0.3884 | 1.85 | 17300 | 0.5478 | 0.4730 |
| 0.3884 | 1.86 | 17400 | 0.5438 | 0.4938 |
| 0.3842 | 1.87 | 17500 | 0.5571 | 0.4818 |
| 0.3842 | 1.88 | 17600 | 0.5402 | 0.4753 |
| 0.3842 | 1.9 | 17700 | 0.5679 | 0.4827 |
| 0.3842 | 1.91 | 17800 | 0.5385 | 0.4642 |
| 0.3842 | 1.92 | 17900 | 0.5519 | 0.4942 |
| 0.3953 | 1.93 | 18000 | 0.5559 | 0.4745 |
| 0.3953 | 1.94 | 18100 | 0.5657 | 0.4963 |
| 0.3953 | 1.95 | 18200 | 0.5296 | 0.4642 |
| 0.3953 | 1.96 | 18300 | 0.5529 | 0.4907 |
| 0.3953 | 1.97 | 18400 | 0.5380 | 0.4536 |
| 0.3745 | 1.98 | 18500 | 0.5276 | 0.4678 |
| 0.3745 | 1.99 | 18600 | 0.5544 | 0.4854 |
| 0.3745 | 2.0 | 18700 | 0.5195 | 0.4535 |
| 0.3745 | 2.01 | 18800 | 0.5165 | 0.4635 |
| 0.3745 | 2.02 | 18900 | 0.5062 | 0.4431 |
| 0.3538 | 2.03 | 19000 | 0.5255 | 0.4509 |
| 0.3538 | 2.04 | 19100 | 0.5125 | 0.4512 |
| 0.3538 | 2.06 | 19200 | 0.5105 | 0.4504 |
| 0.3538 | 2.07 | 19300 | 0.5000 | 0.4490 |
| 0.3538 | 2.08 | 19400 | 0.5150 | 0.4520 |
| 0.356 | 2.09 | 19500 | 0.5053 | 0.4383 |
| 0.356 | 2.1 | 19600 | 0.5085 | 0.4417 |
| 0.356 | 2.11 | 19700 | 0.5229 | 0.4490 |
| 0.356 | 2.12 | 19800 | 0.5326 | 0.4492 |
| 0.356 | 2.13 | 19900 | 0.5139 | 0.4491 |
| 0.3474 | 2.14 | 20000 | 0.5134 | 0.4384 |
| 0.3474 | 2.15 | 20100 | 0.5498 | 0.4606 |
| 0.3474 | 2.16 | 20200 | 0.5324 | 0.4540 |
| 0.3474 | 2.17 | 20300 | 0.5338 | 0.4548 |
| 0.3474 | 2.18 | 20400 | 0.5076 | 0.4425 |
| 0.345 | 2.19 | 20500 | 0.5253 | 0.4550 |
| 0.345 | 2.21 | 20600 | 0.5125 | 0.4618 |
| 0.345 | 2.22 | 20700 | 0.5171 | 0.4487 |
| 0.345 | 2.23 | 20800 | 0.5232 | 0.4464 |
| 0.345 | 2.24 | 20900 | 0.5298 | 0.4588 |
| 0.341 | 2.25 | 21000 | 0.5342 | 0.4576 |
| 0.341 | 2.26 | 21100 | 0.5515 | 0.4678 |
| 0.341 | 2.27 | 21200 | 0.5041 | 0.4495 |
| 0.341 | 2.28 | 21300 | 0.5169 | 0.4473 |
| 0.341 | 2.29 | 21400 | 0.5227 | 0.4494 |
| 0.354 | 2.3 | 21500 | 0.5214 | 0.4458 |
| 0.354 | 2.31 | 21600 | 0.5303 | 0.4587 |
| 0.354 | 2.32 | 21700 | 0.5237 | 0.4597 |
| 0.354 | 2.33 | 21800 | 0.5067 | 0.4460 |
| 0.354 | 2.34 | 21900 | 0.5117 | 0.4560 |
| 0.3333 | 2.36 | 22000 | 0.5104 | 0.4359 |
| 0.3333 | 2.37 | 22100 | 0.5326 | 0.4679 |
| 0.3333 | 2.38 | 22200 | 0.5098 | 0.4510 |
| 0.3333 | 2.39 | 22300 | 0.5044 | 0.4445 |
| 0.3333 | 2.4 | 22400 | 0.5219 | 0.4489 |
| 0.3514 | 2.41 | 22500 | 0.4987 | 0.4433 |
| 0.3514 | 2.42 | 22600 | 0.5009 | 0.4338 |
| 0.3514 | 2.43 | 22700 | 0.5252 | 0.4444 |
| 0.3514 | 2.44 | 22800 | 0.4861 | 0.4269 |
| 0.3514 | 2.45 | 22900 | 0.5157 | 0.4421 |
| 0.3444 | 2.46 | 23000 | 0.5277 | 0.4426 |
| 0.3444 | 2.47 | 23100 | 0.5213 | 0.4378 |
| 0.3444 | 2.48 | 23200 | 0.5172 | 0.4482 |
| 0.3444 | 2.49 | 23300 | 0.5142 | 0.4376 |
| 0.3444 | 2.51 | 23400 | 0.5044 | 0.4231 |
| 0.3536 | 2.52 | 23500 | 0.5268 | 0.4496 |
| 0.3536 | 2.53 | 23600 | 0.5176 | 0.4326 |
| 0.3536 | 2.54 | 23700 | 0.5032 | 0.4296 |
| 0.3536 | 2.55 | 23800 | 0.5211 | 0.4460 |
| 0.3536 | 2.56 | 23900 | 0.5093 | 0.4379 |
| 0.337 | 2.57 | 24000 | 0.4990 | 0.4311 |
| 0.337 | 2.58 | 24100 | 0.4962 | 0.4329 |
| 0.337 | 2.59 | 24200 | 0.5033 | 0.4289 |
| 0.337 | 2.6 | 24300 | 0.5260 | 0.4534 |
| 0.337 | 2.61 | 24400 | 0.5309 | 0.4441 |
| 0.3393 | 2.62 | 24500 | 0.5132 | 0.4346 |
| 0.3393 | 2.63 | 24600 | 0.5189 | 0.4233 |
| 0.3393 | 2.64 | 24700 | 0.5074 | 0.4326 |
| 0.3393 | 2.66 | 24800 | 0.5111 | 0.4254 |
| 0.3393 | 2.67 | 24900 | 0.4933 | 0.4254 |
| 0.3334 | 2.68 | 25000 | 0.5046 | 0.4407 |
| 0.3334 | 2.69 | 25100 | 0.5010 | 0.4404 |
| 0.3334 | 2.7 | 25200 | 0.5045 | 0.4236 |
| 0.3334 | 2.71 | 25300 | 0.4938 | 0.4305 |
| 0.3334 | 2.72 | 25400 | 0.5021 | 0.4383 |
| 0.3366 | 2.73 | 25500 | 0.4953 | 0.4202 |
| 0.3366 | 2.74 | 25600 | 0.4985 | 0.4338 |
| 0.3366 | 2.75 | 25700 | 0.4765 | 0.4161 |
| 0.3366 | 2.76 | 25800 | 0.4873 | 0.4292 |
| 0.3366 | 2.77 | 25900 | 0.4998 | 0.4189 |
| 0.3359 | 2.78 | 26000 | 0.4991 | 0.4248 |
| 0.3359 | 2.79 | 26100 | 0.5012 | 0.4307 |
| 0.3359 | 2.81 | 26200 | 0.5081 | 0.4151 |
| 0.3359 | 2.82 | 26300 | 0.4997 | 0.4305 |
| 0.3359 | 2.83 | 26400 | 0.4969 | 0.4302 |
| 0.3396 | 2.84 | 26500 | 0.4784 | 0.4271 |
| 0.3396 | 2.85 | 26600 | 0.4804 | 0.4149 |
| 0.3396 | 2.86 | 26700 | 0.4900 | 0.4192 |
| 0.3396 | 2.87 | 26800 | 0.5044 | 0.4325 |
| 0.3396 | 2.88 | 26900 | 0.4935 | 0.4376 |
| 0.3356 | 2.89 | 27000 | 0.5007 | 0.4269 |
| 0.3356 | 2.9 | 27100 | 0.4887 | 0.4178 |
| 0.3356 | 2.91 | 27200 | 0.4770 | 0.4170 |
| 0.3356 | 2.92 | 27300 | 0.4847 | 0.4167 |
| 0.3356 | 2.93 | 27400 | 0.4861 | 0.4139 |
| 0.3395 | 2.94 | 27500 | 0.4975 | 0.4291 |
| 0.3395 | 2.95 | 27600 | 0.5056 | 0.4471 |
| 0.3395 | 2.97 | 27700 | 0.5111 | 0.4375 |
| 0.3395 | 2.98 | 27800 | 0.5327 | 0.4577 |
| 0.3395 | 2.99 | 27900 | 0.5067 | 0.4393 |
| 0.3332 | 3.0 | 28000 | 0.4898 | 0.4188 |
| 0.3332 | 3.01 | 28100 | 0.4790 | 0.4093 |
| 0.3332 | 3.02 | 28200 | 0.4828 | 0.4202 |
| 0.3332 | 3.03 | 28300 | 0.4836 | 0.4146 |
| 0.3332 | 3.04 | 28400 | 0.4901 | 0.4242 |
| 0.2984 | 3.05 | 28500 | 0.4772 | 0.4118 |
| 0.2984 | 3.06 | 28600 | 0.5055 | 0.4213 |
| 0.2984 | 3.07 | 28700 | 0.4911 | 0.4100 |
| 0.2984 | 3.08 | 28800 | 0.4737 | 0.4087 |
| 0.2984 | 3.09 | 28900 | 0.4930 | 0.4216 |
| 0.3056 | 3.1 | 29000 | 0.4736 | 0.4109 |
| 0.3056 | 3.12 | 29100 | 0.4863 | 0.4058 |
| 0.3056 | 3.13 | 29200 | 0.4784 | 0.4184 |
| 0.3056 | 3.14 | 29300 | 0.4923 | 0.4240 |
| 0.3056 | 3.15 | 29400 | 0.4846 | 0.4226 |
| 0.2995 | 3.16 | 29500 | 0.4829 | 0.4086 |
| 0.2995 | 3.17 | 29600 | 0.4934 | 0.4240 |
| 0.2995 | 3.18 | 29700 | 0.4893 | 0.4152 |
| 0.2995 | 3.19 | 29800 | 0.4730 | 0.4227 |
| 0.2995 | 3.2 | 29900 | 0.5027 | 0.4330 |
| 0.2926 | 3.21 | 30000 | 0.4903 | 0.4112 |
| 0.2926 | 3.22 | 30100 | 0.4961 | 0.4157 |
| 0.2926 | 3.23 | 30200 | 0.4980 | 0.4269 |
| 0.2926 | 3.24 | 30300 | 0.4896 | 0.4126 |
| 0.2926 | 3.25 | 30400 | 0.4726 | 0.4062 |
| 0.301 | 3.27 | 30500 | 0.4733 | 0.3985 |
| 0.301 | 3.28 | 30600 | 0.4772 | 0.4047 |
| 0.301 | 3.29 | 30700 | 0.4806 | 0.4082 |
| 0.301 | 3.3 | 30800 | 0.4683 | 0.4011 |
| 0.301 | 3.31 | 30900 | 0.4775 | 0.4079 |
| 0.2933 | 3.32 | 31000 | 0.4729 | 0.4083 |
| 0.2933 | 3.33 | 31100 | 0.4628 | 0.4016 |
| 0.2933 | 3.34 | 31200 | 0.4753 | 0.4192 |
| 0.2933 | 3.35 | 31300 | 0.4687 | 0.4185 |
| 0.2933 | 3.36 | 31400 | 0.4806 | 0.4106 |
| 0.2957 | 3.37 | 31500 | 0.4889 | 0.4240 |
| 0.2957 | 3.38 | 31600 | 0.4882 | 0.4182 |
| 0.2957 | 3.39 | 31700 | 0.4798 | 0.4162 |
| 0.2957 | 3.4 | 31800 | 0.4718 | 0.4108 |
| 0.2957 | 3.42 | 31900 | 0.4685 | 0.4101 |
| 0.3039 | 3.43 | 32000 | 0.4816 | 0.4188 |
| 0.3039 | 3.44 | 32100 | 0.4874 | 0.4139 |
| 0.3039 | 3.45 | 32200 | 0.4899 | 0.4115 |
| 0.3039 | 3.46 | 32300 | 0.4852 | 0.4180 |
| 0.3039 | 3.47 | 32400 | 0.5074 | 0.4129 |
| 0.3006 | 3.48 | 32500 | 0.4837 | 0.4076 |
| 0.3006 | 3.49 | 32600 | 0.4927 | 0.4098 |
| 0.3006 | 3.5 | 32700 | 0.4999 | 0.4172 |
| 0.3006 | 3.51 | 32800 | 0.4773 | 0.4194 |
| 0.3006 | 3.52 | 32900 | 0.4859 | 0.4058 |
| 0.3089 | 3.53 | 33000 | 0.4783 | 0.4104 |
| 0.3089 | 3.54 | 33100 | 0.4622 | 0.4020 |
| 0.3089 | 3.55 | 33200 | 0.4840 | 0.4065 |
| 0.3089 | 3.57 | 33300 | 0.4756 | 0.4241 |
| 0.3089 | 3.58 | 33400 | 0.4831 | 0.4170 |
| 0.3061 | 3.59 | 33500 | 0.4794 | 0.4068 |
| 0.3061 | 3.6 | 33600 | 0.4730 | 0.4037 |
| 0.3061 | 3.61 | 33700 | 0.4808 | 0.4138 |
| 0.3061 | 3.62 | 33800 | 0.4924 | 0.4248 |
| 0.3061 | 3.63 | 33900 | 0.4749 | 0.4112 |
| 0.3047 | 3.64 | 34000 | 0.4924 | 0.4326 |
| 0.3047 | 3.65 | 34100 | 0.4745 | 0.4104 |
| 0.3047 | 3.66 | 34200 | 0.4760 | 0.4123 |
| 0.3047 | 3.67 | 34300 | 0.4788 | 0.4066 |
| 0.3047 | 3.68 | 34400 | 0.4627 | 0.4158 |
| 0.3042 | 3.69 | 34500 | 0.4974 | 0.4131 |
| 0.3042 | 3.7 | 34600 | 0.4593 | 0.4063 |
| 0.3042 | 3.72 | 34700 | 0.4549 | 0.3928 |
| 0.3042 | 3.73 | 34800 | 0.4690 | 0.3898 |
| 0.3042 | 3.74 | 34900 | 0.4560 | 0.4007 |
| 0.2963 | 3.75 | 35000 | 0.4606 | 0.3959 |
| 0.2963 | 3.76 | 35100 | 0.4762 | 0.4057 |
| 0.2963 | 3.77 | 35200 | 0.4750 | 0.4034 |
| 0.2963 | 3.78 | 35300 | 0.4772 | 0.4114 |
| 0.2963 | 3.79 | 35400 | 0.4669 | 0.3995 |
| 0.3012 | 3.8 | 35500 | 0.4709 | 0.4090 |
| 0.3012 | 3.81 | 35600 | 0.4722 | 0.4123 |
| 0.3012 | 3.82 | 35700 | 0.4913 | 0.4165 |
| 0.3012 | 3.83 | 35800 | 0.4814 | 0.4063 |
| 0.3012 | 3.84 | 35900 | 0.4869 | 0.4171 |
| 0.3015 | 3.85 | 36000 | 0.4791 | 0.4059 |
| 0.3015 | 3.87 | 36100 | 0.4535 | 0.3976 |
| 0.3015 | 3.88 | 36200 | 0.4706 | 0.4009 |
| 0.3015 | 3.89 | 36300 | 0.4679 | 0.4012 |
| 0.3015 | 3.9 | 36400 | 0.4736 | 0.4096 |
| 0.2965 | 3.91 | 36500 | 0.4756 | 0.4106 |
| 0.2965 | 3.92 | 36600 | 0.4669 | 0.4085 |
| 0.2965 | 3.93 | 36700 | 0.4796 | 0.4054 |
| 0.2965 | 3.94 | 36800 | 0.4583 | 0.3932 |
| 0.2965 | 3.95 | 36900 | 0.4430 | 0.3969 |
| 0.2993 | 3.96 | 37000 | 0.4560 | 0.3914 |
| 0.2993 | 3.97 | 37100 | 0.4739 | 0.4002 |
| 0.2993 | 3.98 | 37200 | 0.4598 | 0.3912 |
| 0.2993 | 3.99 | 37300 | 0.4607 | 0.3907 |
| 0.2993 | 4.0 | 37400 | 0.4709 | 0.3986 |
| 0.2886 | 4.01 | 37500 | 0.4642 | 0.4067 |
| 0.2886 | 4.03 | 37600 | 0.4684 | 0.3984 |
| 0.2886 | 4.04 | 37700 | 0.4690 | 0.3979 |
| 0.2886 | 4.05 | 37800 | 0.4722 | 0.3980 |
| 0.2886 | 4.06 | 37900 | 0.4734 | 0.3927 |
| 0.2534 | 4.07 | 38000 | 0.4724 | 0.3988 |
| 0.2534 | 4.08 | 38100 | 0.4665 | 0.3986 |
| 0.2534 | 4.09 | 38200 | 0.4659 | 0.4036 |
| 0.2534 | 4.1 | 38300 | 0.4694 | 0.3952 |
| 0.2534 | 4.11 | 38400 | 0.4719 | 0.3891 |
| 0.2596 | 4.12 | 38500 | 0.4687 | 0.3994 |
| 0.2596 | 4.13 | 38600 | 0.4705 | 0.3903 |
| 0.2596 | 4.14 | 38700 | 0.4601 | 0.3975 |
| 0.2596 | 4.15 | 38800 | 0.4666 | 0.3971 |
| 0.2596 | 4.16 | 38900 | 0.4772 | 0.3892 |
| 0.2643 | 4.18 | 39000 | 0.4810 | 0.4071 |
| 0.2643 | 4.19 | 39100 | 0.4980 | 0.4167 |
| 0.2643 | 4.2 | 39200 | 0.4657 | 0.3996 |
| 0.2643 | 4.21 | 39300 | 0.4869 | 0.4002 |
| 0.2643 | 4.22 | 39400 | 0.4656 | 0.3913 |
| 0.265 | 4.23 | 39500 | 0.4720 | 0.3947 |
| 0.265 | 4.24 | 39600 | 0.4711 | 0.3970 |
| 0.265 | 4.25 | 39700 | 0.4689 | 0.3933 |
| 0.265 | 4.26 | 39800 | 0.4728 | 0.4017 |
| 0.265 | 4.27 | 39900 | 0.4673 | 0.3847 |
| 0.2644 | 4.28 | 40000 | 0.4636 | 0.3960 |
| 0.2644 | 4.29 | 40100 | 0.4699 | 0.3864 |
| 0.2644 | 4.3 | 40200 | 0.4580 | 0.3874 |
| 0.2644 | 4.31 | 40300 | 0.4763 | 0.3951 |
| 0.2644 | 4.33 | 40400 | 0.4752 | 0.4141 |
| 0.2633 | 4.34 | 40500 | 0.4918 | 0.3994 |
| 0.2633 | 4.35 | 40600 | 0.4783 | 0.4026 |
| 0.2633 | 4.36 | 40700 | 0.4739 | 0.4034 |
| 0.2633 | 4.37 | 40800 | 0.4750 | 0.4000 |
| 0.2633 | 4.38 | 40900 | 0.4608 | 0.3943 |
| 0.2679 | 4.39 | 41000 | 0.4615 | 0.3891 |
| 0.2679 | 4.4 | 41100 | 0.4730 | 0.3984 |
| 0.2679 | 4.41 | 41200 | 0.4728 | 0.4011 |
| 0.2679 | 4.42 | 41300 | 0.4675 | 0.3932 |
| 0.2679 | 4.43 | 41400 | 0.4662 | 0.3929 |
| 0.2682 | 4.44 | 41500 | 0.4490 | 0.3837 |
| 0.2682 | 4.45 | 41600 | 0.4611 | 0.3838 |
| 0.2682 | 4.46 | 41700 | 0.4605 | 0.3945 |
| 0.2682 | 4.48 | 41800 | 0.4730 | 0.3938 |
| 0.2682 | 4.49 | 41900 | 0.4567 | 0.3874 |
| 0.2658 | 4.5 | 42000 | 0.4715 | 0.3869 |
| 0.2658 | 4.51 | 42100 | 0.4514 | 0.3833 |
| 0.2658 | 4.52 | 42200 | 0.4602 | 0.3898 |
| 0.2658 | 4.53 | 42300 | 0.4846 | 0.4022 |
| 0.2658 | 4.54 | 42400 | 0.4474 | 0.3810 |
| 0.2676 | 4.55 | 42500 | 0.4513 | 0.3820 |
| 0.2676 | 4.56 | 42600 | 0.4588 | 0.3928 |
| 0.2676 | 4.57 | 42700 | 0.4601 | 0.3894 |
| 0.2676 | 4.58 | 42800 | 0.4516 | 0.3792 |
| 0.2676 | 4.59 | 42900 | 0.4482 | 0.3848 |
| 0.2693 | 4.6 | 43000 | 0.4695 | 0.4008 |
| 0.2693 | 4.61 | 43100 | 0.4580 | 0.3871 |
| 0.2693 | 4.63 | 43200 | 0.4419 | 0.3857 |
| 0.2693 | 4.64 | 43300 | 0.4534 | 0.3796 |
| 0.2693 | 4.65 | 43400 | 0.4532 | 0.3856 |
| 0.2641 | 4.66 | 43500 | 0.4421 | 0.3809 |
| 0.2641 | 4.67 | 43600 | 0.4400 | 0.3844 |
| 0.2641 | 4.68 | 43700 | 0.4515 | 0.3833 |
| 0.2641 | 4.69 | 43800 | 0.4462 | 0.3808 |
| 0.2641 | 4.7 | 43900 | 0.4741 | 0.3926 |
| 0.2626 | 4.71 | 44000 | 0.4542 | 0.3931 |
| 0.2626 | 4.72 | 44100 | 0.4555 | 0.3885 |
| 0.2626 | 4.73 | 44200 | 0.4505 | 0.3845 |
| 0.2626 | 4.74 | 44300 | 0.4593 | 0.3871 |
| 0.2626 | 4.75 | 44400 | 0.4359 | 0.3830 |
| 0.2648 | 4.76 | 44500 | 0.4387 | 0.3736 |
| 0.2648 | 4.78 | 44600 | 0.4529 | 0.3807 |
| 0.2648 | 4.79 | 44700 | 0.4566 | 0.3837 |
| 0.2648 | 4.8 | 44800 | 0.4557 | 0.4067 |
| 0.2648 | 4.81 | 44900 | 0.4609 | 0.3852 |
| 0.2603 | 4.82 | 45000 | 0.4667 | 0.4005 |
| 0.2603 | 4.83 | 45100 | 0.4666 | 0.3836 |
| 0.2603 | 4.84 | 45200 | 0.4775 | 0.3946 |
| 0.2603 | 4.85 | 45300 | 0.4701 | 0.3925 |
| 0.2603 | 4.86 | 45400 | 0.4579 | 0.3889 |
| 0.2626 | 4.87 | 45500 | 0.4516 | 0.3884 |
| 0.2626 | 4.88 | 45600 | 0.4605 | 0.3878 |
| 0.2626 | 4.89 | 45700 | 0.4576 | 0.3802 |
| 0.2626 | 4.9 | 45800 | 0.4553 | 0.3780 |
| 0.2626 | 4.91 | 45900 | 0.4336 | 0.3752 |
| 0.2602 | 4.93 | 46000 | 0.4419 | 0.3881 |
| 0.2602 | 4.94 | 46100 | 0.4601 | 0.3843 |
| 0.2602 | 4.95 | 46200 | 0.4437 | 0.3956 |
| 0.2602 | 4.96 | 46300 | 0.4524 | 0.3844 |
| 0.2602 | 4.97 | 46400 | 0.4709 | 0.4031 |
| 0.2609 | 4.98 | 46500 | 0.4500 | 0.3872 |
| 0.2609 | 4.99 | 46600 | 0.4366 | 0.3846 |
| 0.2609 | 5.0 | 46700 | 0.4653 | 0.3884 |
| 0.2609 | 5.01 | 46800 | 0.4602 | 0.3932 |
| 0.2609 | 5.02 | 46900 | 0.4668 | 0.3854 |
| 0.2472 | 5.03 | 47000 | 0.4616 | 0.3891 |
| 0.2472 | 5.04 | 47100 | 0.4543 | 0.3836 |
| 0.2472 | 5.05 | 47200 | 0.4526 | 0.3822 |
| 0.2472 | 5.06 | 47300 | 0.4539 | 0.3741 |
| 0.2472 | 5.07 | 47400 | 0.4776 | 0.3818 |
| 0.2278 | 5.09 | 47500 | 0.4771 | 0.3794 |
| 0.2278 | 5.1 | 47600 | 0.4662 | 0.3831 |
| 0.2278 | 5.11 | 47700 | 0.4558 | 0.4032 |
| 0.2278 | 5.12 | 47800 | 0.4904 | 0.3918 |
| 0.2278 | 5.13 | 47900 | 0.4765 | 0.3890 |
| 0.2311 | 5.14 | 48000 | 0.4674 | 0.3882 |
| 0.2311 | 5.15 | 48100 | 0.4609 | 0.3947 |
| 0.2311 | 5.16 | 48200 | 0.4588 | 0.3837 |
| 0.2311 | 5.17 | 48300 | 0.4827 | 0.3845 |
| 0.2311 | 5.18 | 48400 | 0.4711 | 0.3839 |
| 0.229 | 5.19 | 48500 | 0.4583 | 0.3873 |
| 0.229 | 5.2 | 48600 | 0.4800 | 0.3858 |
| 0.229 | 5.21 | 48700 | 0.4611 | 0.3800 |
| 0.229 | 5.22 | 48800 | 0.4504 | 0.3889 |
| 0.229 | 5.24 | 48900 | 0.4569 | 0.3761 |
| 0.2313 | 5.25 | 49000 | 0.4732 | 0.3915 |
| 0.2313 | 5.26 | 49100 | 0.4728 | 0.3832 |
| 0.2313 | 5.27 | 49200 | 0.4667 | 0.3815 |
| 0.2313 | 5.28 | 49300 | 0.4912 | 0.3856 |
| 0.2313 | 5.29 | 49400 | 0.4790 | 0.3946 |
| 0.2266 | 5.3 | 49500 | 0.4597 | 0.3763 |
| 0.2266 | 5.31 | 49600 | 0.4580 | 0.3778 |
| 0.2266 | 5.32 | 49700 | 0.4439 | 0.3721 |
| 0.2266 | 5.33 | 49800 | 0.4611 | 0.3704 |
| 0.2266 | 5.34 | 49900 | 0.4599 | 0.3769 |
| 0.235 | 5.35 | 50000 | 0.4543 | 0.3808 |
| 0.235 | 5.36 | 50100 | 0.4555 | 0.3773 |
| 0.235 | 5.37 | 50200 | 0.4525 | 0.3815 |
| 0.235 | 5.39 | 50300 | 0.4557 | 0.3814 |
| 0.235 | 5.4 | 50400 | 0.4604 | 0.3754 |
| 0.2299 | 5.41 | 50500 | 0.4658 | 0.3770 |
| 0.2299 | 5.42 | 50600 | 0.4658 | 0.3884 |
| 0.2299 | 5.43 | 50700 | 0.4701 | 0.3919 |
| 0.2299 | 5.44 | 50800 | 0.4495 | 0.3818 |
| 0.2299 | 5.45 | 50900 | 0.4703 | 0.3886 |
| 0.2307 | 5.46 | 51000 | 0.4395 | 0.3743 |
| 0.2307 | 5.47 | 51100 | 0.4487 | 0.3751 |
| 0.2307 | 5.48 | 51200 | 0.4355 | 0.3733 |
| 0.2307 | 5.49 | 51300 | 0.4622 | 0.3811 |
| 0.2307 | 5.5 | 51400 | 0.4443 | 0.3801 |
| 0.2383 | 5.51 | 51500 | 0.4411 | 0.3743 |
| 0.2383 | 5.52 | 51600 | 0.4438 | 0.3778 |
| 0.2383 | 5.54 | 51700 | 0.4559 | 0.3784 |
| 0.2383 | 5.55 | 51800 | 0.4309 | 0.3656 |
| 0.2383 | 5.56 | 51900 | 0.4455 | 0.3660 |
| 0.23 | 5.57 | 52000 | 0.4436 | 0.3598 |
| 0.23 | 5.58 | 52100 | 0.4344 | 0.3685 |
| 0.23 | 5.59 | 52200 | 0.4282 | 0.3690 |
| 0.23 | 5.6 | 52300 | 0.4464 | 0.3800 |
| 0.23 | 5.61 | 52400 | 0.4458 | 0.3909 |
| 0.2305 | 5.62 | 52500 | 0.4483 | 0.3756 |
| 0.2305 | 5.63 | 52600 | 0.4547 | 0.3785 |
| 0.2305 | 5.64 | 52700 | 0.4671 | 0.3820 |
| 0.2305 | 5.65 | 52800 | 0.4449 | 0.3658 |
| 0.2305 | 5.66 | 52900 | 0.4596 | 0.3716 |
| 0.2237 | 5.67 | 53000 | 0.4399 | 0.3669 |
| 0.2237 | 5.69 | 53100 | 0.4410 | 0.3719 |
| 0.2237 | 5.7 | 53200 | 0.4574 | 0.3619 |
| 0.2237 | 5.71 | 53300 | 0.4443 | 0.3690 |
| 0.2237 | 5.72 | 53400 | 0.4381 | 0.3678 |
| 0.2337 | 5.73 | 53500 | 0.4490 | 0.3687 |
| 0.2337 | 5.74 | 53600 | 0.4427 | 0.3752 |
| 0.2337 | 5.75 | 53700 | 0.4423 | 0.3858 |
| 0.2337 | 5.76 | 53800 | 0.4702 | 0.3825 |
| 0.2337 | 5.77 | 53900 | 0.4724 | 0.3800 |
| 0.23 | 5.78 | 54000 | 0.4476 | 0.3827 |
| 0.23 | 5.79 | 54100 | 0.4508 | 0.3919 |
| 0.23 | 5.8 | 54200 | 0.4564 | 0.3788 |
| 0.23 | 5.81 | 54300 | 0.4602 | 0.3888 |
| 0.23 | 5.82 | 54400 | 0.4538 | 0.3732 |
| 0.2334 | 5.84 | 54500 | 0.4500 | 0.3808 |
| 0.2334 | 5.85 | 54600 | 0.4475 | 0.3705 |
| 0.2334 | 5.86 | 54700 | 0.4415 | 0.3772 |
| 0.2334 | 5.87 | 54800 | 0.4515 | 0.3771 |
| 0.2334 | 5.88 | 54900 | 0.4410 | 0.3677 |
| 0.2259 | 5.89 | 55000 | 0.4555 | 0.3702 |
| 0.2259 | 5.9 | 55100 | 0.4509 | 0.3894 |
| 0.2259 | 5.91 | 55200 | 0.4472 | 0.3692 |
| 0.2259 | 5.92 | 55300 | 0.4438 | 0.3754 |
| 0.2259 | 5.93 | 55400 | 0.4399 | 0.3698 |
| 0.2289 | 5.94 | 55500 | 0.4496 | 0.3753 |
| 0.2289 | 5.95 | 55600 | 0.4506 | 0.3752 |
| 0.2289 | 5.96 | 55700 | 0.4482 | 0.3766 |
| 0.2289 | 5.97 | 55800 | 0.4415 | 0.3772 |
| 0.2289 | 5.98 | 55900 | 0.4447 | 0.3750 |
| 0.2281 | 6.0 | 56000 | 0.4566 | 0.3842 |
| 0.2281 | 6.01 | 56100 | 0.4694 | 0.3774 |
| 0.2281 | 6.02 | 56200 | 0.4454 | 0.3788 |
| 0.2281 | 6.03 | 56300 | 0.4676 | 0.3718 |
| 0.2281 | 6.04 | 56400 | 0.4650 | 0.3751 |
| 0.1979 | 6.05 | 56500 | 0.4601 | 0.3765 |
| 0.1979 | 6.06 | 56600 | 0.4647 | 0.3840 |
| 0.1979 | 6.07 | 56700 | 0.4782 | 0.3756 |
| 0.1979 | 6.08 | 56800 | 0.4709 | 0.3736 |
| 0.1979 | 6.09 | 56900 | 0.4707 | 0.3734 |
| 0.1923 | 6.1 | 57000 | 0.4704 | 0.3751 |
| 0.1923 | 6.11 | 57100 | 0.4542 | 0.3721 |
| 0.1923 | 6.12 | 57200 | 0.4542 | 0.3735 |
| 0.1923 | 6.13 | 57300 | 0.4587 | 0.3804 |
| 0.1923 | 6.15 | 57400 | 0.4428 | 0.3687 |
| 0.2012 | 6.16 | 57500 | 0.4456 | 0.3748 |
| 0.2012 | 6.17 | 57600 | 0.4578 | 0.3762 |
| 0.2012 | 6.18 | 57700 | 0.4699 | 0.3722 |
| 0.2012 | 6.19 | 57800 | 0.4499 | 0.3756 |
| 0.2012 | 6.2 | 57900 | 0.4633 | 0.3680 |
| 0.1951 | 6.21 | 58000 | 0.4548 | 0.3712 |
| 0.1951 | 6.22 | 58100 | 0.4520 | 0.3759 |
| 0.1951 | 6.23 | 58200 | 0.4458 | 0.3616 |
| 0.1951 | 6.24 | 58300 | 0.4307 | 0.3637 |
| 0.1951 | 6.25 | 58400 | 0.4546 | 0.3621 |
| 0.1967 | 6.26 | 58500 | 0.4459 | 0.3623 |
| 0.1967 | 6.27 | 58600 | 0.4535 | 0.3690 |
| 0.1967 | 6.28 | 58700 | 0.4574 | 0.3771 |
| 0.1967 | 6.3 | 58800 | 0.4493 | 0.3744 |
| 0.1967 | 6.31 | 58900 | 0.4494 | 0.3769 |
| 0.1998 | 6.32 | 59000 | 0.4529 | 0.3644 |
| 0.1998 | 6.33 | 59100 | 0.4416 | 0.3662 |
| 0.1998 | 6.34 | 59200 | 0.4468 | 0.3785 |
| 0.1998 | 6.35 | 59300 | 0.4377 | 0.3664 |
| 0.1998 | 6.36 | 59400 | 0.4647 | 0.3755 |
| 0.2009 | 6.37 | 59500 | 0.4700 | 0.3824 |
| 0.2009 | 6.38 | 59600 | 0.4488 | 0.3685 |
| 0.2009 | 6.39 | 59700 | 0.4649 | 0.3804 |
| 0.2009 | 6.4 | 59800 | 0.4389 | 0.3689 |
| 0.2009 | 6.41 | 59900 | 0.4456 | 0.3531 |
| 0.2007 | 6.42 | 60000 | 0.4572 | 0.3658 |
| 0.2007 | 6.43 | 60100 | 0.4464 | 0.3669 |
| 0.2007 | 6.45 | 60200 | 0.4666 | 0.3711 |
| 0.2007 | 6.46 | 60300 | 0.4399 | 0.3660 |
| 0.2007 | 6.47 | 60400 | 0.4445 | 0.3631 |
| 0.2005 | 6.48 | 60500 | 0.4450 | 0.3621 |
| 0.2005 | 6.49 | 60600 | 0.4346 | 0.3571 |
| 0.2005 | 6.5 | 60700 | 0.4358 | 0.3581 |
| 0.2005 | 6.51 | 60800 | 0.4344 | 0.3646 |
| 0.2005 | 6.52 | 60900 | 0.4377 | 0.3621 |
| 0.2038 | 6.53 | 61000 | 0.4262 | 0.3570 |
| 0.2038 | 6.54 | 61100 | 0.4269 | 0.3614 |
| 0.2038 | 6.55 | 61200 | 0.4297 | 0.3592 |
| 0.2038 | 6.56 | 61300 | 0.4433 | 0.3682 |
| 0.2038 | 6.57 | 61400 | 0.4474 | 0.3644 |
| 0.199 | 6.58 | 61500 | 0.4464 | 0.3678 |
| 0.199 | 6.6 | 61600 | 0.4397 | 0.3562 |
| 0.199 | 6.61 | 61700 | 0.4415 | 0.3612 |
| 0.199 | 6.62 | 61800 | 0.4362 | 0.3601 |
| 0.199 | 6.63 | 61900 | 0.4442 | 0.3623 |
| 0.1995 | 6.64 | 62000 | 0.4558 | 0.3662 |
| 0.1995 | 6.65 | 62100 | 0.4477 | 0.3647 |
| 0.1995 | 6.66 | 62200 | 0.4542 | 0.3699 |
| 0.1995 | 6.67 | 62300 | 0.4411 | 0.3632 |
| 0.1995 | 6.68 | 62400 | 0.4408 | 0.3658 |
| 0.2014 | 6.69 | 62500 | 0.4426 | 0.3691 |
| 0.2014 | 6.7 | 62600 | 0.4246 | 0.3645 |
| 0.2014 | 6.71 | 62700 | 0.4466 | 0.3676 |
| 0.2014 | 6.72 | 62800 | 0.4493 | 0.3566 |
| 0.2014 | 6.73 | 62900 | 0.4336 | 0.3621 |
| 0.2015 | 6.75 | 63000 | 0.4367 | 0.3604 |
| 0.2015 | 6.76 | 63100 | 0.4424 | 0.3754 |
| 0.2015 | 6.77 | 63200 | 0.4679 | 0.3733 |
| 0.2015 | 6.78 | 63300 | 0.4483 | 0.3752 |
| 0.2015 | 6.79 | 63400 | 0.4746 | 0.3822 |
| 0.2048 | 6.8 | 63500 | 0.4340 | 0.3731 |
| 0.2048 | 6.81 | 63600 | 0.4346 | 0.3631 |
| 0.2048 | 6.82 | 63700 | 0.4525 | 0.3680 |
| 0.2048 | 6.83 | 63800 | 0.4360 | 0.3641 |
| 0.2048 | 6.84 | 63900 | 0.4299 | 0.3558 |
| 0.2017 | 6.85 | 64000 | 0.4370 | 0.3533 |
| 0.2017 | 6.86 | 64100 | 0.4293 | 0.3617 |
| 0.2017 | 6.87 | 64200 | 0.4431 | 0.3660 |
| 0.2017 | 6.88 | 64300 | 0.4362 | 0.3688 |
| 0.2017 | 6.9 | 64400 | 0.4507 | 0.3648 |
| 0.2045 | 6.91 | 64500 | 0.4439 | 0.3613 |
| 0.2045 | 6.92 | 64600 | 0.4249 | 0.3493 |
| 0.2045 | 6.93 | 64700 | 0.4362 | 0.3612 |
| 0.2045 | 6.94 | 64800 | 0.4336 | 0.3585 |
| 0.2045 | 6.95 | 64900 | 0.4387 | 0.3568 |
| 0.1977 | 6.96 | 65000 | 0.4313 | 0.3542 |
| 0.1977 | 6.97 | 65100 | 0.4287 | 0.3552 |
| 0.1977 | 6.98 | 65200 | 0.4372 | 0.3586 |
| 0.1977 | 6.99 | 65300 | 0.4378 | 0.3629 |
| 0.1977 | 7.0 | 65400 | 0.4518 | 0.3640 |
| 0.1971 | 7.01 | 65500 | 0.4480 | 0.3557 |
| 0.1971 | 7.02 | 65600 | 0.4530 | 0.3560 |
| 0.1971 | 7.03 | 65700 | 0.4581 | 0.3582 |
| 0.1971 | 7.04 | 65800 | 0.4492 | 0.3543 |
| 0.1971 | 7.06 | 65900 | 0.4448 | 0.3608 |
| 0.1672 | 7.07 | 66000 | 0.4469 | 0.3543 |
| 0.1672 | 7.08 | 66100 | 0.4262 | 0.3488 |
| 0.1672 | 7.09 | 66200 | 0.4289 | 0.3570 |
| 0.1672 | 7.1 | 66300 | 0.4455 | 0.3545 |
| 0.1672 | 7.11 | 66400 | 0.4449 | 0.3563 |
| 0.169 | 7.12 | 66500 | 0.4555 | 0.3565 |
| 0.169 | 7.13 | 66600 | 0.4432 | 0.3656 |
| 0.169 | 7.14 | 66700 | 0.4399 | 0.3610 |
| 0.169 | 7.15 | 66800 | 0.4383 | 0.3554 |
| 0.169 | 7.16 | 66900 | 0.4376 | 0.3536 |
| 0.1724 | 7.17 | 67000 | 0.4383 | 0.3572 |
| 0.1724 | 7.18 | 67100 | 0.4452 | 0.3535 |
| 0.1724 | 7.19 | 67200 | 0.4610 | 0.3668 |
| 0.1724 | 7.21 | 67300 | 0.4534 | 0.3546 |
| 0.1724 | 7.22 | 67400 | 0.4506 | 0.3604 |
| 0.1729 | 7.23 | 67500 | 0.4463 | 0.3507 |
| 0.1729 | 7.24 | 67600 | 0.4440 | 0.3630 |
| 0.1729 | 7.25 | 67700 | 0.4361 | 0.3550 |
| 0.1729 | 7.26 | 67800 | 0.4397 | 0.3643 |
| 0.1729 | 7.27 | 67900 | 0.4328 | 0.3548 |
| 0.1736 | 7.28 | 68000 | 0.4546 | 0.3614 |
| 0.1736 | 7.29 | 68100 | 0.4506 | 0.3558 |
| 0.1736 | 7.3 | 68200 | 0.4361 | 0.3513 |
| 0.1736 | 7.31 | 68300 | 0.4223 | 0.3500 |
| 0.1736 | 7.32 | 68400 | 0.4474 | 0.3497 |
| 0.1733 | 7.33 | 68500 | 0.4303 | 0.3549 |
| 0.1733 | 7.34 | 68600 | 0.4265 | 0.3483 |
| 0.1733 | 7.36 | 68700 | 0.4339 | 0.3558 |
| 0.1733 | 7.37 | 68800 | 0.4266 | 0.3491 |
| 0.1733 | 7.38 | 68900 | 0.4423 | 0.3565 |
| 0.1764 | 7.39 | 69000 | 0.4410 | 0.3554 |
| 0.1764 | 7.4 | 69100 | 0.4482 | 0.3703 |
| 0.1764 | 7.41 | 69200 | 0.4480 | 0.3641 |
| 0.1764 | 7.42 | 69300 | 0.4361 | 0.3500 |
| 0.1764 | 7.43 | 69400 | 0.4399 | 0.3632 |
| 0.1711 | 7.44 | 69500 | 0.4383 | 0.3591 |
| 0.1711 | 7.45 | 69600 | 0.4523 | 0.3636 |
| 0.1711 | 7.46 | 69700 | 0.4388 | 0.3502 |
| 0.1711 | 7.47 | 69800 | 0.4305 | 0.3565 |
| 0.1711 | 7.48 | 69900 | 0.4290 | 0.3538 |
| 0.1748 | 7.49 | 70000 | 0.4359 | 0.3511 |
| 0.1748 | 7.51 | 70100 | 0.4315 | 0.3460 |
| 0.1748 | 7.52 | 70200 | 0.4268 | 0.3555 |
| 0.1748 | 7.53 | 70300 | 0.4267 | 0.3455 |
| 0.1748 | 7.54 | 70400 | 0.4359 | 0.3517 |
| 0.1739 | 7.55 | 70500 | 0.4299 | 0.3491 |
| 0.1739 | 7.56 | 70600 | 0.4423 | 0.3409 |
| 0.1739 | 7.57 | 70700 | 0.4251 | 0.3420 |
| 0.1739 | 7.58 | 70800 | 0.4300 | 0.3414 |
| 0.1739 | 7.59 | 70900 | 0.4349 | 0.3422 |
| 0.1763 | 7.6 | 71000 | 0.4328 | 0.3418 |
| 0.1763 | 7.61 | 71100 | 0.4313 | 0.3452 |
| 0.1763 | 7.62 | 71200 | 0.4240 | 0.3534 |
| 0.1763 | 7.63 | 71300 | 0.4274 | 0.3474 |
| 0.1763 | 7.64 | 71400 | 0.4304 | 0.3467 |
| 0.171 | 7.66 | 71500 | 0.4331 | 0.3510 |
| 0.171 | 7.67 | 71600 | 0.4263 | 0.3478 |
| 0.171 | 7.68 | 71700 | 0.4301 | 0.3447 |
| 0.171 | 7.69 | 71800 | 0.4046 | 0.3452 |
| 0.171 | 7.7 | 71900 | 0.4300 | 0.3528 |
| 0.1792 | 7.71 | 72000 | 0.4253 | 0.3492 |
| 0.1792 | 7.72 | 72100 | 0.4296 | 0.3491 |
| 0.1792 | 7.73 | 72200 | 0.4118 | 0.3451 |
| 0.1792 | 7.74 | 72300 | 0.4348 | 0.3345 |
| 0.1792 | 7.75 | 72400 | 0.4283 | 0.3447 |
| 0.1801 | 7.76 | 72500 | 0.4232 | 0.3449 |
| 0.1801 | 7.77 | 72600 | 0.4491 | 0.3486 |
| 0.1801 | 7.78 | 72700 | 0.4261 | 0.3343 |
| 0.1801 | 7.79 | 72800 | 0.4382 | 0.3455 |
| 0.1801 | 7.81 | 72900 | 0.4301 | 0.3415 |
| 0.1731 | 7.82 | 73000 | 0.4236 | 0.3438 |
| 0.1731 | 7.83 | 73100 | 0.4257 | 0.3419 |
| 0.1731 | 7.84 | 73200 | 0.4368 | 0.3410 |
| 0.1731 | 7.85 | 73300 | 0.4207 | 0.3398 |
| 0.1731 | 7.86 | 73400 | 0.4118 | 0.3418 |
| 0.1748 | 7.87 | 73500 | 0.4357 | 0.3429 |
| 0.1748 | 7.88 | 73600 | 0.4277 | 0.3452 |
| 0.1748 | 7.89 | 73700 | 0.4173 | 0.3476 |
| 0.1748 | 7.9 | 73800 | 0.4191 | 0.3478 |
| 0.1748 | 7.91 | 73900 | 0.4197 | 0.3457 |
| 0.1745 | 7.92 | 74000 | 0.4197 | 0.3436 |
| 0.1745 | 7.93 | 74100 | 0.4253 | 0.3512 |
| 0.1745 | 7.94 | 74200 | 0.4217 | 0.3463 |
| 0.1745 | 7.95 | 74300 | 0.4305 | 0.3473 |
| 0.1745 | 7.97 | 74400 | 0.4215 | 0.3507 |
| 0.1743 | 7.98 | 74500 | 0.4127 | 0.3408 |
| 0.1743 | 7.99 | 74600 | 0.4191 | 0.3468 |
| 0.1743 | 8.0 | 74700 | 0.4381 | 0.3491 |
| 0.1743 | 8.01 | 74800 | 0.4510 | 0.3477 |
| 0.1743 | 8.02 | 74900 | 0.4482 | 0.3471 |
| 0.1588 | 8.03 | 75000 | 0.4471 | 0.3430 |
| 0.1588 | 8.04 | 75100 | 0.4296 | 0.3393 |
| 0.1588 | 8.05 | 75200 | 0.4480 | 0.3398 |
| 0.1588 | 8.06 | 75300 | 0.4302 | 0.3452 |
| 0.1588 | 8.07 | 75400 | 0.4410 | 0.3431 |
| 0.144 | 8.08 | 75500 | 0.4263 | 0.3455 |
| 0.144 | 8.09 | 75600 | 0.4523 | 0.3495 |
| 0.144 | 8.1 | 75700 | 0.4455 | 0.3511 |
| 0.144 | 8.12 | 75800 | 0.4379 | 0.3445 |
| 0.144 | 8.13 | 75900 | 0.4418 | 0.3411 |
| 0.1483 | 8.14 | 76000 | 0.4491 | 0.3463 |
| 0.1483 | 8.15 | 76100 | 0.4386 | 0.3467 |
| 0.1483 | 8.16 | 76200 | 0.4327 | 0.3524 |
| 0.1483 | 8.17 | 76300 | 0.4360 | 0.3613 |
| 0.1483 | 8.18 | 76400 | 0.4352 | 0.3498 |
| 0.1541 | 8.19 | 76500 | 0.4376 | 0.3414 |
| 0.1541 | 8.2 | 76600 | 0.4408 | 0.3464 |
| 0.1541 | 8.21 | 76700 | 0.4415 | 0.3445 |
| 0.1541 | 8.22 | 76800 | 0.4455 | 0.3482 |
| 0.1541 | 8.23 | 76900 | 0.4542 | 0.3415 |
| 0.1479 | 8.24 | 77000 | 0.4462 | 0.3426 |
| 0.1479 | 8.25 | 77100 | 0.4460 | 0.3413 |
| 0.1479 | 8.27 | 77200 | 0.4434 | 0.3375 |
| 0.1479 | 8.28 | 77300 | 0.4397 | 0.3473 |
| 0.1479 | 8.29 | 77400 | 0.4379 | 0.3484 |
| 0.1479 | 8.3 | 77500 | 0.4441 | 0.3494 |
| 0.1479 | 8.31 | 77600 | 0.4301 | 0.3466 |
| 0.1479 | 8.32 | 77700 | 0.4420 | 0.3474 |
| 0.1479 | 8.33 | 77800 | 0.4520 | 0.3589 |
| 0.1479 | 8.34 | 77900 | 0.4283 | 0.3482 |
| 0.1531 | 8.35 | 78000 | 0.4325 | 0.3446 |
| 0.1531 | 8.36 | 78100 | 0.4380 | 0.3469 |
| 0.1531 | 8.37 | 78200 | 0.4463 | 0.3503 |
| 0.1531 | 8.38 | 78300 | 0.4479 | 0.3499 |
| 0.1531 | 8.39 | 78400 | 0.4477 | 0.3529 |
| 0.1507 | 8.4 | 78500 | 0.4709 | 0.3551 |
| 0.1507 | 8.42 | 78600 | 0.4533 | 0.3531 |
| 0.1507 | 8.43 | 78700 | 0.4507 | 0.3522 |
| 0.1507 | 8.44 | 78800 | 0.4562 | 0.3583 |
| 0.1507 | 8.45 | 78900 | 0.4421 | 0.3577 |
| 0.1545 | 8.46 | 79000 | 0.4485 | 0.3547 |
| 0.1545 | 8.47 | 79100 | 0.4389 | 0.3465 |
| 0.1545 | 8.48 | 79200 | 0.4397 | 0.3502 |
| 0.1545 | 8.49 | 79300 | 0.4403 | 0.3471 |
| 0.1545 | 8.5 | 79400 | 0.4394 | 0.3482 |
| 0.153 | 8.51 | 79500 | 0.4393 | 0.3474 |
| 0.153 | 8.52 | 79600 | 0.4343 | 0.3495 |
| 0.153 | 8.53 | 79700 | 0.4395 | 0.3539 |
| 0.153 | 8.54 | 79800 | 0.4497 | 0.3535 |
| 0.153 | 8.55 | 79900 | 0.4443 | 0.3540 |
| 0.1558 | 8.57 | 80000 | 0.4495 | 0.3554 |
| 0.1558 | 8.58 | 80100 | 0.4387 | 0.3460 |
| 0.1558 | 8.59 | 80200 | 0.4378 | 0.3520 |
| 0.1558 | 8.6 | 80300 | 0.4446 | 0.3527 |
| 0.1558 | 8.61 | 80400 | 0.4513 | 0.3508 |
| 0.1527 | 8.62 | 80500 | 0.4396 | 0.3537 |
| 0.1527 | 8.63 | 80600 | 0.4405 | 0.3507 |
| 0.1527 | 8.64 | 80700 | 0.4398 | 0.3450 |
| 0.1527 | 8.65 | 80800 | 0.4458 | 0.3508 |
| 0.1527 | 8.66 | 80900 | 0.4380 | 0.3465 |
| 0.1522 | 8.67 | 81000 | 0.4373 | 0.3482 |
| 0.1522 | 8.68 | 81100 | 0.4363 | 0.3410 |
| 0.1522 | 8.69 | 81200 | 0.4290 | 0.3447 |
| 0.1522 | 8.7 | 81300 | 0.4409 | 0.3515 |
| 0.1522 | 8.72 | 81400 | 0.4363 | 0.3433 |
| 0.1502 | 8.73 | 81500 | 0.4313 | 0.3429 |
| 0.1502 | 8.74 | 81600 | 0.4263 | 0.3451 |
| 0.1502 | 8.75 | 81700 | 0.4297 | 0.3452 |
| 0.1502 | 8.76 | 81800 | 0.4449 | 0.3411 |
| 0.1502 | 8.77 | 81900 | 0.4465 | 0.3455 |
| 0.151 | 8.78 | 82000 | 0.4274 | 0.3425 |
| 0.151 | 8.79 | 82100 | 0.4525 | 0.3532 |
| 0.151 | 8.8 | 82200 | 0.4282 | 0.3502 |
| 0.151 | 8.81 | 82300 | 0.4189 | 0.3507 |
| 0.151 | 8.82 | 82400 | 0.4379 | 0.3451 |
| 0.1529 | 8.83 | 82500 | 0.4378 | 0.3419 |
| 0.1529 | 8.84 | 82600 | 0.4283 | 0.3392 |
| 0.1529 | 8.85 | 82700 | 0.4359 | 0.3399 |
| 0.1529 | 8.87 | 82800 | 0.4308 | 0.3358 |
| 0.1529 | 8.88 | 82900 | 0.4296 | 0.3335 |
| 0.151 | 8.89 | 83000 | 0.4387 | 0.3372 |
| 0.151 | 8.9 | 83100 | 0.4335 | 0.3420 |
| 0.151 | 8.91 | 83200 | 0.4329 | 0.3374 |
| 0.151 | 8.92 | 83300 | 0.4353 | 0.3404 |
| 0.151 | 8.93 | 83400 | 0.4384 | 0.3447 |
| 0.1522 | 8.94 | 83500 | 0.4444 | 0.3353 |
| 0.1522 | 8.95 | 83600 | 0.4413 | 0.3481 |
| 0.1522 | 8.96 | 83700 | 0.4247 | 0.3474 |
| 0.1522 | 8.97 | 83800 | 0.4197 | 0.3386 |
| 0.1522 | 8.98 | 83900 | 0.4216 | 0.3384 |
| 0.1511 | 8.99 | 84000 | 0.4159 | 0.3396 |
| 0.1511 | 9.0 | 84100 | 0.4213 | 0.3416 |
| 0.1511 | 9.01 | 84200 | 0.4399 | 0.3379 |
| 0.1511 | 9.03 | 84300 | 0.4318 | 0.3437 |
| 0.1511 | 9.04 | 84400 | 0.4356 | 0.3371 |
| 0.1336 | 9.05 | 84500 | 0.4403 | 0.3373 |
| 0.1336 | 9.06 | 84600 | 0.4545 | 0.3381 |
| 0.1336 | 9.07 | 84700 | 0.4313 | 0.3331 |
| 0.1336 | 9.08 | 84800 | 0.4257 | 0.3360 |
| 0.1336 | 9.09 | 84900 | 0.4285 | 0.3372 |
| 0.1315 | 9.1 | 85000 | 0.4378 | 0.3332 |
| 0.1315 | 9.11 | 85100 | 0.4352 | 0.3282 |
| 0.1315 | 9.12 | 85200 | 0.4360 | 0.3339 |
| 0.1315 | 9.13 | 85300 | 0.4404 | 0.3365 |
| 0.1315 | 9.14 | 85400 | 0.4345 | 0.3356 |
| 0.1272 | 9.15 | 85500 | 0.4468 | 0.3375 |
| 0.1272 | 9.16 | 85600 | 0.4331 | 0.3363 |
| 0.1272 | 9.18 | 85700 | 0.4330 | 0.3309 |
| 0.1272 | 9.19 | 85800 | 0.4424 | 0.3301 |
| 0.1272 | 9.2 | 85900 | 0.4520 | 0.3326 |
| 0.1289 | 9.21 | 86000 | 0.4421 | 0.3326 |
| 0.1289 | 9.22 | 86100 | 0.4480 | 0.3335 |
| 0.1289 | 9.23 | 86200 | 0.4351 | 0.3380 |
| 0.1289 | 9.24 | 86300 | 0.4350 | 0.3427 |
| 0.1289 | 9.25 | 86400 | 0.4362 | 0.3320 |
| 0.1333 | 9.26 | 86500 | 0.4260 | 0.3342 |
| 0.1333 | 9.27 | 86600 | 0.4357 | 0.3360 |
| 0.1333 | 9.28 | 86700 | 0.4505 | 0.3372 |
| 0.1333 | 9.29 | 86800 | 0.4342 | 0.3359 |
| 0.1333 | 9.3 | 86900 | 0.4295 | 0.3367 |
| 0.1318 | 9.31 | 87000 | 0.4320 | 0.3335 |
| 0.1318 | 9.33 | 87100 | 0.4332 | 0.3344 |
| 0.1318 | 9.34 | 87200 | 0.4373 | 0.3330 |
| 0.1318 | 9.35 | 87300 | 0.4490 | 0.3316 |
| 0.1318 | 9.36 | 87400 | 0.4188 | 0.3429 |
| 0.1275 | 9.37 | 87500 | 0.4502 | 0.3383 |
| 0.1275 | 9.38 | 87600 | 0.4463 | 0.3387 |
| 0.1275 | 9.39 | 87700 | 0.4385 | 0.3308 |
| 0.1275 | 9.4 | 87800 | 0.4464 | 0.3414 |
| 0.1275 | 9.41 | 87900 | 0.4563 | 0.3405 |
| 0.1331 | 9.42 | 88000 | 0.4286 | 0.3374 |
| 0.1331 | 9.43 | 88100 | 0.4389 | 0.3352 |
| 0.1331 | 9.44 | 88200 | 0.4301 | 0.3340 |
| 0.1331 | 9.45 | 88300 | 0.4417 | 0.3373 |
| 0.1331 | 9.46 | 88400 | 0.4450 | 0.3425 |
| 0.1266 | 9.48 | 88500 | 0.4456 | 0.3451 |
| 0.1266 | 9.49 | 88600 | 0.4517 | 0.3403 |
| 0.1266 | 9.5 | 88700 | 0.4447 | 0.3419 |
| 0.1266 | 9.51 | 88800 | 0.4486 | 0.3428 |
| 0.1266 | 9.52 | 88900 | 0.4591 | 0.3411 |
| 0.1316 | 9.53 | 89000 | 0.4481 | 0.3387 |
| 0.1316 | 9.54 | 89100 | 0.4308 | 0.3349 |
| 0.1316 | 9.55 | 89200 | 0.4411 | 0.3405 |
| 0.1316 | 9.56 | 89300 | 0.4378 | 0.3390 |
| 0.1316 | 9.57 | 89400 | 0.4448 | 0.3365 |
| 0.1325 | 9.58 | 89500 | 0.4575 | 0.3416 |
| 0.1325 | 9.59 | 89600 | 0.4608 | 0.3422 |
| 0.1325 | 9.6 | 89700 | 0.4396 | 0.3350 |
| 0.1325 | 9.61 | 89800 | 0.4380 | 0.3398 |
| 0.1325 | 9.63 | 89900 | 0.4337 | 0.3388 |
| 0.1324 | 9.64 | 90000 | 0.4376 | 0.3388 |
| 0.1324 | 9.65 | 90100 | 0.4185 | 0.3380 |
| 0.1324 | 9.66 | 90200 | 0.4394 | 0.3384 |
| 0.1324 | 9.67 | 90300 | 0.4472 | 0.3400 |
| 0.1324 | 9.68 | 90400 | 0.4523 | 0.3390 |
| 0.1361 | 9.69 | 90500 | 0.4466 | 0.3389 |
| 0.1361 | 9.7 | 90600 | 0.4414 | 0.3383 |
| 0.1361 | 9.71 | 90700 | 0.4288 | 0.3348 |
| 0.1361 | 9.72 | 90800 | 0.4445 | 0.3374 |
| 0.1361 | 9.73 | 90900 | 0.4252 | 0.3322 |
| 0.1353 | 9.74 | 91000 | 0.4312 | 0.3338 |
| 0.1353 | 9.75 | 91100 | 0.4326 | 0.3319 |
| 0.1353 | 9.76 | 91200 | 0.4212 | 0.3400 |
| 0.1353 | 9.78 | 91300 | 0.4191 | 0.3374 |
| 0.1353 | 9.79 | 91400 | 0.4399 | 0.3332 |
| 0.1308 | 9.8 | 91500 | 0.4340 | 0.3349 |
| 0.1308 | 9.81 | 91600 | 0.4280 | 0.3379 |
| 0.1308 | 9.82 | 91700 | 0.4419 | 0.3376 |
| 0.1308 | 9.83 | 91800 | 0.4309 | 0.3333 |
| 0.1308 | 9.84 | 91900 | 0.4274 | 0.3352 |
| 0.1321 | 9.85 | 92000 | 0.4147 | 0.3337 |
| 0.1321 | 9.86 | 92100 | 0.4252 | 0.3316 |
| 0.1321 | 9.87 | 92200 | 0.4378 | 0.3381 |
| 0.1321 | 9.88 | 92300 | 0.4265 | 0.3355 |
| 0.1321 | 9.89 | 92400 | 0.4247 | 0.3331 |
| 0.1358 | 9.9 | 92500 | 0.4099 | 0.3379 |
| 0.1358 | 9.91 | 92600 | 0.4142 | 0.3356 |
| 0.1358 | 9.93 | 92700 | 0.4220 | 0.3332 |
| 0.1358 | 9.94 | 92800 | 0.4219 | 0.3369 |
| 0.1358 | 9.95 | 92900 | 0.4178 | 0.3332 |
| 0.1331 | 9.96 | 93000 | 0.4305 | 0.3353 |
| 0.1331 | 9.97 | 93100 | 0.4324 | 0.3307 |
| 0.1331 | 9.98 | 93200 | 0.4315 | 0.3344 |
| 0.1331 | 9.99 | 93300 | 0.4212 | 0.3314 |
| 0.1331 | 10.0 | 93400 | 0.4203 | 0.3332 |
| 0.1304 | 10.01 | 93500 | 0.4424 | 0.3351 |
| 0.1304 | 10.02 | 93600 | 0.4474 | 0.3341 |
| 0.1304 | 10.03 | 93700 | 0.4466 | 0.3378 |
| 0.1304 | 10.04 | 93800 | 0.4388 | 0.3327 |
| 0.1304 | 10.05 | 93900 | 0.4312 | 0.3360 |
| 0.1152 | 10.06 | 94000 | 0.4471 | 0.3307 |
| 0.1152 | 10.07 | 94100 | 0.4472 | 0.3316 |
| 0.1152 | 10.09 | 94200 | 0.4462 | 0.3324 |
| 0.1152 | 10.1 | 94300 | 0.4383 | 0.3344 |
| 0.1152 | 10.11 | 94400 | 0.4671 | 0.3365 |
| 0.1097 | 10.12 | 94500 | 0.4596 | 0.3307 |
| 0.1097 | 10.13 | 94600 | 0.4517 | 0.3382 |
| 0.1097 | 10.14 | 94700 | 0.4285 | 0.3380 |
| 0.1097 | 10.15 | 94800 | 0.4628 | 0.3363 |
| 0.1097 | 10.16 | 94900 | 0.4478 | 0.3365 |
| 0.1153 | 10.17 | 95000 | 0.4464 | 0.3346 |
| 0.1153 | 10.18 | 95100 | 0.4432 | 0.3392 |
| 0.1153 | 10.19 | 95200 | 0.4326 | 0.3330 |
| 0.1153 | 10.2 | 95300 | 0.4480 | 0.3327 |
| 0.1153 | 10.21 | 95400 | 0.4436 | 0.3260 |
| 0.1149 | 10.22 | 95500 | 0.4549 | 0.3311 |
| 0.1149 | 10.24 | 95600 | 0.4573 | 0.3353 |
| 0.1149 | 10.25 | 95700 | 0.4373 | 0.3369 |
| 0.1149 | 10.26 | 95800 | 0.4459 | 0.3358 |
| 0.1149 | 10.27 | 95900 | 0.4288 | 0.3270 |
| 0.1169 | 10.28 | 96000 | 0.4474 | 0.3330 |
| 0.1169 | 10.29 | 96100 | 0.4524 | 0.3298 |
| 0.1169 | 10.3 | 96200 | 0.4517 | 0.3258 |
| 0.1169 | 10.31 | 96300 | 0.4366 | 0.3288 |
| 0.1169 | 10.32 | 96400 | 0.4574 | 0.3324 |
| 0.1137 | 10.33 | 96500 | 0.4507 | 0.3343 |
| 0.1137 | 10.34 | 96600 | 0.4414 | 0.3301 |
| 0.1137 | 10.35 | 96700 | 0.4524 | 0.3366 |
| 0.1137 | 10.36 | 96800 | 0.4563 | 0.3435 |
| 0.1137 | 10.37 | 96900 | 0.4315 | 0.3375 |
| 0.1162 | 10.39 | 97000 | 0.4429 | 0.3365 |
| 0.1162 | 10.4 | 97100 | 0.4489 | 0.3380 |
| 0.1162 | 10.41 | 97200 | 0.4352 | 0.3357 |
| 0.1162 | 10.42 | 97300 | 0.4390 | 0.3319 |
| 0.1162 | 10.43 | 97400 | 0.4570 | 0.3303 |
| 0.1151 | 10.44 | 97500 | 0.4692 | 0.3344 |
| 0.1151 | 10.45 | 97600 | 0.4605 | 0.3332 |
| 0.1151 | 10.46 | 97700 | 0.4457 | 0.3238 |
| 0.1151 | 10.47 | 97800 | 0.4298 | 0.3304 |
| 0.1151 | 10.48 | 97900 | 0.4619 | 0.3274 |
| 0.1105 | 10.49 | 98000 | 0.4362 | 0.3244 |
| 0.1105 | 10.5 | 98100 | 0.4568 | 0.3289 |
| 0.1105 | 10.51 | 98200 | 0.4522 | 0.3336 |
| 0.1105 | 10.52 | 98300 | 0.4302 | 0.3257 |
| 0.1105 | 10.54 | 98400 | 0.4505 | 0.3238 |
| 0.1164 | 10.55 | 98500 | 0.4430 | 0.3301 |
| 0.1164 | 10.56 | 98600 | 0.4575 | 0.3283 |
| 0.1164 | 10.57 | 98700 | 0.4447 | 0.3277 |
| 0.1164 | 10.58 | 98800 | 0.4400 | 0.3301 |
| 0.1164 | 10.59 | 98900 | 0.4427 | 0.3288 |
| 0.1113 | 10.6 | 99000 | 0.4538 | 0.3248 |
| 0.1113 | 10.61 | 99100 | 0.4519 | 0.3298 |
| 0.1113 | 10.62 | 99200 | 0.4290 | 0.3249 |
| 0.1113 | 10.63 | 99300 | 0.4501 | 0.3220 |
| 0.1113 | 10.64 | 99400 | 0.4410 | 0.3218 |
| 0.1159 | 10.65 | 99500 | 0.4478 | 0.3211 |
| 0.1159 | 10.66 | 99600 | 0.4462 | 0.3250 |
| 0.1159 | 10.67 | 99700 | 0.4543 | 0.3302 |
| 0.1159 | 10.69 | 99800 | 0.4462 | 0.3301 |
| 0.1159 | 10.7 | 99900 | 0.4468 | 0.3229 |
| 0.1161 | 10.71 | 100000 | 0.4515 | 0.3241 |
| 0.1161 | 10.72 | 100100 | 0.4404 | 0.3276 |
| 0.1161 | 10.73 | 100200 | 0.4439 | 0.3222 |
| 0.1161 | 10.74 | 100300 | 0.4392 | 0.3257 |
| 0.1161 | 10.75 | 100400 | 0.4476 | 0.3314 |
| 0.1199 | 10.76 | 100500 | 0.4493 | 0.3270 |
| 0.1199 | 10.77 | 100600 | 0.4462 | 0.3224 |
| 0.1199 | 10.78 | 100700 | 0.4467 | 0.3311 |
| 0.1199 | 10.79 | 100800 | 0.4198 | 0.3228 |
| 0.1199 | 10.8 | 100900 | 0.4349 | 0.3225 |
| 0.1146 | 10.81 | 101000 | 0.4371 | 0.3272 |
| 0.1146 | 10.82 | 101100 | 0.4525 | 0.3210 |
| 0.1146 | 10.84 | 101200 | 0.4293 | 0.3219 |
| 0.1146 | 10.85 | 101300 | 0.4238 | 0.3216 |
| 0.1146 | 10.86 | 101400 | 0.4377 | 0.3252 |
| 0.118 | 10.87 | 101500 | 0.4371 | 0.3208 |
| 0.118 | 10.88 | 101600 | 0.4216 | 0.3174 |
| 0.118 | 10.89 | 101700 | 0.4312 | 0.3189 |
| 0.118 | 10.9 | 101800 | 0.4317 | 0.3204 |
| 0.118 | 10.91 | 101900 | 0.4303 | 0.3235 |
| 0.114 | 10.92 | 102000 | 0.4416 | 0.3158 |
| 0.114 | 10.93 | 102100 | 0.4240 | 0.3195 |
| 0.114 | 10.94 | 102200 | 0.4340 | 0.3149 |
| 0.114 | 10.95 | 102300 | 0.4311 | 0.3215 |
| 0.114 | 10.96 | 102400 | 0.4261 | 0.3238 |
| 0.1152 | 10.97 | 102500 | 0.4263 | 0.3206 |
| 0.1152 | 10.98 | 102600 | 0.4325 | 0.3294 |
| 0.1152 | 11.0 | 102700 | 0.4327 | 0.3187 |
| 0.1152 | 11.01 | 102800 | 0.4423 | 0.3195 |
| 0.1152 | 11.02 | 102900 | 0.4341 | 0.3277 |
| 0.1084 | 11.03 | 103000 | 0.4232 | 0.3243 |
| 0.1084 | 11.04 | 103100 | 0.4355 | 0.3184 |
| 0.1084 | 11.05 | 103200 | 0.4374 | 0.3274 |
| 0.1084 | 11.06 | 103300 | 0.4484 | 0.3305 |
| 0.1084 | 11.07 | 103400 | 0.4423 | 0.3226 |
| 0.1003 | 11.08 | 103500 | 0.4518 | 0.3224 |
| 0.1003 | 11.09 | 103600 | 0.4518 | 0.3243 |
| 0.1003 | 11.1 | 103700 | 0.4282 | 0.3207 |
| 0.1003 | 11.11 | 103800 | 0.4418 | 0.3220 |
| 0.1003 | 11.12 | 103900 | 0.4411 | 0.3216 |
| 0.1009 | 11.13 | 104000 | 0.4474 | 0.3238 |
| 0.1009 | 11.15 | 104100 | 0.4406 | 0.3245 |
| 0.1009 | 11.16 | 104200 | 0.4384 | 0.3242 |
| 0.1009 | 11.17 | 104300 | 0.4702 | 0.3265 |
| 0.1009 | 11.18 | 104400 | 0.4611 | 0.3266 |
| 0.0992 | 11.19 | 104500 | 0.4425 | 0.3211 |
| 0.0992 | 11.2 | 104600 | 0.4575 | 0.3222 |
| 0.0992 | 11.21 | 104700 | 0.4449 | 0.3208 |
| 0.0992 | 11.22 | 104800 | 0.4715 | 0.3208 |
| 0.0992 | 11.23 | 104900 | 0.4469 | 0.3223 |
| 0.1021 | 11.24 | 105000 | 0.4536 | 0.3225 |
| 0.1021 | 11.25 | 105100 | 0.4629 | 0.3234 |
| 0.1021 | 11.26 | 105200 | 0.4550 | 0.3205 |
| 0.1021 | 11.27 | 105300 | 0.4598 | 0.3213 |
| 0.1021 | 11.28 | 105400 | 0.4522 | 0.3179 |
| 0.1021 | 11.3 | 105500 | 0.4658 | 0.3211 |
| 0.1021 | 11.31 | 105600 | 0.4664 | 0.3196 |
| 0.1021 | 11.32 | 105700 | 0.4736 | 0.3177 |
| 0.1021 | 11.33 | 105800 | 0.4587 | 0.3158 |
| 0.1021 | 11.34 | 105900 | 0.4589 | 0.3194 |
| 0.1025 | 11.35 | 106000 | 0.4692 | 0.3214 |
| 0.1025 | 11.36 | 106100 | 0.4382 | 0.3181 |
| 0.1025 | 11.37 | 106200 | 0.4556 | 0.3185 |
| 0.1025 | 11.38 | 106300 | 0.4445 | 0.3191 |
| 0.1025 | 11.39 | 106400 | 0.4379 | 0.3163 |
| 0.104 | 11.4 | 106500 | 0.4454 | 0.3220 |
| 0.104 | 11.41 | 106600 | 0.4463 | 0.3201 |
| 0.104 | 11.42 | 106700 | 0.4550 | 0.3173 |
| 0.104 | 11.43 | 106800 | 0.4404 | 0.3168 |
| 0.104 | 11.45 | 106900 | 0.4569 | 0.3170 |
| 0.1016 | 11.46 | 107000 | 0.4529 | 0.3168 |
| 0.1016 | 11.47 | 107100 | 0.4587 | 0.3173 |
| 0.1016 | 11.48 | 107200 | 0.4505 | 0.3172 |
| 0.1016 | 11.49 | 107300 | 0.4489 | 0.3159 |
| 0.1016 | 11.5 | 107400 | 0.4528 | 0.3130 |
| 0.1001 | 11.51 | 107500 | 0.4473 | 0.3181 |
| 0.1001 | 11.52 | 107600 | 0.4434 | 0.3176 |
| 0.1001 | 11.53 | 107700 | 0.4597 | 0.3186 |
| 0.1001 | 11.54 | 107800 | 0.4351 | 0.3159 |
| 0.1001 | 11.55 | 107900 | 0.4471 | 0.3185 |
| 0.1005 | 11.56 | 108000 | 0.4457 | 0.3191 |
| 0.1005 | 11.57 | 108100 | 0.4544 | 0.3293 |
| 0.1005 | 11.58 | 108200 | 0.4436 | 0.3221 |
| 0.1005 | 11.6 | 108300 | 0.4642 | 0.3270 |
| 0.1005 | 11.61 | 108400 | 0.4474 | 0.3270 |
| 0.1031 | 11.62 | 108500 | 0.4458 | 0.3196 |
| 0.1031 | 11.63 | 108600 | 0.4723 | 0.3205 |
| 0.1031 | 11.64 | 108700 | 0.4507 | 0.3226 |
| 0.1031 | 11.65 | 108800 | 0.4424 | 0.3213 |
| 0.1031 | 11.66 | 108900 | 0.4511 | 0.3213 |
| 0.1014 | 11.67 | 109000 | 0.4422 | 0.3205 |
| 0.1014 | 11.68 | 109100 | 0.4498 | 0.3180 |
| 0.1014 | 11.69 | 109200 | 0.4303 | 0.3167 |
| 0.1014 | 11.7 | 109300 | 0.4483 | 0.3108 |
| 0.1014 | 11.71 | 109400 | 0.4548 | 0.3169 |
| 0.0981 | 11.72 | 109500 | 0.4406 | 0.3122 |
| 0.0981 | 11.73 | 109600 | 0.4293 | 0.3114 |
| 0.0981 | 11.75 | 109700 | 0.4369 | 0.3159 |
| 0.0981 | 11.76 | 109800 | 0.4364 | 0.3164 |
| 0.0981 | 11.77 | 109900 | 0.4358 | 0.3189 |
| 0.1023 | 11.78 | 110000 | 0.4281 | 0.3183 |
| 0.1023 | 11.79 | 110100 | 0.4404 | 0.3159 |
| 0.1023 | 11.8 | 110200 | 0.4471 | 0.3135 |
| 0.1023 | 11.81 | 110300 | 0.4498 | 0.3201 |
| 0.1023 | 11.82 | 110400 | 0.4527 | 0.3161 |
| 0.0988 | 11.83 | 110500 | 0.4440 | 0.3173 |
| 0.0988 | 11.84 | 110600 | 0.4356 | 0.3136 |
| 0.0988 | 11.85 | 110700 | 0.4308 | 0.3135 |
| 0.0988 | 11.86 | 110800 | 0.4294 | 0.3192 |
| 0.0988 | 11.87 | 110900 | 0.4241 | 0.3168 |
| 0.1022 | 11.88 | 111000 | 0.4420 | 0.3157 |
| 0.1022 | 11.9 | 111100 | 0.4313 | 0.3125 |
| 0.1022 | 11.91 | 111200 | 0.4213 | 0.3168 |
| 0.1022 | 11.92 | 111300 | 0.4352 | 0.3135 |
| 0.1022 | 11.93 | 111400 | 0.4297 | 0.3116 |
| 0.1032 | 11.94 | 111500 | 0.4218 | 0.3137 |
| 0.1032 | 11.95 | 111600 | 0.4334 | 0.3123 |
| 0.1032 | 11.96 | 111700 | 0.4373 | 0.3175 |
| 0.1032 | 11.97 | 111800 | 0.4299 | 0.3160 |
| 0.1032 | 11.98 | 111900 | 0.4326 | 0.3189 |
| 0.0969 | 11.99 | 112000 | 0.4208 | 0.3186 |
| 0.0969 | 12.0 | 112100 | 0.4385 | 0.3169 |
| 0.0969 | 12.01 | 112200 | 0.4453 | 0.3156 |
| 0.0969 | 12.02 | 112300 | 0.4596 | 0.3133 |
| 0.0969 | 12.03 | 112400 | 0.4509 | 0.3093 |
| 0.0901 | 12.04 | 112500 | 0.4535 | 0.3138 |
| 0.0901 | 12.06 | 112600 | 0.4371 | 0.3144 |
| 0.0901 | 12.07 | 112700 | 0.4499 | 0.3154 |
| 0.0901 | 12.08 | 112800 | 0.4615 | 0.3198 |
| 0.0901 | 12.09 | 112900 | 0.4523 | 0.3177 |
| 0.0889 | 12.1 | 113000 | 0.4412 | 0.3130 |
| 0.0889 | 12.11 | 113100 | 0.4471 | 0.3181 |
| 0.0889 | 12.12 | 113200 | 0.4530 | 0.3169 |
| 0.0889 | 12.13 | 113300 | 0.4670 | 0.3149 |
| 0.0889 | 12.14 | 113400 | 0.4594 | 0.3141 |
| 0.0917 | 12.15 | 113500 | 0.4623 | 0.3127 |
| 0.0917 | 12.16 | 113600 | 0.4460 | 0.3133 |
| 0.0917 | 12.17 | 113700 | 0.4512 | 0.3191 |
| 0.0917 | 12.18 | 113800 | 0.4681 | 0.3136 |
| 0.0917 | 12.19 | 113900 | 0.4564 | 0.3129 |
| 0.0906 | 12.21 | 114000 | 0.4482 | 0.3107 |
| 0.0906 | 12.22 | 114100 | 0.4595 | 0.3133 |
| 0.0906 | 12.23 | 114200 | 0.4510 | 0.3118 |
| 0.0906 | 12.24 | 114300 | 0.4472 | 0.3131 |
| 0.0906 | 12.25 | 114400 | 0.4499 | 0.3130 |
| 0.0918 | 12.26 | 114500 | 0.4503 | 0.3138 |
| 0.0918 | 12.27 | 114600 | 0.4518 | 0.3135 |
| 0.0918 | 12.28 | 114700 | 0.4493 | 0.3114 |
| 0.0918 | 12.29 | 114800 | 0.4574 | 0.3133 |
| 0.0918 | 12.3 | 114900 | 0.4683 | 0.3200 |
| 0.0869 | 12.31 | 115000 | 0.4608 | 0.3165 |
| 0.0869 | 12.32 | 115100 | 0.4618 | 0.3183 |
| 0.0869 | 12.33 | 115200 | 0.4689 | 0.3173 |
| 0.0869 | 12.34 | 115300 | 0.4681 | 0.3224 |
| 0.0869 | 12.36 | 115400 | 0.4576 | 0.3231 |
| 0.0885 | 12.37 | 115500 | 0.4831 | 0.3176 |
| 0.0885 | 12.38 | 115600 | 0.4602 | 0.3181 |
| 0.0885 | 12.39 | 115700 | 0.4493 | 0.3168 |
| 0.0885 | 12.4 | 115800 | 0.4564 | 0.3149 |
| 0.0885 | 12.41 | 115900 | 0.4585 | 0.3158 |
| 0.091 | 12.42 | 116000 | 0.4713 | 0.3193 |
| 0.091 | 12.43 | 116100 | 0.4581 | 0.3139 |
| 0.091 | 12.44 | 116200 | 0.4637 | 0.3131 |
| 0.091 | 12.45 | 116300 | 0.4572 | 0.3124 |
| 0.091 | 12.46 | 116400 | 0.4489 | 0.3163 |
| 0.0886 | 12.47 | 116500 | 0.4679 | 0.3159 |
| 0.0886 | 12.48 | 116600 | 0.4712 | 0.3151 |
| 0.0886 | 12.49 | 116700 | 0.4750 | 0.3186 |
| 0.0886 | 12.51 | 116800 | 0.4673 | 0.3176 |
| 0.0886 | 12.52 | 116900 | 0.4601 | 0.3113 |
| 0.0917 | 12.53 | 117000 | 0.4341 | 0.3125 |
| 0.0917 | 12.54 | 117100 | 0.4462 | 0.3077 |
| 0.0917 | 12.55 | 117200 | 0.4502 | 0.3099 |
| 0.0917 | 12.56 | 117300 | 0.4482 | 0.3116 |
| 0.0917 | 12.57 | 117400 | 0.4459 | 0.3131 |
| 0.0881 | 12.58 | 117500 | 0.4464 | 0.3122 |
| 0.0881 | 12.59 | 117600 | 0.4471 | 0.3125 |
| 0.0881 | 12.6 | 117700 | 0.4319 | 0.3122 |
| 0.0881 | 12.61 | 117800 | 0.4421 | 0.3103 |
| 0.0881 | 12.62 | 117900 | 0.4326 | 0.3108 |
| 0.0913 | 12.63 | 118000 | 0.4414 | 0.3068 |
| 0.0913 | 12.64 | 118100 | 0.4421 | 0.3083 |
| 0.0913 | 12.66 | 118200 | 0.4449 | 0.3103 |
| 0.0913 | 12.67 | 118300 | 0.4380 | 0.3128 |
| 0.0913 | 12.68 | 118400 | 0.4390 | 0.3136 |
| 0.0921 | 12.69 | 118500 | 0.4452 | 0.3104 |
| 0.0921 | 12.7 | 118600 | 0.4378 | 0.3122 |
| 0.0921 | 12.71 | 118700 | 0.4459 | 0.3080 |
| 0.0921 | 12.72 | 118800 | 0.4369 | 0.3051 |
| 0.0921 | 12.73 | 118900 | 0.4474 | 0.3076 |
| 0.0886 | 12.74 | 119000 | 0.4508 | 0.3066 |
| 0.0886 | 12.75 | 119100 | 0.4456 | 0.3097 |
| 0.0886 | 12.76 | 119200 | 0.4503 | 0.3078 |
| 0.0886 | 12.77 | 119300 | 0.4460 | 0.3081 |
| 0.0886 | 12.78 | 119400 | 0.4404 | 0.3080 |
| 0.0897 | 12.79 | 119500 | 0.4351 | 0.3100 |
| 0.0897 | 12.81 | 119600 | 0.4446 | 0.3120 |
| 0.0897 | 12.82 | 119700 | 0.4407 | 0.3098 |
| 0.0897 | 12.83 | 119800 | 0.4406 | 0.3084 |
| 0.0897 | 12.84 | 119900 | 0.4492 | 0.3067 |
| 0.09 | 12.85 | 120000 | 0.4546 | 0.3098 |
| 0.09 | 12.86 | 120100 | 0.4547 | 0.3074 |
| 0.09 | 12.87 | 120200 | 0.4517 | 0.3111 |
| 0.09 | 12.88 | 120300 | 0.4320 | 0.3064 |
| 0.09 | 12.89 | 120400 | 0.4294 | 0.3072 |
| 0.0898 | 12.9 | 120500 | 0.4412 | 0.3050 |
| 0.0898 | 12.91 | 120600 | 0.4254 | 0.3074 |
| 0.0898 | 12.92 | 120700 | 0.4409 | 0.3071 |
| 0.0898 | 12.93 | 120800 | 0.4362 | 0.3071 |
| 0.0898 | 12.94 | 120900 | 0.4579 | 0.3090 |
| 0.0892 | 12.95 | 121000 | 0.4492 | 0.3059 |
| 0.0892 | 12.97 | 121100 | 0.4404 | 0.3105 |
| 0.0892 | 12.98 | 121200 | 0.4365 | 0.3066 |
| 0.0892 | 12.99 | 121300 | 0.4368 | 0.3048 |
| 0.0892 | 13.0 | 121400 | 0.4410 | 0.3033 |
| 0.085 | 13.01 | 121500 | 0.4450 | 0.3047 |
| 0.085 | 13.02 | 121600 | 0.4633 | 0.3013 |
| 0.085 | 13.03 | 121700 | 0.4600 | 0.3054 |
| 0.085 | 13.04 | 121800 | 0.4541 | 0.3047 |
| 0.085 | 13.05 | 121900 | 0.4546 | 0.3058 |
| 0.0791 | 13.06 | 122000 | 0.4536 | 0.3045 |
| 0.0791 | 13.07 | 122100 | 0.4589 | 0.3066 |
| 0.0791 | 13.08 | 122200 | 0.4581 | 0.3057 |
| 0.0791 | 13.09 | 122300 | 0.4546 | 0.3048 |
| 0.0791 | 13.1 | 122400 | 0.4673 | 0.3006 |
| 0.0789 | 13.12 | 122500 | 0.4551 | 0.3019 |
| 0.0789 | 13.13 | 122600 | 0.4467 | 0.3025 |
| 0.0789 | 13.14 | 122700 | 0.4593 | 0.3015 |
| 0.0789 | 13.15 | 122800 | 0.4598 | 0.3037 |
| 0.0789 | 13.16 | 122900 | 0.4532 | 0.3038 |
| 0.077 | 13.17 | 123000 | 0.4607 | 0.3015 |
| 0.077 | 13.18 | 123100 | 0.4385 | 0.3005 |
| 0.077 | 13.19 | 123200 | 0.4590 | 0.3041 |
| 0.077 | 13.2 | 123300 | 0.4359 | 0.3047 |
| 0.077 | 13.21 | 123400 | 0.4458 | 0.3039 |
| 0.0771 | 13.22 | 123500 | 0.4506 | 0.3075 |
| 0.0771 | 13.23 | 123600 | 0.4457 | 0.3079 |
| 0.0771 | 13.24 | 123700 | 0.4448 | 0.3048 |
| 0.0771 | 13.25 | 123800 | 0.4398 | 0.3036 |
| 0.0771 | 13.27 | 123900 | 0.4510 | 0.3055 |
| 0.0804 | 13.28 | 124000 | 0.4507 | 0.3059 |
| 0.0804 | 13.29 | 124100 | 0.4544 | 0.3076 |
| 0.0804 | 13.3 | 124200 | 0.4534 | 0.3073 |
| 0.0804 | 13.31 | 124300 | 0.4441 | 0.3061 |
| 0.0804 | 13.32 | 124400 | 0.4391 | 0.3075 |
| 0.0774 | 13.33 | 124500 | 0.4527 | 0.3063 |
| 0.0774 | 13.34 | 124600 | 0.4638 | 0.3057 |
| 0.0774 | 13.35 | 124700 | 0.4541 | 0.3064 |
| 0.0774 | 13.36 | 124800 | 0.4617 | 0.3078 |
| 0.0774 | 13.37 | 124900 | 0.4584 | 0.3041 |
| 0.0795 | 13.38 | 125000 | 0.4663 | 0.3032 |
| 0.0795 | 13.39 | 125100 | 0.4546 | 0.3025 |
| 0.0795 | 13.4 | 125200 | 0.4616 | 0.3021 |
| 0.0795 | 13.42 | 125300 | 0.4603 | 0.3016 |
| 0.0795 | 13.43 | 125400 | 0.4616 | 0.3040 |
| 0.0791 | 13.44 | 125500 | 0.4548 | 0.3021 |
| 0.0791 | 13.45 | 125600 | 0.4560 | 0.3025 |
| 0.0791 | 13.46 | 125700 | 0.4516 | 0.3037 |
| 0.0791 | 13.47 | 125800 | 0.4500 | 0.3013 |
| 0.0791 | 13.48 | 125900 | 0.4540 | 0.3009 |
| 0.0776 | 13.49 | 126000 | 0.4581 | 0.3026 |
| 0.0776 | 13.5 | 126100 | 0.4598 | 0.3028 |
| 0.0776 | 13.51 | 126200 | 0.4587 | 0.3038 |
| 0.0776 | 13.52 | 126300 | 0.4514 | 0.3024 |
| 0.0776 | 13.53 | 126400 | 0.4495 | 0.3036 |
| 0.0793 | 13.54 | 126500 | 0.4556 | 0.3016 |
| 0.0793 | 13.55 | 126600 | 0.4603 | 0.3025 |
| 0.0793 | 13.57 | 126700 | 0.4496 | 0.2995 |
| 0.0793 | 13.58 | 126800 | 0.4483 | 0.2969 |
| 0.0793 | 13.59 | 126900 | 0.4462 | 0.2980 |
| 0.0816 | 13.6 | 127000 | 0.4521 | 0.2982 |
| 0.0816 | 13.61 | 127100 | 0.4580 | 0.3019 |
| 0.0816 | 13.62 | 127200 | 0.4669 | 0.3009 |
| 0.0816 | 13.63 | 127300 | 0.4513 | 0.3017 |
| 0.0816 | 13.64 | 127400 | 0.4602 | 0.3015 |
| 0.0779 | 13.65 | 127500 | 0.4592 | 0.2998 |
| 0.0779 | 13.66 | 127600 | 0.4700 | 0.2981 |
| 0.0779 | 13.67 | 127700 | 0.4727 | 0.2978 |
| 0.0779 | 13.68 | 127800 | 0.4600 | 0.2983 |
| 0.0779 | 13.69 | 127900 | 0.4472 | 0.2978 |
| 0.0779 | 13.7 | 128000 | 0.4483 | 0.2984 |
| 0.0779 | 13.72 | 128100 | 0.4512 | 0.2968 |
| 0.0779 | 13.73 | 128200 | 0.4549 | 0.2988 |
| 0.0779 | 13.74 | 128300 | 0.4576 | 0.2992 |
| 0.0779 | 13.75 | 128400 | 0.4400 | 0.2974 |
| 0.0793 | 13.76 | 128500 | 0.4433 | 0.3009 |
| 0.0793 | 13.77 | 128600 | 0.4456 | 0.2982 |
| 0.0793 | 13.78 | 128700 | 0.4560 | 0.3019 |
| 0.0793 | 13.79 | 128800 | 0.4551 | 0.3008 |
| 0.0793 | 13.8 | 128900 | 0.4513 | 0.3007 |
| 0.0769 | 13.81 | 129000 | 0.4518 | 0.3008 |
| 0.0769 | 13.82 | 129100 | 0.4567 | 0.2981 |
| 0.0769 | 13.83 | 129200 | 0.4437 | 0.2985 |
| 0.0769 | 13.84 | 129300 | 0.4424 | 0.2970 |
| 0.0769 | 13.85 | 129400 | 0.4423 | 0.3010 |
| 0.0785 | 13.87 | 129500 | 0.4495 | 0.2999 |
| 0.0785 | 13.88 | 129600 | 0.4483 | 0.2975 |
| 0.0785 | 13.89 | 129700 | 0.4485 | 0.2982 |
| 0.0785 | 13.9 | 129800 | 0.4429 | 0.2972 |
| 0.0785 | 13.91 | 129900 | 0.4430 | 0.2958 |
| 0.0792 | 13.92 | 130000 | 0.4495 | 0.2954 |
| 0.0792 | 13.93 | 130100 | 0.4485 | 0.2947 |
| 0.0792 | 13.94 | 130200 | 0.4395 | 0.2972 |
| 0.0792 | 13.95 | 130300 | 0.4379 | 0.2973 |
| 0.0792 | 13.96 | 130400 | 0.4428 | 0.2989 |
| 0.0795 | 13.97 | 130500 | 0.4385 | 0.3000 |
| 0.0795 | 13.98 | 130600 | 0.4490 | 0.2983 |
| 0.0795 | 13.99 | 130700 | 0.4568 | 0.2970 |
| 0.0795 | 14.0 | 130800 | 0.4482 | 0.2963 |
| 0.0795 | 14.01 | 130900 | 0.4479 | 0.2962 |
| 0.075 | 14.03 | 131000 | 0.4565 | 0.2968 |
| 0.075 | 14.04 | 131100 | 0.4623 | 0.2962 |
| 0.075 | 14.05 | 131200 | 0.4617 | 0.2965 |
| 0.075 | 14.06 | 131300 | 0.4687 | 0.2949 |
| 0.075 | 14.07 | 131400 | 0.4718 | 0.2929 |
| 0.0709 | 14.08 | 131500 | 0.4720 | 0.2945 |
| 0.0709 | 14.09 | 131600 | 0.4604 | 0.2953 |
| 0.0709 | 14.1 | 131700 | 0.4655 | 0.2955 |
| 0.0709 | 14.11 | 131800 | 0.4695 | 0.2958 |
| 0.0709 | 14.12 | 131900 | 0.4666 | 0.2945 |
| 0.0705 | 14.13 | 132000 | 0.4605 | 0.2959 |
| 0.0705 | 14.14 | 132100 | 0.4581 | 0.2947 |
| 0.0705 | 14.15 | 132200 | 0.4597 | 0.2948 |
| 0.0705 | 14.16 | 132300 | 0.4612 | 0.2943 |
| 0.0705 | 14.18 | 132400 | 0.4611 | 0.2959 |
| 0.0727 | 14.19 | 132500 | 0.4569 | 0.2958 |
| 0.0727 | 14.2 | 132600 | 0.4556 | 0.2951 |
| 0.0727 | 14.21 | 132700 | 0.4597 | 0.2955 |
| 0.0727 | 14.22 | 132800 | 0.4472 | 0.2935 |
| 0.0727 | 14.23 | 132900 | 0.4573 | 0.2943 |
| 0.0723 | 14.24 | 133000 | 0.4572 | 0.2943 |
| 0.0723 | 14.25 | 133100 | 0.4582 | 0.2956 |
| 0.0723 | 14.26 | 133200 | 0.4599 | 0.2968 |
| 0.0723 | 14.27 | 133300 | 0.4633 | 0.2962 |
| 0.0723 | 14.28 | 133400 | 0.4604 | 0.2972 |
| 0.071 | 14.29 | 133500 | 0.4587 | 0.2971 |
| 0.071 | 14.3 | 133600 | 0.4598 | 0.2973 |
| 0.071 | 14.31 | 133700 | 0.4579 | 0.2976 |
| 0.071 | 14.33 | 133800 | 0.4539 | 0.2969 |
| 0.071 | 14.34 | 133900 | 0.4628 | 0.2961 |
| 0.0703 | 14.35 | 134000 | 0.4627 | 0.2974 |
| 0.0703 | 14.36 | 134100 | 0.4611 | 0.2974 |
| 0.0703 | 14.37 | 134200 | 0.4607 | 0.2977 |
| 0.0703 | 14.38 | 134300 | 0.4638 | 0.2983 |
| 0.0703 | 14.39 | 134400 | 0.4628 | 0.2969 |
| 0.0736 | 14.4 | 134500 | 0.4543 | 0.2965 |
| 0.0736 | 14.41 | 134600 | 0.4585 | 0.2963 |
| 0.0736 | 14.42 | 134700 | 0.4636 | 0.2950 |
| 0.0736 | 14.43 | 134800 | 0.4636 | 0.2964 |
| 0.0736 | 14.44 | 134900 | 0.4630 | 0.2958 |
| 0.0715 | 14.45 | 135000 | 0.4611 | 0.2968 |
| 0.0715 | 14.46 | 135100 | 0.4633 | 0.2966 |
| 0.0715 | 14.48 | 135200 | 0.4664 | 0.2954 |
| 0.0715 | 14.49 | 135300 | 0.4670 | 0.2945 |
| 0.0715 | 14.5 | 135400 | 0.4638 | 0.2961 |
| 0.073 | 14.51 | 135500 | 0.4635 | 0.2965 |
| 0.073 | 14.52 | 135600 | 0.4639 | 0.2956 |
| 0.073 | 14.53 | 135700 | 0.4617 | 0.2948 |
| 0.073 | 14.54 | 135800 | 0.4609 | 0.2933 |
| 0.073 | 14.55 | 135900 | 0.4614 | 0.2947 |
| 0.0717 | 14.56 | 136000 | 0.4567 | 0.2958 |
| 0.0717 | 14.57 | 136100 | 0.4615 | 0.2934 |
| 0.0717 | 14.58 | 136200 | 0.4606 | 0.2929 |
| 0.0717 | 14.59 | 136300 | 0.4652 | 0.2934 |
| 0.0717 | 14.6 | 136400 | 0.4664 | 0.2934 |
| 0.0717 | 14.61 | 136500 | 0.4657 | 0.2923 |
| 0.0717 | 14.63 | 136600 | 0.4633 | 0.2931 |
| 0.0717 | 14.64 | 136700 | 0.4624 | 0.2943 |
| 0.0717 | 14.65 | 136800 | 0.4615 | 0.2949 |
| 0.0717 | 14.66 | 136900 | 0.4619 | 0.2930 |
| 0.0707 | 14.67 | 137000 | 0.4608 | 0.2936 |
| 0.0707 | 14.68 | 137100 | 0.4615 | 0.2945 |
| 0.0707 | 14.69 | 137200 | 0.4605 | 0.2941 |
| 0.0707 | 14.7 | 137300 | 0.4598 | 0.2931 |
| 0.0707 | 14.71 | 137400 | 0.4596 | 0.2943 |
| 0.0694 | 14.72 | 137500 | 0.4624 | 0.2927 |
| 0.0694 | 14.73 | 137600 | 0.4614 | 0.2931 |
| 0.0694 | 14.74 | 137700 | 0.4621 | 0.2924 |
| 0.0694 | 14.75 | 137800 | 0.4589 | 0.2920 |
| 0.0694 | 14.76 | 137900 | 0.4590 | 0.2926 |
| 0.0706 | 14.78 | 138000 | 0.4588 | 0.2931 |
| 0.0706 | 14.79 | 138100 | 0.4583 | 0.2928 |
| 0.0706 | 14.8 | 138200 | 0.4552 | 0.2934 |
| 0.0706 | 14.81 | 138300 | 0.4551 | 0.2923 |
| 0.0706 | 14.82 | 138400 | 0.4555 | 0.2927 |
| 0.0717 | 14.83 | 138500 | 0.4547 | 0.2930 |
| 0.0717 | 14.84 | 138600 | 0.4546 | 0.2930 |
| 0.0717 | 14.85 | 138700 | 0.4553 | 0.2934 |
| 0.0717 | 14.86 | 138800 | 0.4554 | 0.2924 |
| 0.0717 | 14.87 | 138900 | 0.4573 | 0.2924 |
| 0.0722 | 14.88 | 139000 | 0.4582 | 0.2927 |
| 0.0722 | 14.89 | 139100 | 0.4586 | 0.2926 |
| 0.0722 | 14.9 | 139200 | 0.4570 | 0.2926 |
| 0.0722 | 14.91 | 139300 | 0.4571 | 0.2923 |
| 0.0722 | 14.93 | 139400 | 0.4564 | 0.2925 |
| 0.0698 | 14.94 | 139500 | 0.4573 | 0.2927 |
| 0.0698 | 14.95 | 139600 | 0.4574 | 0.2927 |
| 0.0698 | 14.96 | 139700 | 0.4573 | 0.2927 |
| 0.0698 | 14.97 | 139800 | 0.4576 | 0.2921 |
| 0.0698 | 14.98 | 139900 | 0.4578 | 0.2923 |
| 0.0705 | 14.99 | 140000 | 0.4579 | 0.2928 |
| 0.0705 | 15.0 | 140100 | 0.4578 | 0.2927 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hyesunyun/NonsenseUpdateDiffStringBart
|
hyesunyun
| 2022-02-08T04:10:12Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"diff generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- summarization
- diff generation
datasets:
- nonsense corpus
metrics:
- rouge
---
Hello! This is the pretrained BART model. The dataset used for pretraining is a nonsense summary corpus, with the output expressed as a diff.
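A minimal usage sketch with the `transformers` Auto classes follows; the input sentence is made up for illustration, and `max_length` is an arbitrary choice rather than a documented setting of this model:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "hyesunyun/NonsenseUpdateDiffStringBart"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; the model was pretrained to emit a diff-style summary.
text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```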
|
shields/wav2vec2-base-dementiabank
|
shields
| 2022-02-08T02:53:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-dementiabank
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-dementiabank
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unnamed dataset (the Trainer recorded no dataset name).
It achieves the following results on the evaluation set:
- eval_loss: 11.0473
- eval_wer: 1.0
- eval_runtime: 3.3353
- eval_samples_per_second: 2.399
- eval_steps_per_second: 0.3
- epoch: 3.12
- step: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
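In the meantime, here is a minimal inference sketch, assuming a hypothetical 16 kHz audio file; this is the generic wav2vec2 CTC recipe, not an official usage guide for this checkpoint:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("shields/wav2vec2-base-dementiabank")
model = Wav2Vec2ForCTC.from_pretrained("shields/wav2vec2-base-dementiabank")

# "sample.wav" is a placeholder path; wav2vec2-base expects 16 kHz mono input.
speech, sampling_rate = torchaudio.load("sample.wav")
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```

Note that the evaluation WER above is 1.0, so transcriptions from this checkpoint should not be expected to be usable as-is.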
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after this list):
- learning_rate: 0.5
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
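A sketch of how these values map onto `TrainingArguments`; the `output_dir` is hypothetical, the Adam betas and epsilon listed above are the library defaults (so they are not set explicitly), and `fp16=True` corresponds to Native AMP:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-dementiabank",  # hypothetical output path
    learning_rate=0.5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=5,
    fp16=True,  # Native AMP mixed precision
)
```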
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
softcatala/wav2vec2-large-100k-voxpopuli-catala
|
softcatala
| 2022-02-08T02:20:32Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"speech-to-text",
"ca",
"dataset:common_voice",
"dataset:parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ca
datasets:
- common_voice
- parlament_parla
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- speech-to-text
license: apache-2.0
model-index:
- name: Catalan VoxPopuli Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice ca
type: common_voice
args: ca
- name: ParlamentParla
url: https://www.openslr.org/59/
metrics:
- name: Test WER
type: wer
value: 5.98
- name: Google Crowdsourced Corpus WER
type: wer
value: 12.14
- name: Audiobook “La llegenda de Sant Jordi” WER
type: wer
value: 12.02
---
# Wav2Vec2-Large-100k-VoxPopuli-Català
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on the Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.
**Attention:** The train/dev/test split used does not fully match the CommonVoice 6.1 dataset. A custom split combining the CommonVoice and ParlamentParla datasets was used; it can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test set will produce a biased WER, as 1144 audio files of that set were used in training/evaluation of this model.
WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) which was not seen by the model during training/evaluation.
You can find training and evaluation scripts in the github repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala)
When using this model, make sure that your speech input is sampled at 16kHz.
## Results
Word error rate was evaluated on the following datasets unseen by the model:
| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) | 5.98% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.14% |
| Audiobook “La llegenda de Sant Jordi” | 12.02% |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
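To turn the predictions above into the reported metric, a rough sketch using the [jiwer](https://github.com/jitsi/jiwer) package (not mentioned in this card) could look like this:
```python
from jiwer import wer

# Reuses `processor`, `predicted_ids` and `test_dataset` from the snippet
# above; extend the slice to the full test set for a faithful number.
predictions = processor.batch_decode(predicted_ids)
references = test_dataset["sentence"][:2]

print(f"WER: {wer(references, predictions):.2%}")
```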
|
jgammack/distilbert-base-uncased-squad
|
jgammack
| 2022-02-08T01:36:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the SQuAD dataset.
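A minimal usage sketch with the question-answering pipeline (not part of the auto-generated card; the example question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jgammack/distilbert-base-uncased-squad")

result = qa(
    question="Where were the first modern Olympic Games held?",
    context="The first modern Olympic Games were held in Athens in 1896.",
)
print(result["answer"], result["score"])
```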
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ccoreilly/wav2vec2-large-100k-voxpopuli-catala
|
ccoreilly
| 2022-02-08T00:59:52Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"speech-to-text",
"ca",
"dataset:common_voice",
"dataset:parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ca
datasets:
- common_voice
- parlament_parla
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- speech-to-text
license: apache-2.0
model-index:
- name: Catalan VoxPopuli Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice ca
type: common_voice
args: ca
- name: ParlamentParla
url: https://www.openslr.org/59/
metrics:
- name: Test WER
type: wer
value: 5.98
- name: Google Crowdsourced Corpus WER
type: wer
value: 12.14
- name: Audiobook “La llegenda de Sant Jordi” WER
type: wer
value: 12.02
---
# Wav2Vec2-Large-100k-VoxPopuli-Català
**⚠️NOTICE⚠️: THIS MODEL HAS BEEN MOVED TO THE FOLLOWING URL:**
https://huggingface.co/softcatala/wav2vec2-large-100k-voxpopuli-catala
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on the Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.
**Attention:** The train/dev/test split used does not fully match the CommonVoice 6.1 dataset. A custom split combining the CommonVoice and ParlamentParla datasets was used; it can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test set will therefore produce a biased WER, as 1144 audio files of that dataset were used in training/evaluating this model.
WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv), which was not seen by the model during training/evaluation.
The training and evaluation scripts are available in the GitHub repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala).
When using this model, make sure that your speech input is sampled at 16kHz.
## Results
Word error rate was evaluated on the following datasets unseen by the model:
| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test-filtered.csv) | 5.98% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.14% |
| Audiobook “La llegenda de Sant Jordi” | 12.02% |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-100k-voxpopuli-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
|
softcatala/wav2vec2-large-xlsr-catala
|
softcatala
| 2022-02-08T00:23:02Z | 82,658 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ca",
"dataset:common_voice",
"dataset:parlament_parla",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ca
datasets:
- common_voice
- parlament_parla
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Catalan XLSR Wav2Vec2 Large
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
datasets:
- name: Common Voice ca
type: common_voice
args: ca
- name: ParlamentParla
url: https://www.openslr.org/59/
metrics:
- name: Test WER
type: wer
value: 6.92
- name: Google Crowdsourced Corpus WER
type: wer
value: 12.99
- name: Audiobook “La llegenda de Sant Jordi” WER
type: wer
value: 13.23
---
# Wav2Vec2-Large-XLSR-Català
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Catalan language using the [Common Voice](https://huggingface.co/datasets/common_voice) and [ParlamentParla](https://www.openslr.org/59/) datasets.
**Attention:** The train/dev/test split used does not fully match the CommonVoice 6.1 dataset. A custom split combining the CommonVoice and ParlamentParla datasets was used; it can be found [here](https://github.com/ccoreilly/wav2vec2-catala). Evaluating on the CV test set will therefore produce a biased WER, as 1144 audio files of that dataset were used in training/evaluating this model.
WER was calculated using this [test.csv](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv), which was not seen by the model during training/evaluation.
The training and evaluation scripts are available in the GitHub repository [ccoreilly/wav2vec2-catala](https://github.com/ccoreilly/wav2vec2-catala).
When using this model, make sure that your speech input is sampled at 16kHz.
## Results
Word error rate was evaluated on the following datasets unseen by the model:
| Dataset | WER |
| ------- | --- |
| [Test split CV+ParlamentParla](https://github.com/ccoreilly/wav2vec2-catala/blob/master/test.csv) | 6.92% |
| [Google Crowdsourced Corpus](https://www.openslr.org/69/) | 12.99% |
| Audiobook “La llegenda de Sant Jordi” | 13.23% |
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
model = Wav2Vec2ForCTC.from_pretrained("ccoreilly/wav2vec2-large-xlsr-catala")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
|
jgammack/MTL-distilbert-base-uncased
|
jgammack
| 2022-02-07T23:23:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MTL-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0874
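A minimal fill-mask usage sketch (not part of the auto-generated card; the example sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jgammack/MTL-distilbert-base-uncased")

# DistilBERT checkpoints use the [MASK] token.
for candidate in fill("The component was attached to the [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```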
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5593 | 1.0 | 99 | 2.3163 |
| 2.4346 | 2.0 | 198 | 2.2918 |
| 2.3377 | 3.0 | 297 | 2.2345 |
| 2.2953 | 4.0 | 396 | 2.1463 |
| 2.2296 | 5.0 | 495 | 2.1761 |
| 2.2235 | 6.0 | 594 | 2.0721 |
| 2.1878 | 7.0 | 693 | 2.1460 |
| 2.1569 | 8.0 | 792 | 2.0856 |
| 2.1455 | 9.0 | 891 | 2.1039 |
| 2.1391 | 10.0 | 990 | 2.1112 |
| 2.1056 | 11.0 | 1089 | 2.0694 |
| 2.1076 | 12.0 | 1188 | 2.0501 |
| 2.0919 | 13.0 | 1287 | 2.0484 |
| 2.0669 | 14.0 | 1386 | 2.0342 |
| 2.0595 | 15.0 | 1485 | 2.0802 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
microsoft/cocolm-large
|
microsoft
| 2022-02-07T22:49:54Z | 9 | 7 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2102.08473",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
This model card contains the COCO-LM model (**large++** version) proposed in [this paper](https://arxiv.org/abs/2102.08473). The official GitHub repository can be found [here](https://github.com/microsoft/COCO-LM).
# Citation
If you find this model useful for your research, please cite the following paper:
```
@inproceedings{meng2021coco,
title={{COCO-LM}: Correcting and contrasting text sequences for language model pretraining},
author={Meng, Yu and Xiong, Chenyan and Bajaj, Payal and Tiwary, Saurabh and Bennett, Paul and Han, Jiawei and Song, Xia},
booktitle={NeurIPS},
year={2021}
}
```
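COCO-LM uses a custom architecture, so the checkpoint is not loadable with a stock `AutoModel` class. A hedged loading sketch, assuming the `cocolm` package from the official GitHub repository is importable; the module and class names are assumptions based on that repository and may differ between versions:
```python
# Assumes the `cocolm` package from https://github.com/microsoft/COCO-LM
# (huggingface/cocolm directory) is on the Python path; module and class
# names are assumptions based on that repository.
from cocolm.modeling_cocolm import COCOLMModel
from cocolm.tokenization_cocolm import COCOLMTokenizer

tokenizer = COCOLMTokenizer.from_pretrained("microsoft/cocolm-large")
model = COCOLMModel.from_pretrained("microsoft/cocolm-large")
# Downstream usage follows the examples in the official repository.
```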
|
jgammack/MTL-roberta-base
|
jgammack
| 2022-02-07T22:45:49Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: MTL-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MTL-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4859
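A minimal fill-mask usage sketch (not part of the auto-generated card); note that RoBERTa checkpoints expect `<mask>` rather than `[MASK]`:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="jgammack/MTL-roberta-base")

# RoBERTa checkpoints use <mask> rather than [MASK].
for candidate in fill("The panel was fastened to the <mask>."):
    print(candidate["token_str"], round(candidate["score"], 3))
```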
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8338 | 1.0 | 98 | 1.6750 |
| 1.7732 | 2.0 | 196 | 1.6229 |
| 1.7208 | 3.0 | 294 | 1.6131 |
| 1.6917 | 4.0 | 392 | 1.5936 |
| 1.6579 | 5.0 | 490 | 1.6183 |
| 1.6246 | 6.0 | 588 | 1.6015 |
| 1.6215 | 7.0 | 686 | 1.5248 |
| 1.5743 | 8.0 | 784 | 1.5454 |
| 1.5621 | 9.0 | 882 | 1.5925 |
| 1.5652 | 10.0 | 980 | 1.5213 |
| 1.5615 | 11.0 | 1078 | 1.4845 |
| 1.5349 | 12.0 | 1176 | 1.5443 |
| 1.5165 | 13.0 | 1274 | 1.5304 |
| 1.5164 | 14.0 | 1372 | 1.4773 |
| 1.5293 | 15.0 | 1470 | 1.5537 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/SAE-roberta-base
|
jgammack
| 2022-02-07T22:14:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: SAE-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAE-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9847 | 1.0 | 79 | 1.8238 |
| 1.9142 | 2.0 | 158 | 1.8299 |
| 1.8613 | 3.0 | 237 | 1.7636 |
| 1.8384 | 4.0 | 316 | 1.8048 |
| 1.8193 | 5.0 | 395 | 1.7734 |
| 1.7985 | 6.0 | 474 | 1.7271 |
| 1.7758 | 7.0 | 553 | 1.8525 |
| 1.7611 | 8.0 | 632 | 1.7716 |
| 1.7599 | 9.0 | 711 | 1.7913 |
| 1.7118 | 10.0 | 790 | 1.7578 |
| 1.7003 | 11.0 | 869 | 1.7598 |
| 1.7072 | 12.0 | 948 | 1.6942 |
| 1.6511 | 13.0 | 1027 | 1.6955 |
| 1.6802 | 14.0 | 1106 | 1.7837 |
| 1.7048 | 15.0 | 1185 | 1.7377 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
robot-test/old-clip-tokenizer
|
robot-test
| 2022-02-07T21:44:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
Old version of the CLIP fast tokenizer.
See [this issue](https://github.com/huggingface/transformers/issues/12648) on the transformers repository for context.
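A loading sketch for comparing the old behaviour against a current tokenizer (the example string is illustrative):
```python
from transformers import CLIPTokenizerFast

# Load the pre-fix fast tokenizer referenced by the issue above.
old_tokenizer = CLIPTokenizerFast.from_pretrained("robot-test/old-clip-tokenizer")
print(old_tokenizer.tokenize("a photo of a cat"))
```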
|