pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---|
null | null | {} | bigscience/tr4c-1B3-rotary-oscar-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr5a-1B3-multilingual-mt5tok-checkpoints | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr5a-1B3-multilingual-mt5tok-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr5b-1B3-multilingual-alpha-checkpoints | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr5b-1B3-multilingual-alpha-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr5c-1B3-multilingual-alpha-alibi-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr6-1B3-prefix-lm-tensorboard | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr6-1B3-prefix-lm-unbiased-loss-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr6b-350M-prefix-lm-tensorboard | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr6c-350M-prefix-lm-reset-attention-mask-tensorboard | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr6d-350M-prefix-lm-pile-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr6e-1B3-pile-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr6f-1B3-oscar-no-loss-on-targets-only-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr6g-1B3-oscar-loss-reweighting-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr7a-1B3-alibi-checkpoints | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr7a-1B3-alibi-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr7b-350M-alibi-tensorboard | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr7b-350M-validation-alibi-tensorboard | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr7c-1B3-alibi-checkpoints | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr7c-1B3-alibi-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr7d-1B3-alibi-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr7d-a100-debug | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8-104B-data | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8-104B-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8b-104B-debug | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8b-104B-debug2 | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8b-104B-debug4 | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8b-104B-debug5 | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8b-104B-debug6 | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8b-104B-debug7 | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8b-104B-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr8b-104B-pile-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr9-1B3-swiglu-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr9b-350M-swiglu-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience/tr9c-1B3-swiglu-pile-logs | null | [
"tensorboard",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-bot/tr5-1B3-multilingual-logs | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-catalogue-data-dev/byte-level-bpe-tokenizer-nfkc-250k | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | You need a custom version of the `tokenizers` library to use this tokenizer.
To install this custom version you can:
```bash
pip install transformers
git clone https://github.com/huggingface/tokenizers.git
cd tokenizers
git checkout bigscience_fork
cd bindings/python
pip install setuptools_rust
pip install -e .
```
and then to load it, do:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bigscience-catalogue-data-dev/byte-level-bpe-tokenizer-no-norm-250k-whitespace-and-eos-regex-alpha-v3-dedup-lines-articles")
``` | {} | bigscience-catalogue-data-dev/byte-level-bpe-tokenizer-no-norm-250k-whitespace-and-eos-regex-alpha-v3-dedup-lines-articles | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | bigscience-catalogue-data-dev/byte-level-bpe-tokenizer-no-norm-250k | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-catalogue-data-dev/test_fix_empty_string_replacement | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-data/tokenizer_alpha_NFKC_250k | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-data/tokenizer_alpha_nfkc_24M | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-data/tokenizer_alpha_weight_NFKC | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-data/tokenizer_equal_NFKC_250k | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-data/tokenizer_equal_nfkc_24M | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-data/tokenizer_equal_weight_NFKC_v1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bigscience-data/tokenizer_v0 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sapbert-from-pubmedbert-squad2
This model is a fine-tuned version of [cambridgeltl/SapBERT-from-PubMedBERT-fulltext](https://huggingface.co/cambridgeltl/SapBERT-from-PubMedBERT-fulltext) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2582
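A minimal inference sketch using the Transformers question-answering pipeline (the example question and context are illustrative, not from the original evaluation):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
qa = pipeline("question-answering", model="bigwiz83/sapbert-from-pubmedbert-squad2")

# Illustrative biomedical question/context pair; SQuAD2-style models
# may also return an empty answer when none is present in the context
result = qa(
    question="What is metformin used to treat?",
    context="Metformin is a first-line medication for the treatment of type 2 diabetes.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```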
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.035 | 1.0 | 8298 | 0.9545 |
| 0.8053 | 2.0 | 16596 | 0.9988 |
| 0.5949 | 3.0 | 24894 | 0.9909 |
| 0.4878 | 4.0 | 33192 | 1.1428 |
| 0.3932 | 5.0 | 41490 | 1.2582 |
### Framework versions
- Transformers 4.7.0
- Pytorch 1.8.0
- Datasets 1.4.1
- Tokenizers 0.10.2
| {"datasets": ["squad_v2"], "model_index": [{"name": "sapbert-from-pubmedbert-squad2", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "args": "squad_v2"}}]}]} | bigwiz83/sapbert-from-pubmedbert-squad2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | biinii/1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | bill/bert_finetuning_test1 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bill/bert_finetuning_test2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | billji/jingli | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | billpr/v1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bilzjkhan/model1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | binarymax/roberta-base-squad2-outdoors | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bingogo/test_torch_model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | test1 | {} | bingzhen/test1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers | {} | binwang/bert-base-nli-stsb | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | binwang/bert-base-nli | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | binwang/bert-base-uncased | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | binwang/bert-large-nli-stsb | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | binwang/bert-large-nli | null | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | binwang/roberta-base | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | This model is a pre-trained **XLNet** model with 12 layers.
It accompanies the paper: SBERT-WK: A Sentence Embedding Method By Dissecting BERT-based Word Models
Project Page: [SBERT-WK](https://github.com/BinWang28/SBERT-WK-Sentence-Embedding)
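As a minimal sketch, the per-layer hidden states that SBERT-WK dissects can be extracted as follows (the example sentence is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("binwang/xlnet-base-cased")
model = AutoModel.from_pretrained("binwang/xlnet-base-cased", output_hidden_states=True)

inputs = tokenizer("This is a test sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Tuple of per-layer token representations (embedding layer + 12 transformer layers);
# SBERT-WK combines these layer-wise representations into sentence embeddings
print(len(outputs.hidden_states), outputs.hidden_states[-1].shape)
```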
| {} | binwang/xlnet-base-cased | null | [
"transformers",
"pytorch",
"safetensors",
"xlnet",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers | [bioformer-8L](https://huggingface.co/bioformers/bioformer-8L) fine-tuned on the [BC2GM](https://doi.org/10.1186/gb-2008-9-s2-s2) dataset for 10 epochs. This fine-tuned model can be used for NER of genes and proteins.
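A minimal NER sketch with the token-classification pipeline (the example sentence and the `aggregation_strategy` setting are illustrative assumptions):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bioformers/bioformer-8L-bc2gm",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Mutations in the BRCA1 gene increase breast cancer risk."))
```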
| {"language": ["en"], "license": "apache-2.0", "pipeline_tag": "token-classification"} | bioformers/bioformer-8L-bc2gm | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | [bioformer-cased-v1.0](https://huggingface.co/bioformers/bioformer-cased-v1.0) fine-tuned on the [MNLI](https://cims.nyu.edu/~sbowman/multinli/) dataset for 2 epochs.
The fine-tuning process was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=16
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.803973
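Since this is an MNLI-style entailment model, one possible use is backing the zero-shot classification pipeline; a hedged sketch (the example text and candidate labels are illustrative, and this assumes the checkpoint's label mapping exposes an entailment class, as the pipeline requires):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="bioformers/bioformer-8L-mnli")
print(classifier(
    "Aspirin reduced the patients' fever within hours.",
    candidate_labels=["treatment outcome", "diagnosis", "adverse event"],
))
```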
## Speed
In our experiments, Bioformer runs inference 3x as fast as BERT-base/BioBERT/PubMedBERT, and 40% faster than DistilBERT.
## More information
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) section. They also uses and recommend the SNLI corpus as 550k examples of auxiliary training data. (source: https://huggingface.co/datasets/glue) | {} | bioformers/bioformer-8L-mnli | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
[bioformer-8L](https://huggingface.co/bioformers/bioformer-8L) fine-tuned on the [NCBI Disease](https://doi.org/10.1016/j.jbi.2013.12.006) dataset for 10 epochs.
This fine-tuned model can be used for NER of diseases.
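A minimal sketch of running the model directly (the example sentence is illustrative; tag names come from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bioformers/bioformer-8L-ncbi-disease")
model = AutoModelForTokenClassification.from_pretrained("bioformers/bioformer-8L-ncbi-disease")

text = "The patient was diagnosed with cystic fibrosis."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each word piece to its predicted tag
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```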
| {"language": ["en"], "license": "apache-2.0"} | bioformers/bioformer-8L-ncbi-disease | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | [bioformer-8L](https://huggingface.co/bioformers/bioformer-8L) fine-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs.
The fine-tuning process was performed on two NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=16
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.883397
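A minimal sketch of scoring a question/sentence pair (the example pair is illustrative; the label order depends on the checkpoint's config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bioformers/bioformer-8L-qnli")
model = AutoModelForSequenceClassification.from_pretrained("bioformers/bioformer-8L-qnli")

question = "What does insulin regulate?"
sentence = "Insulin regulates the amount of glucose in the blood."
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # check model.config.id2label for the entailment / not_entailment order
```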
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark.
(source: https://paperswithcode.com/dataset/qnli)
Original GLUE paper: https://arxiv.org/abs/1804.07461 | {"language": ["en"], "license": "apache-2.0"} | bioformers/bioformer-8L-qnli | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"en",
"arxiv:1804.07461",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
question-answering | transformers | [bioformer-8L](https://huggingface.co/bioformers/bioformer-8L) fine-tuned on the [SQuAD1](https://rajpurkar.github.io/SQuAD-explorer) dataset for 3 epochs.
The fine-tuning process was performed on a single P100 GPU (16GB). The hyperparameters are:
```
max_seq_length=512
per_device_train_batch_size=16
gradient_accumulation_steps=1
total train batch size (w. parallel, distributed & accumulation) = 16
learning_rate=3e-5
num_train_epochs=3
```
## Evaluation results
```
"eval_exact_match": 78.55250709555345
"eval_f1": 85.91482799690257
```
Bioformer's performance is on par with [DistilBERT](https://arxiv.org/pdf/1910.01108.pdf) (EM/F1: 77.7/85.8),
although Bioformer was pretrained only on biomedical texts.
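A minimal extractive-QA sketch that decodes the answer span from the start/end logits (the question and context are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bioformers/bioformer-8L-squad1")
model = AutoModelForQuestionAnswering.from_pretrained("bioformers/bioformer-8L-squad1")

question = "What hormone lowers blood glucose?"
context = "Insulin is a peptide hormone that lowers blood glucose levels."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```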
## Speed
In our experiments, Bioformer runs inference 3x as fast as BERT-base/BioBERT/PubMedBERT, and 40% faster than DistilBERT. | {"language": ["en"], "license": "apache-2.0", "pipeline_tag": "question-answering"} | bioformers/bioformer-8L-squad1 | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"en",
"arxiv:1910.01108",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
**_NOTE: `bioformer-cased-v1.0` has been renamed to `bioformer-8L`. All links to `bioformer-cased-v1.0` will automatically redirect to `bioformer-8L`, including git operations. However, to avoid confusion, we recommend updating any existing local clones to point to the new repository URL._**
Bioformer-8L is a lightweight BERT model for biomedical text mining. Bioformer-8L uses a biomedical vocabulary and is pre-trained from scratch only on biomedical domain corpora. Our experiments show that Bioformer-8L is 3x as fast as BERT-base, and achieves comparable or even better performance than BioBERT/PubMedBERT on downstream NLP tasks.
Bioformer-8L has 8 layers (transformer blocks) with a hidden embedding size of 512, and the number of self-attention heads is 8. Its total number of parameters is 42,820,610.
**The usage of Bioformer-8L is the same as a standard BERT model. The documentation of BERT can be found [here](https://huggingface.co/docs/transformers/model_doc/bert).**
## Vocabulary of Bioformer-8L
Bioformer-8L uses a cased WordPiece vocabulary trained from a biomedical corpus, which included all PubMed abstracts (33 million, as of Feb 1, 2021) and 1 million PMC full-text articles. PMC has 3.6 million articles but we down-sampled them to 1 million such that the total sizes of the PubMed abstracts and the PMC full-text articles were approximately equal. To mitigate the out-of-vocabulary issue and include special symbols (e.g. male and female symbols) in biomedical literature, we trained Bioformer’s vocabulary from the Unicode text of the two resources. The vocabulary size of Bioformer-8L is 32768 (2^15), which is similar to that of the original BERT.
## Pre-training of Bioformer-8L
Bioformer-8L was pre-trained from scratch on the same corpus as the vocabulary (33 million PubMed abstracts + 1 million PMC full-text articles). For the masked language modeling (MLM) objective, we used whole-word masking with a masking rate of 15%. There are debates on whether the next sentence prediction (NSP) objective could improve the performance on downstream tasks. We include it in our pre-training experiment in case the prediction of the next sentence is needed by end-users. Sentence segmentation of all training text was performed using [SciSpacy](https://allenai.github.io/scispacy/).
Pre-training of Bioformer-8L was performed on a single Cloud TPU device (TPUv2, 8 cores, 8GB memory per core). The maximum input sequence length was fixed to 512, and the batch size was set to 256. We pre-trained Bioformer-8L for 2 million steps, which took about 8.3 days.
## Usage
Prerequisites: python3, pytorch, transformers and datasets
We have tested the following commands on Python v3.9.16, PyTorch v1.13.1+cu117, Datasets v2.9.0 and Transformers v4.26.
To install pytorch, please refer to instructions [here](https://pytorch.org/get-started/locally).
To install the `transformers` and `datasets` libraries:
```bash
pip install transformers
pip install datasets
```
### Filling mask
```python
from transformers import pipeline
unmasker8L = pipeline('fill-mask', model='bioformers/bioformer-8L')
unmasker8L("[MASK] refers to a group of diseases that affect how the body uses blood sugar (glucose)")
unmasker16L = pipeline('fill-mask', model='bioformers/bioformer-16L')
unmasker16L("[MASK] refers to a group of diseases that affect how the body uses blood sugar (glucose)")
```
Output of `bioformer-8L`:
```
[{'score': 0.3207533359527588,
'token': 13473,
'token_str': 'Diabetes',
'sequence': 'Diabetes refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.19234347343444824,
'token': 17740,
'token_str': 'Obesity',
'sequence': 'Obesity refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.09200277179479599,
'token': 10778,
'token_str': 'T2DM',
'sequence': 'T2DM refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.08494312316179276,
'token': 2228,
'token_str': 'It',
'sequence': 'It refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.0412776917219162,
'token': 22263,
'token_str':
'Hypertension',
'sequence': 'Hypertension refers to a group of diseases that affect how the body uses blood sugar ( glucose )'}]
```
Output of `bioformer-16L`:
```
[{'score': 0.7262957692146301,
'token': 13473,
'token_str': 'Diabetes',
'sequence': 'Diabetes refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.124954953789711,
'token': 10778,
'token_str': 'T2DM',
'sequence': 'T2DM refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.04062706232070923,
'token': 2228,
'token_str': 'It',
'sequence': 'It refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.022694870829582214,
'token': 17740,
'token_str': 'Obesity',
'sequence': 'Obesity refers to a group of diseases that affect how the body uses blood sugar ( glucose )'},
{'score': 0.009743048809468746,
'token': 13960,
'token_str': 'T2D',
'sequence': 'T2D refers to a group of diseases that affect how the body uses blood sugar ( glucose )'}]
```
## Awards
Bioformer-8L achieved top performance (highest micro-F1 score) in the BioCreative VII COVID-19 multi-label topic classification challenge (https://doi.org/10.1093/database/baac069).
## Links
[Bioformer-16L](https://huggingface.co/bioformers/bioformer-16L)
## Acknowledgment
Training and evaluation of Bioformer-8L is supported by the Google TPU Research Cloud (TRC) program, the Intramural Research Program of the National Library of Medicine (NLM), National Institutes of Health (NIH), and NIH/NLM grants LM012895 and 1K99LM014024-01.
## Questions
If you have any questions, please submit an issue here: https://github.com/WGLab/bioformer/issues
You can also send an email to Li Fang ([email protected], https://fangli80.github.io/).
## Citation
You can cite our preprint on arXiv:
Fang L, Chen Q, Wei C-H, Lu Z, Wang K: Bioformer: an efficient transformer language model for biomedical text mining. arXiv preprint arXiv:2302.01588 (2023). DOI: https://doi.org/10.48550/arXiv.2302.01588
BibTeX format:
```
@ARTICLE{fangli2023bioformer,
author = {{Fang}, Li and {Chen}, Qingyu and {Wei}, Chih-Hsuan and {Lu}, Zhiyong and {Wang}, Kai},
title = "{Bioformer: an efficient transformer language model for biomedical text mining}",
journal = {arXiv preprint arXiv:2302.01588},
year = {2023}
}
``` | {"language": ["en"], "license": "apache-2.0", "pipeline_tag": "fill-mask"} | bioformers/bioformer-8L | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"fill-mask",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers |
# BlueBert-Base, Uncased, PubMed and MIMIC-III
## Model description
A BERT model pre-trained on PubMed abstracts and clinical notes ([MIMIC-III](https://mimic.physionet.org/)).
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
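For use with the Transformers library, a minimal loading sketch (the repository above documents the original workflow; the input text follows the lowercased, pre-tokenized format described under Training procedure):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12")
model = AutoModel.from_pretrained("bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12")

inputs = tokenizer("the patient was admitted with chest pain .", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```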
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-base-uncased
## Training procedure
* lowercasing the text
* removing non-ASCII characters (everything outside `\x00`-`\x7F`)
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

# `value` holds one line of raw text
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making their data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
| {"language": ["en"], "license": "cc0-1.0", "tags": ["bert", "bluebert"], "datasets": ["PubMed", "MIMIC-III"]} | bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"bluebert",
"en",
"dataset:PubMed",
"dataset:MIMIC-III",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers |
# BlueBert-Base, Uncased, PubMed and MIMIC-III
## Model description
A BERT model pre-trained on PubMed abstracts and clinical notes ([MIMIC-III](https://mimic.physionet.org/)).
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing non-ASCII characters (everything outside `\x00`-`\x7F`)
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

# `value` holds one line of raw text
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making their data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
| {"language": ["en"], "license": "cc0-1.0", "tags": ["bert", "bluebert"], "datasets": ["PubMed", "MIMIC-III"]} | bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"bluebert",
"en",
"dataset:PubMed",
"dataset:MIMIC-III",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers |
# BlueBert-Base, Uncased, PubMed
## Model description
A BERT model pre-trained on PubMed abstracts.
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-base-uncased
## Training procedure
* lowercasing the text
* removing non-ASCII characters (everything outside `\x00`-`\x7F`)
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

# `value` holds one line of raw text
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
| {"language": ["en"], "license": "cc0-1.0", "tags": ["bluebert"], "datasets": ["pubmed"]} | bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12 | null | [
"transformers",
"pytorch",
"bluebert",
"en",
"dataset:pubmed",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | transformers |
# BlueBert-Base, Uncased, PubMed
## Model description
A BERT model pre-trained on PubMed abstracts.
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing non-ASCII characters (everything outside `\x00`-`\x7F`)
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

# `value` holds one line of raw text
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making their data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
| {"language": ["en"], "license": "cc0-1.0", "tags": ["bert", "bluebert"], "datasets": ["PubMed"]} | bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"bluebert",
"en",
"dataset:PubMed",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | ## Malayalam news classifier
### Overview
This model is trained on top of [MalayalamBert](https://huggingface.co/eliasedwin7/MalayalamBERT) for the task of classifying Malayalam news headlines. Presently, the following news categories are supported:
* Business
* Sports
* Entertainment
### Dataset
The dataset used for training this model can be found [here](https://www.kaggle.com/disisbig/malyalam-news-dataset).
### Using the model with HF pipeline
```python
from transformers import pipeline
news_headline = "ക്രിപ്റ്റോ ഇടപാടുകളുടെ വിവരങ്ങൾ ആവശ്യപ്പെട്ട് ആദായനികുതി വകുപ്പ് നോട്ടീസയച്ചു"
model = pipeline(task="text-classification", model="bipin/malayalam-news-classifier")
model(news_headline)
# Output
# [{'label': 'business', 'score': 0.9979357123374939}]
```
### Contact
For feedback and questions, feel free to contact via twitter [@bkrish_](https://twitter.com/bkrish_) | {"license": "mit", "tags": ["text-classification", "roberta", "malayalam", "pytorch"], "widget": [{"text": "2032 \u0d12\u0d33\u0d3f\u0d2e\u0d4d\u0d2a\u0d3f\u0d15\u0d4d\u200c\u0d38\u0d3f\u0d28\u0d4d \u0d2c\u0d4d\u0d30\u0d3f\u0d38\u0d4d\u200c\u0d2c\u0d46\u0d2f\u0d4d\u0d28\u0d4d\u200d \u0d35\u0d47\u0d26\u0d3f\u0d2f\u0d3e\u0d15\u0d41\u0d02; \u0d17\u0d46\u0d2f\u0d3f\u0d02\u0d38\u0d3f\u0d28\u0d4d \u0d35\u0d47\u0d26\u0d3f\u0d2f\u0d3e\u0d15\u0d41\u0d28\u0d4d\u0d28 \u0d2e\u0d42\u0d28\u0d4d\u0d28\u0d3e\u0d2e\u0d24\u0d4d\u0d24\u0d46 \u0d13\u0d38\u0d4d\u200c\u0d1f\u0d4d\u0d30\u0d47\u0d32\u0d3f\u0d2f\u0d28\u0d4d\u200d \u0d28\u0d17\u0d30\u0d02"}]} | bipin/malayalam-news-classifier | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"malayalam",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | birdwatcher/DialoGPT-small-KIWI | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers | # Wav2vec 2.0 large VoxRex Swedish (C)
An experiment with language-model (LM) decoding.
**Disclaimer:** This is a work in progress. See [VoxRex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) for more details.
**Update 2022-01-10:** Updated to VoxRex-C version.
Fine-tuned version of KB's [VoxRex large](https://huggingface.co/KBLab/wav2vec2-large-voxrex) model using Swedish radio broadcasts, NST and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **2.5%**. WER for the Common Voice test set is **8.49%** directly and **7.37%** with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
# Performance\*

<center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center>
## Training
This model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>.

## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
| {"language": "sv", "license": "cc0-1.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["common_voice", "NST Swedish ASR Database", "P4"], "metrics": ["wer"], "model-index": [{"name": "Wav2vec 2.0 large VoxRex Swedish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 9.914, "name": "Test WER"}]}]}]} | birgermoell/lm-swedish | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"sv",
"license:cc0-1.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-swedish-wikiann
This model is a fine-tuned version of [nordic-roberta-wiki](https://huggingface.co/flax-community/nordic-roberta-wiki) trained for NER on the wikiann dataset.
eval F1-Score: **83.78**
test F1-Score: **83.76**
## Model Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("birgermoell/ner-swedish-wikiann")
model = AutoModelForTokenClassification.from_pretrained("birgermoell/ner-swedish-wikiann")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jag heter Per och jag jobbar på KTH"
nlp(example)
```
<!--
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.9086903597787154e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
It achieves the following results on the evaluation set:
- Loss: 0.3156
- Precision: 0.8332
- Recall: 0.8424
- F1: 0.8378
- Accuracy: 0.9193
It achieves the following results on the test set:
- Loss: 0.3023
- Precision: 0.8301
- Recall: 0.8452
- F1: 0.8376
- Accuracy: 0.92
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.6.2
- Tokenizers 0.10.2
-->
| {"license": "apache-2.0", "tags": ["token-classification"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "ner-swedish-wikiann", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann"}, "metrics": [{"type": "precision", "value": 0.8331921416757433, "name": "Precision"}, {"type": "recall", "value": 0.84243586083126, "name": "Recall"}, {"type": "f1", "value": 0.8377885044416501, "name": "F1"}, {"type": "accuracy", "value": 0.91930707459758, "name": "Accuracy"}]}]}]} | birgermoell/ner-swedish-wikiann | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers | # Svensk Roberta
## Description
A Swedish RoBERTa model trained on the MC4 dataset. The model's performance has not yet been assessed.
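A minimal fill-mask sketch, assuming the checkpoint ships its masked-LM head (the widget example in the metadata suggests it does):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="birgermoell/roberta-swedish-scandi")
# Widget prompt: "Meningen med livet är <mask>" ("The meaning of life is <mask>")
print(unmasker("Meningen med livet är <mask>"))
```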
## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
| {"language": "sv", "license": "cc-by-4.0", "tags": ["translate"], "datasets": ["mc4"], "widget": [{"text": "Meningen med livet \u00e4r <mask>"}]} | birgermoell/roberta-swedish-scandi | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"feature-extraction",
"translate",
"sv",
"dataset:mc4",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Swedish RoBERTa
## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
| {"widget": [{"text": "Var kan jag hitta n\u00e5gon <mask> talar engelska?"}]} | birgermoell/roberta-swedish | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# common-voice-vox-populi-swedish
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish data from the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/common-voice-vox-populi-swedish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and decode predictions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

# The final WER computation was truncated in the original card; this is the
# standard pattern from the XLSR fine-tuning template
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER: 22.684600
| {"language": "et", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "common-voice-vox-populi-swedish by Birger Moell", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice Vox Populi Swedish", "type": "common_voice", "args": "et"}, "metrics": [{"type": "wer", "value": 36.951816, "name": "Test WER"}]}]}]} | birgermoell/swedish-common-voice-vox-voxpopuli | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"et",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
# GPT-svenska-wikipedia
A swedish GPT2 style model trained using Flax CLM pipeline on the Swedish
part of the wiki40b dataset and the Oscar dataset.
https://huggingface.co/datasets/wiki40b
The model was trained for around 22600 steps (42 hours) as part of the Huggingface Jax/Flax challenge, ending with the following loss and learning rate:
Loss: 3.1715331077575684, Learning Rate: 0.0024816440418362617
The model could likely be trained for longer.
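A minimal generation sketch using the widget prompt from the metadata ("Jag är en svensk språkmodell." — "I am a Swedish language model."); the sampling settings are illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="birgermoell/swedish-gpt")
print(generator("Jag är en svensk språkmodell.", max_length=50, do_sample=True))
```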
## Data cleaning and preprocessing
The data was cleaned and preprocessed using the following script. Make sure to install the dependencies for beam_runner to make the dataset work.
```python
from datasets import load_dataset
def load_and_clean_wiki():
    dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner', split="train")
    dataset = dataset.remove_columns(['wikidata_id', 'version_id'])
    filtered_dataset = dataset.map(filter_wikipedia)
    return filtered_dataset

def filter_wikipedia(batch):
    # Strip the wiki40b structure markers from the raw text
    batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
    batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n"))
    batch["text"] = " ".join(batch["text"].split("_NEWLINE_"))
    batch["text"] = " ".join(batch["text"].split("\xa0"))
    return batch
```
## Training script
The following training script was used to train the model.
```bash
./run_clm_flax.py \
    --output_dir="${MODEL_DIR}" \
    --model_type="gpt2" \
    --config_name="${MODEL_DIR}" \
    --tokenizer_name="${MODEL_DIR}" \
    --dataset_name="wiki40b" \
    --dataset_config_name="sv" \
    --do_train --do_eval \
    --block_size="512" \
    --per_device_train_batch_size="64" \
    --per_device_eval_batch_size="64" \
    --learning_rate="5e-3" \
    --warmup_steps="1000" \
    --adam_beta1="0.9" --adam_beta2="0.98" \
    --weight_decay="0.01" \
    --overwrite_output_dir \
    --num_train_epochs="20" \
    --logging_steps="500" \
    --save_steps="1000" \
    --eval_steps="2500" \
    --push_to_hub
```
| {"language": "sv", "widget": [{"text": "Jag \u00e4r en svensk spr\u00e5kmodell."}]} | birgermoell/swedish-gpt | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
translation | transformers | [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/oscar)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Model series
This model is part of a series of models training on TPU with Flax Jax during Huggingface Flax/Jax challenge.
## Gpt models
## Swedish Gpt
https://huggingface.co/birgermoell/swedish-gpt/
## Swedish gpt wiki
https://huggingface.co/flax-community/swe-gpt-wiki
# Nordic gpt wiki
https://huggingface.co/flax-community/nordic-gpt-wiki
## Dansk gpt wiki
https://huggingface.co/flax-community/dansk-gpt-wiki
## Norsk gpt wiki
https://huggingface.co/flax-community/norsk-gpt-wiki
## Roberta models
## Nordic Roberta Wiki
https://huggingface.co/flax-community/nordic-roberta-wiki
## Swe Roberta Wiki Oscar
https://huggingface.co/flax-community/swe-roberta-wiki-oscar
## Roberta Swedish Scandi
https://huggingface.co/birgermoell/roberta-swedish-scandi
## Roberta Swedish
https://huggingface.co/birgermoell/roberta-swedish
## Swedish T5 model
https://huggingface.co/birgermoell/t5-base-swedish
| {"language": ["sv"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["oscar"]} | birgermoell/t5-base-swedish | null | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"feature-extraction",
"summarization",
"translation",
"sv",
"dataset:oscar",
"arxiv:1910.10683",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5528
- Wer: 0.3811
## Model description
More information needed
## Intended uses & limitations
More information needed. A minimal inference sketch is provided at the end of this card.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.74 | 100 | 3.4444 | 1.0 |
| No log | 1.47 | 200 | 2.9421 | 1.0 |
| No log | 2.21 | 300 | 2.2802 | 1.0137 |
| No log | 2.94 | 400 | 0.9683 | 0.7611 |
| 3.7264 | 3.68 | 500 | 0.7941 | 0.6594 |
| 3.7264 | 4.41 | 600 | 0.6695 | 0.5751 |
| 3.7264 | 5.15 | 700 | 0.6507 | 0.5314 |
| 3.7264 | 5.88 | 800 | 0.5731 | 0.4927 |
| 3.7264 | 6.62 | 900 | 0.5723 | 0.4580 |
| 0.4592 | 7.35 | 1000 | 0.5913 | 0.4479 |
| 0.4592 | 8.09 | 1100 | 0.5562 | 0.4423 |
| 0.4592 | 8.82 | 1200 | 0.5566 | 0.4292 |
| 0.4592 | 9.56 | 1300 | 0.5492 | 0.4303 |
| 0.4592 | 10.29 | 1400 | 0.5665 | 0.4331 |
| 0.2121 | 11.03 | 1500 | 0.5610 | 0.4084 |
| 0.2121 | 11.76 | 1600 | 0.5703 | 0.4014 |
| 0.2121 | 12.5 | 1700 | 0.5669 | 0.3898 |
| 0.2121 | 13.24 | 1800 | 0.5586 | 0.3962 |
| 0.2121 | 13.97 | 1900 | 0.5656 | 0.3897 |
| 0.1326 | 14.71 | 2000 | 0.5565 | 0.3813 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
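The usage sections above are placeholders; until they are filled in, here is a minimal inference sketch in the style of the other cards in this series (the input file name is hypothetical, mono audio is assumed, and speech input must be resampled to 16 kHz):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-common_voice-tr-demo")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-common_voice-tr-demo")

# Load a recording (hypothetical file) and resample it to 16 kHz.
speech_array, sampling_rate = torchaudio.load("example.wav")
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```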
| {"language": ["sv-SE"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-common_voice-tr-demo", "results": []}]} | birgermoell/wav2vec2-common_voice-tr-demo | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "et", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: WER 36.951816 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1VcWT92vBCwVn-5d-mkYxhgILPr11OHfR?usp=sharing).
| {"language": "et", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Estonian by Birger Moell", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice Estonian", "type": "common_voice", "args": "et"}, "metrics": [{"type": "wer", "value": 36.951816, "name": "Test WER"}]}]}]} | birgermoell/wav2vec2-large-xlrs-estonian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"et",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: WER 55.097365 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/16AyzqMWU_aWNe3IA-NxrhskB1WLPHG-Q?usp=sharing).
| {"language": "fi", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Finnish by Birger Moell", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fi", "type": "common_voice", "args": "fi"}, "metrics": [{"type": "wer", "value": 55.097365, "name": "Test WER"}]}]}]} | birgermoell/wav2vec2-large-xlsr-finnish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fi",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Hungarian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Hungarian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 46.97 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1c8LS-RP-RMukvXkpqJ9kLXRWmRKFjevs?usp=sharing)
| {"language": "hu", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Hungarian by Birger Moell", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice hu", "type": "common_voice", "args": "hu"}, "metrics": [{"type": "wer", "value": 46.97, "name": "Test WER"}]}]}]} | birgermoell/wav2vec2-large-xlsr-hungarian | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hu",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Luganda
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Luganda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: WER 48.314356 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1ZeII36LZ5IpBrTV7kBaTVfhDqygznlmC?usp=sharing).
| {"language": "lg", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Luganda by Birger Moell", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice Luganda", "type": "common_voice", "args": "lg"}, "metrics": [{"type": "wer", "value": 48.31, "name": "Test WER"}]}]}]} | birgermoell/wav2vec2-luganda | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"lg",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-speechdat
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4578
- Wer: 0.2927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
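For reference, a minimal sketch of how these settings map onto the 🤗 `TrainingArguments` API (the `output_dir` is hypothetical; the remaining values mirror the list above):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-speechdat",  # hypothetical
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,    # 16 x 2 = total train batch size 32
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,                        # Native AMP mixed precision
)
```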
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:------:|
| No log | 0.01 | 100 | 3.6252 | 1.0 |
| No log | 0.02 | 200 | 3.1906 | 1.0 |
| No log | 0.03 | 300 | 3.1090 | 1.0 |
| No log | 0.04 | 400 | 1.8796 | 0.9955 |
| 6.2575 | 0.05 | 500 | 1.3515 | 0.9058 |
| 6.2575 | 0.06 | 600 | 1.1209 | 0.8328 |
| 6.2575 | 0.07 | 700 | 1.1404 | 0.8309 |
| 6.2575 | 0.09 | 800 | 1.0599 | 0.8021 |
| 6.2575 | 0.1 | 900 | 0.9901 | 0.8335 |
| 0.7737 | 0.11 | 1000 | 0.8846 | 0.7400 |
| 0.7737 | 0.12 | 1100 | 0.9971 | 0.7820 |
| 0.7737 | 0.13 | 1200 | 0.8665 | 0.7123 |
| 0.7737 | 0.14 | 1300 | 0.8490 | 0.7366 |
| 0.7737 | 0.15 | 1400 | 0.8250 | 0.6765 |
| 0.6183 | 0.16 | 1500 | 0.8291 | 0.6965 |
| 0.6183 | 0.17 | 1600 | 0.7946 | 0.6823 |
| 0.6183 | 0.18 | 1700 | 0.8239 | 0.6894 |
| 0.6183 | 0.19 | 1800 | 0.8282 | 0.6796 |
| 0.6183 | 0.2 | 1900 | 0.7645 | 0.6518 |
| 0.561 | 0.21 | 2000 | 0.7530 | 0.6367 |
| 0.561 | 0.22 | 2100 | 0.7296 | 0.6177 |
| 0.561 | 0.24 | 2200 | 0.7527 | 0.6498 |
| 0.561 | 0.25 | 2300 | 0.7210 | 0.6316 |
| 0.561 | 0.26 | 2400 | 0.7938 | 0.6757 |
| 0.5402 | 0.27 | 2500 | 0.7485 | 0.6372 |
| 0.5402 | 0.28 | 2600 | 0.7146 | 0.6133 |
| 0.5402 | 0.29 | 2700 | 0.7308 | 0.6626 |
| 0.5402 | 0.3 | 2800 | 0.7078 | 0.5949 |
| 0.5402 | 0.31 | 2900 | 0.7679 | 0.6373 |
| 0.5303 | 0.32 | 3000 | 0.7263 | 0.6502 |
| 0.5303 | 0.33 | 3100 | 0.6613 | 0.5846 |
| 0.5303 | 0.34 | 3200 | 0.6784 | 0.5783 |
| 0.5303 | 0.35 | 3300 | 0.6908 | 0.5833 |
| 0.5303 | 0.36 | 3400 | 0.6595 | 0.5826 |
| 0.503 | 0.37 | 3500 | 0.6717 | 0.5938 |
| 0.503 | 0.39 | 3600 | 0.6938 | 0.5791 |
| 0.503 | 0.4 | 3700 | 0.6677 | 0.6052 |
| 0.503 | 0.41 | 3800 | 0.6544 | 0.5554 |
| 0.503 | 0.42 | 3900 | 0.6514 | 0.5728 |
| 0.4959 | 0.43 | 4000 | 0.6847 | 0.6188 |
| 0.4959 | 0.44 | 4100 | 0.6626 | 0.5869 |
| 0.4959 | 0.45 | 4200 | 0.6670 | 0.5700 |
| 0.4959 | 0.46 | 4300 | 0.6596 | 0.5846 |
| 0.4959 | 0.47 | 4400 | 0.6523 | 0.5468 |
| 0.4824 | 0.48 | 4500 | 0.6392 | 0.5688 |
| 0.4824 | 0.49 | 4600 | 0.6561 | 0.5687 |
| 0.4824 | 0.5 | 4700 | 0.6697 | 0.5817 |
| 0.4824 | 0.51 | 4800 | 0.6348 | 0.5608 |
| 0.4824 | 0.52 | 4900 | 0.6561 | 0.5600 |
| 0.4714 | 0.54 | 5000 | 0.6522 | 0.6181 |
| 0.4714 | 0.55 | 5100 | 0.6858 | 0.5921 |
| 0.4714 | 0.56 | 5200 | 0.6706 | 0.5497 |
| 0.4714 | 0.57 | 5300 | 0.7123 | 0.5768 |
| 0.4714 | 0.58 | 5400 | 0.6599 | 0.6100 |
| 0.471 | 0.59 | 5500 | 0.6421 | 0.5626 |
| 0.471 | 0.6 | 5600 | 0.6395 | 0.5753 |
| 0.471 | 0.61 | 5700 | 0.6788 | 0.5481 |
| 0.471 | 0.62 | 5800 | 0.6386 | 0.5516 |
| 0.471 | 0.63 | 5900 | 0.6694 | 0.5913 |
| 0.4707 | 0.64 | 6000 | 0.6251 | 0.5699 |
| 0.4707 | 0.65 | 6100 | 0.6243 | 0.5567 |
| 0.4707 | 0.66 | 6200 | 0.6645 | 0.5629 |
| 0.4707 | 0.67 | 6300 | 0.6296 | 0.5895 |
| 0.4707 | 0.69 | 6400 | 0.6078 | 0.5183 |
| 0.4632 | 0.7 | 6500 | 0.6270 | 0.5619 |
| 0.4632 | 0.71 | 6600 | 0.6050 | 0.5336 |
| 0.4632 | 0.72 | 6700 | 0.6185 | 0.5449 |
| 0.4632 | 0.73 | 6800 | 0.6281 | 0.5645 |
| 0.4632 | 0.74 | 6900 | 0.5877 | 0.5084 |
| 0.4514 | 0.75 | 7000 | 0.6199 | 0.5403 |
| 0.4514 | 0.76 | 7100 | 0.6293 | 0.5275 |
| 0.4514 | 0.77 | 7200 | 0.6290 | 0.5447 |
| 0.4514 | 0.78 | 7300 | 0.6130 | 0.5373 |
| 0.4514 | 0.79 | 7400 | 0.6138 | 0.5285 |
| 0.4457 | 0.8 | 7500 | 0.6040 | 0.5259 |
| 0.4457 | 0.81 | 7600 | 0.6220 | 0.5686 |
| 0.4457 | 0.82 | 7700 | 0.5915 | 0.5164 |
| 0.4457 | 0.84 | 7800 | 0.6270 | 0.5289 |
| 0.4457 | 0.85 | 7900 | 0.6224 | 0.5515 |
| 0.4458 | 0.86 | 8000 | 0.6161 | 0.5323 |
| 0.4458 | 0.87 | 8100 | 0.5827 | 0.5122 |
| 0.4458 | 0.88 | 8200 | 0.6067 | 0.5202 |
| 0.4458 | 0.89 | 8300 | 0.6087 | 0.5192 |
| 0.4458 | 0.9 | 8400 | 0.6859 | 0.5796 |
| 0.4409 | 0.91 | 8500 | 0.6180 | 0.5131 |
| 0.4409 | 0.92 | 8600 | 0.5945 | 0.4948 |
| 0.4409 | 0.93 | 8700 | 0.5967 | 0.5532 |
| 0.4409 | 0.94 | 8800 | 0.5770 | 0.4961 |
| 0.4409 | 0.95 | 8900 | 0.5809 | 0.5203 |
| 0.4305 | 0.96 | 9000 | 0.5805 | 0.5039 |
| 0.4305 | 0.97 | 9100 | 0.5873 | 0.5188 |
| 0.4305 | 0.98 | 9200 | 0.6277 | 0.5516 |
| 0.4305 | 1.0 | 9300 | 0.5727 | 0.5052 |
| 0.4305 | 1.01 | 9400 | 0.5858 | 0.5123 |
| 0.4264 | 1.02 | 9500 | 0.5692 | 0.4968 |
| 0.4264 | 1.03 | 9600 | 0.5954 | 0.5117 |
| 0.4264 | 1.04 | 9700 | 0.5904 | 0.5076 |
| 0.4264 | 1.05 | 9800 | 0.6046 | 0.5101 |
| 0.4264 | 1.06 | 9900 | 0.5616 | 0.4926 |
| 0.4176 | 1.07 | 10000 | 0.5971 | 0.5368 |
| 0.4176 | 1.08 | 10100 | 0.5706 | 0.4940 |
| 0.4176 | 1.09 | 10200 | 0.5612 | 0.5032 |
| 0.4176 | 1.1 | 10300 | 0.5672 | 0.4944 |
| 0.4176 | 1.11 | 10400 | 0.5915 | 0.5218 |
| 0.4033 | 1.12 | 10500 | 0.5706 | 0.5051 |
| 0.4033 | 1.13 | 10600 | 0.5661 | 0.4934 |
| 0.4033 | 1.15 | 10700 | 0.5724 | 0.4903 |
| 0.4033 | 1.16 | 10800 | 0.5792 | 0.4940 |
| 0.4033 | 1.17 | 10900 | 0.5744 | 0.4911 |
| 0.392 | 1.18 | 11000 | 0.5767 | 0.5162 |
| 0.392 | 1.19 | 11100 | 0.5588 | 0.4835 |
| 0.392 | 1.2 | 11200 | 0.5609 | 0.4922 |
| 0.392 | 1.21 | 11300 | 0.5890 | 0.4914 |
| 0.392 | 1.22 | 11400 | 0.5525 | 0.4897 |
| 0.387 | 1.23 | 11500 | 0.5704 | 0.5051 |
| 0.387 | 1.24 | 11600 | 0.5539 | 0.5014 |
| 0.387 | 1.25 | 11700 | 0.5473 | 0.4882 |
| 0.387 | 1.26 | 11800 | 0.5662 | 0.5004 |
| 0.387 | 1.27 | 11900 | 0.5785 | 0.5220 |
| 0.3956 | 1.28 | 12000 | 0.5990 | 0.5114 |
| 0.3956 | 1.3 | 12100 | 0.5497 | 0.4895 |
| 0.3956 | 1.31 | 12200 | 0.5538 | 0.4895 |
| 0.3956 | 1.32 | 12300 | 0.5652 | 0.4913 |
| 0.3956 | 1.33 | 12400 | 0.5682 | 0.5128 |
| 0.4043 | 1.34 | 12500 | 0.5830 | 0.4999 |
| 0.4043 | 1.35 | 12600 | 0.5686 | 0.4865 |
| 0.4043 | 1.36 | 12700 | 0.5688 | 0.4937 |
| 0.4043 | 1.37 | 12800 | 0.5753 | 0.5034 |
| 0.4043 | 1.38 | 12900 | 0.5898 | 0.4865 |
| 0.3997 | 1.39 | 13000 | 0.5723 | 0.4963 |
| 0.3997 | 1.4 | 13100 | 0.5767 | 0.4986 |
| 0.3997 | 1.41 | 13200 | 0.5960 | 0.5084 |
| 0.3997 | 1.42 | 13300 | 0.5859 | 0.5096 |
| 0.3997 | 1.43 | 13400 | 0.5491 | 0.4784 |
| 0.3997 | 1.45 | 13500 | 0.5636 | 0.5049 |
| 0.3997 | 1.46 | 13600 | 0.5667 | 0.4708 |
| 0.3997 | 1.47 | 13700 | 0.5757 | 0.4862 |
| 0.3997 | 1.48 | 13800 | 0.5444 | 0.4816 |
| 0.3997 | 1.49 | 13900 | 0.5557 | 0.4792 |
| 0.3954 | 1.5 | 14000 | 0.5437 | 0.4810 |
| 0.3954 | 1.51 | 14100 | 0.5489 | 0.4674 |
| 0.3954 | 1.52 | 14200 | 0.5415 | 0.4674 |
| 0.3954 | 1.53 | 14300 | 0.5481 | 0.4902 |
| 0.3954 | 1.54 | 14400 | 0.5474 | 0.4763 |
| 0.3814 | 1.55 | 14500 | 0.5588 | 0.4731 |
| 0.3814 | 1.56 | 14600 | 0.5746 | 0.4820 |
| 0.3814 | 1.57 | 14700 | 0.5676 | 0.4884 |
| 0.3814 | 1.58 | 14800 | 0.5495 | 0.4711 |
| 0.3814 | 1.6 | 14900 | 0.5565 | 0.4782 |
| 0.3877 | 1.61 | 15000 | 0.5671 | 0.5135 |
| 0.3877 | 1.62 | 15100 | 0.5512 | 0.4868 |
| 0.3877 | 1.63 | 15200 | 0.5683 | 0.4650 |
| 0.3877 | 1.64 | 15300 | 0.5427 | 0.4717 |
| 0.3877 | 1.65 | 15400 | 0.5519 | 0.4651 |
| 0.387 | 1.66 | 15500 | 0.5327 | 0.4456 |
| 0.387 | 1.67 | 15600 | 0.5371 | 0.4673 |
| 0.387 | 1.68 | 15700 | 0.5337 | 0.4705 |
| 0.387 | 1.69 | 15800 | 0.5606 | 0.4992 |
| 0.387 | 1.7 | 15900 | 0.5254 | 0.4613 |
| 0.3877 | 1.71 | 16000 | 0.5619 | 0.4882 |
| 0.3877 | 1.72 | 16100 | 0.5212 | 0.4560 |
| 0.3877 | 1.73 | 16200 | 0.5369 | 0.4696 |
| 0.3877 | 1.75 | 16300 | 0.5392 | 0.4677 |
| 0.3877 | 1.76 | 16400 | 0.5353 | 0.4768 |
| 0.3739 | 1.77 | 16500 | 0.5435 | 0.4777 |
| 0.3739 | 1.78 | 16600 | 0.5343 | 0.4884 |
| 0.3739 | 1.79 | 16700 | 0.5309 | 0.4942 |
| 0.3739 | 1.8 | 16800 | 0.5373 | 0.4727 |
| 0.3739 | 1.81 | 16900 | 0.5550 | 0.4686 |
| 0.3884 | 1.82 | 17000 | 0.5486 | 0.4826 |
| 0.3884 | 1.83 | 17100 | 0.5508 | 0.4862 |
| 0.3884 | 1.84 | 17200 | 0.5423 | 0.4855 |
| 0.3884 | 1.85 | 17300 | 0.5478 | 0.4730 |
| 0.3884 | 1.86 | 17400 | 0.5438 | 0.4938 |
| 0.3842 | 1.87 | 17500 | 0.5571 | 0.4818 |
| 0.3842 | 1.88 | 17600 | 0.5402 | 0.4753 |
| 0.3842 | 1.9 | 17700 | 0.5679 | 0.4827 |
| 0.3842 | 1.91 | 17800 | 0.5385 | 0.4642 |
| 0.3842 | 1.92 | 17900 | 0.5519 | 0.4942 |
| 0.3953 | 1.93 | 18000 | 0.5559 | 0.4745 |
| 0.3953 | 1.94 | 18100 | 0.5657 | 0.4963 |
| 0.3953 | 1.95 | 18200 | 0.5296 | 0.4642 |
| 0.3953 | 1.96 | 18300 | 0.5529 | 0.4907 |
| 0.3953 | 1.97 | 18400 | 0.5380 | 0.4536 |
| 0.3745 | 1.98 | 18500 | 0.5276 | 0.4678 |
| 0.3745 | 1.99 | 18600 | 0.5544 | 0.4854 |
| 0.3745 | 2.0 | 18700 | 0.5195 | 0.4535 |
| 0.3745 | 2.01 | 18800 | 0.5165 | 0.4635 |
| 0.3745 | 2.02 | 18900 | 0.5062 | 0.4431 |
| 0.3538 | 2.03 | 19000 | 0.5255 | 0.4509 |
| 0.3538 | 2.04 | 19100 | 0.5125 | 0.4512 |
| 0.3538 | 2.06 | 19200 | 0.5105 | 0.4504 |
| 0.3538 | 2.07 | 19300 | 0.5000 | 0.4490 |
| 0.3538 | 2.08 | 19400 | 0.5150 | 0.4520 |
| 0.356 | 2.09 | 19500 | 0.5053 | 0.4383 |
| 0.356 | 2.1 | 19600 | 0.5085 | 0.4417 |
| 0.356 | 2.11 | 19700 | 0.5229 | 0.4490 |
| 0.356 | 2.12 | 19800 | 0.5326 | 0.4492 |
| 0.356 | 2.13 | 19900 | 0.5139 | 0.4491 |
| 0.3474 | 2.14 | 20000 | 0.5134 | 0.4384 |
| 0.3474 | 2.15 | 20100 | 0.5498 | 0.4606 |
| 0.3474 | 2.16 | 20200 | 0.5324 | 0.4540 |
| 0.3474 | 2.17 | 20300 | 0.5338 | 0.4548 |
| 0.3474 | 2.18 | 20400 | 0.5076 | 0.4425 |
| 0.345 | 2.19 | 20500 | 0.5253 | 0.4550 |
| 0.345 | 2.21 | 20600 | 0.5125 | 0.4618 |
| 0.345 | 2.22 | 20700 | 0.5171 | 0.4487 |
| 0.345 | 2.23 | 20800 | 0.5232 | 0.4464 |
| 0.345 | 2.24 | 20900 | 0.5298 | 0.4588 |
| 0.341 | 2.25 | 21000 | 0.5342 | 0.4576 |
| 0.341 | 2.26 | 21100 | 0.5515 | 0.4678 |
| 0.341 | 2.27 | 21200 | 0.5041 | 0.4495 |
| 0.341 | 2.28 | 21300 | 0.5169 | 0.4473 |
| 0.341 | 2.29 | 21400 | 0.5227 | 0.4494 |
| 0.354 | 2.3 | 21500 | 0.5214 | 0.4458 |
| 0.354 | 2.31 | 21600 | 0.5303 | 0.4587 |
| 0.354 | 2.32 | 21700 | 0.5237 | 0.4597 |
| 0.354 | 2.33 | 21800 | 0.5067 | 0.4460 |
| 0.354 | 2.34 | 21900 | 0.5117 | 0.4560 |
| 0.3333 | 2.36 | 22000 | 0.5104 | 0.4359 |
| 0.3333 | 2.37 | 22100 | 0.5326 | 0.4679 |
| 0.3333 | 2.38 | 22200 | 0.5098 | 0.4510 |
| 0.3333 | 2.39 | 22300 | 0.5044 | 0.4445 |
| 0.3333 | 2.4 | 22400 | 0.5219 | 0.4489 |
| 0.3514 | 2.41 | 22500 | 0.4987 | 0.4433 |
| 0.3514 | 2.42 | 22600 | 0.5009 | 0.4338 |
| 0.3514 | 2.43 | 22700 | 0.5252 | 0.4444 |
| 0.3514 | 2.44 | 22800 | 0.4861 | 0.4269 |
| 0.3514 | 2.45 | 22900 | 0.5157 | 0.4421 |
| 0.3444 | 2.46 | 23000 | 0.5277 | 0.4426 |
| 0.3444 | 2.47 | 23100 | 0.5213 | 0.4378 |
| 0.3444 | 2.48 | 23200 | 0.5172 | 0.4482 |
| 0.3444 | 2.49 | 23300 | 0.5142 | 0.4376 |
| 0.3444 | 2.51 | 23400 | 0.5044 | 0.4231 |
| 0.3536 | 2.52 | 23500 | 0.5268 | 0.4496 |
| 0.3536 | 2.53 | 23600 | 0.5176 | 0.4326 |
| 0.3536 | 2.54 | 23700 | 0.5032 | 0.4296 |
| 0.3536 | 2.55 | 23800 | 0.5211 | 0.4460 |
| 0.3536 | 2.56 | 23900 | 0.5093 | 0.4379 |
| 0.337 | 2.57 | 24000 | 0.4990 | 0.4311 |
| 0.337 | 2.58 | 24100 | 0.4962 | 0.4329 |
| 0.337 | 2.59 | 24200 | 0.5033 | 0.4289 |
| 0.337 | 2.6 | 24300 | 0.5260 | 0.4534 |
| 0.337 | 2.61 | 24400 | 0.5309 | 0.4441 |
| 0.3393 | 2.62 | 24500 | 0.5132 | 0.4346 |
| 0.3393 | 2.63 | 24600 | 0.5189 | 0.4233 |
| 0.3393 | 2.64 | 24700 | 0.5074 | 0.4326 |
| 0.3393 | 2.66 | 24800 | 0.5111 | 0.4254 |
| 0.3393 | 2.67 | 24900 | 0.4933 | 0.4254 |
| 0.3334 | 2.68 | 25000 | 0.5046 | 0.4407 |
| 0.3334 | 2.69 | 25100 | 0.5010 | 0.4404 |
| 0.3334 | 2.7 | 25200 | 0.5045 | 0.4236 |
| 0.3334 | 2.71 | 25300 | 0.4938 | 0.4305 |
| 0.3334 | 2.72 | 25400 | 0.5021 | 0.4383 |
| 0.3366 | 2.73 | 25500 | 0.4953 | 0.4202 |
| 0.3366 | 2.74 | 25600 | 0.4985 | 0.4338 |
| 0.3366 | 2.75 | 25700 | 0.4765 | 0.4161 |
| 0.3366 | 2.76 | 25800 | 0.4873 | 0.4292 |
| 0.3366 | 2.77 | 25900 | 0.4998 | 0.4189 |
| 0.3359 | 2.78 | 26000 | 0.4991 | 0.4248 |
| 0.3359 | 2.79 | 26100 | 0.5012 | 0.4307 |
| 0.3359 | 2.81 | 26200 | 0.5081 | 0.4151 |
| 0.3359 | 2.82 | 26300 | 0.4997 | 0.4305 |
| 0.3359 | 2.83 | 26400 | 0.4969 | 0.4302 |
| 0.3396 | 2.84 | 26500 | 0.4784 | 0.4271 |
| 0.3396 | 2.85 | 26600 | 0.4804 | 0.4149 |
| 0.3396 | 2.86 | 26700 | 0.4900 | 0.4192 |
| 0.3396 | 2.87 | 26800 | 0.5044 | 0.4325 |
| 0.3396 | 2.88 | 26900 | 0.4935 | 0.4376 |
| 0.3356 | 2.89 | 27000 | 0.5007 | 0.4269 |
| 0.3356 | 2.9 | 27100 | 0.4887 | 0.4178 |
| 0.3356 | 2.91 | 27200 | 0.4770 | 0.4170 |
| 0.3356 | 2.92 | 27300 | 0.4847 | 0.4167 |
| 0.3356 | 2.93 | 27400 | 0.4861 | 0.4139 |
| 0.3395 | 2.94 | 27500 | 0.4975 | 0.4291 |
| 0.3395 | 2.95 | 27600 | 0.5056 | 0.4471 |
| 0.3395 | 2.97 | 27700 | 0.5111 | 0.4375 |
| 0.3395 | 2.98 | 27800 | 0.5327 | 0.4577 |
| 0.3395 | 2.99 | 27900 | 0.5067 | 0.4393 |
| 0.3332 | 3.0 | 28000 | 0.4898 | 0.4188 |
| 0.3332 | 3.01 | 28100 | 0.4790 | 0.4093 |
| 0.3332 | 3.02 | 28200 | 0.4828 | 0.4202 |
| 0.3332 | 3.03 | 28300 | 0.4836 | 0.4146 |
| 0.3332 | 3.04 | 28400 | 0.4901 | 0.4242 |
| 0.2984 | 3.05 | 28500 | 0.4772 | 0.4118 |
| 0.2984 | 3.06 | 28600 | 0.5055 | 0.4213 |
| 0.2984 | 3.07 | 28700 | 0.4911 | 0.4100 |
| 0.2984 | 3.08 | 28800 | 0.4737 | 0.4087 |
| 0.2984 | 3.09 | 28900 | 0.4930 | 0.4216 |
| 0.3056 | 3.1 | 29000 | 0.4736 | 0.4109 |
| 0.3056 | 3.12 | 29100 | 0.4863 | 0.4058 |
| 0.3056 | 3.13 | 29200 | 0.4784 | 0.4184 |
| 0.3056 | 3.14 | 29300 | 0.4923 | 0.4240 |
| 0.3056 | 3.15 | 29400 | 0.4846 | 0.4226 |
| 0.2995 | 3.16 | 29500 | 0.4829 | 0.4086 |
| 0.2995 | 3.17 | 29600 | 0.4934 | 0.4240 |
| 0.2995 | 3.18 | 29700 | 0.4893 | 0.4152 |
| 0.2995 | 3.19 | 29800 | 0.4730 | 0.4227 |
| 0.2995 | 3.2 | 29900 | 0.5027 | 0.4330 |
| 0.2926 | 3.21 | 30000 | 0.4903 | 0.4112 |
| 0.2926 | 3.22 | 30100 | 0.4961 | 0.4157 |
| 0.2926 | 3.23 | 30200 | 0.4980 | 0.4269 |
| 0.2926 | 3.24 | 30300 | 0.4896 | 0.4126 |
| 0.2926 | 3.25 | 30400 | 0.4726 | 0.4062 |
| 0.301 | 3.27 | 30500 | 0.4733 | 0.3985 |
| 0.301 | 3.28 | 30600 | 0.4772 | 0.4047 |
| 0.301 | 3.29 | 30700 | 0.4806 | 0.4082 |
| 0.301 | 3.3 | 30800 | 0.4683 | 0.4011 |
| 0.301 | 3.31 | 30900 | 0.4775 | 0.4079 |
| 0.2933 | 3.32 | 31000 | 0.4729 | 0.4083 |
| 0.2933 | 3.33 | 31100 | 0.4628 | 0.4016 |
| 0.2933 | 3.34 | 31200 | 0.4753 | 0.4192 |
| 0.2933 | 3.35 | 31300 | 0.4687 | 0.4185 |
| 0.2933 | 3.36 | 31400 | 0.4806 | 0.4106 |
| 0.2957 | 3.37 | 31500 | 0.4889 | 0.4240 |
| 0.2957 | 3.38 | 31600 | 0.4882 | 0.4182 |
| 0.2957 | 3.39 | 31700 | 0.4798 | 0.4162 |
| 0.2957 | 3.4 | 31800 | 0.4718 | 0.4108 |
| 0.2957 | 3.42 | 31900 | 0.4685 | 0.4101 |
| 0.3039 | 3.43 | 32000 | 0.4816 | 0.4188 |
| 0.3039 | 3.44 | 32100 | 0.4874 | 0.4139 |
| 0.3039 | 3.45 | 32200 | 0.4899 | 0.4115 |
| 0.3039 | 3.46 | 32300 | 0.4852 | 0.4180 |
| 0.3039 | 3.47 | 32400 | 0.5074 | 0.4129 |
| 0.3006 | 3.48 | 32500 | 0.4837 | 0.4076 |
| 0.3006 | 3.49 | 32600 | 0.4927 | 0.4098 |
| 0.3006 | 3.5 | 32700 | 0.4999 | 0.4172 |
| 0.3006 | 3.51 | 32800 | 0.4773 | 0.4194 |
| 0.3006 | 3.52 | 32900 | 0.4859 | 0.4058 |
| 0.3089 | 3.53 | 33000 | 0.4783 | 0.4104 |
| 0.3089 | 3.54 | 33100 | 0.4622 | 0.4020 |
| 0.3089 | 3.55 | 33200 | 0.4840 | 0.4065 |
| 0.3089 | 3.57 | 33300 | 0.4756 | 0.4241 |
| 0.3089 | 3.58 | 33400 | 0.4831 | 0.4170 |
| 0.3061 | 3.59 | 33500 | 0.4794 | 0.4068 |
| 0.3061 | 3.6 | 33600 | 0.4730 | 0.4037 |
| 0.3061 | 3.61 | 33700 | 0.4808 | 0.4138 |
| 0.3061 | 3.62 | 33800 | 0.4924 | 0.4248 |
| 0.3061 | 3.63 | 33900 | 0.4749 | 0.4112 |
| 0.3047 | 3.64 | 34000 | 0.4924 | 0.4326 |
| 0.3047 | 3.65 | 34100 | 0.4745 | 0.4104 |
| 0.3047 | 3.66 | 34200 | 0.4760 | 0.4123 |
| 0.3047 | 3.67 | 34300 | 0.4788 | 0.4066 |
| 0.3047 | 3.68 | 34400 | 0.4627 | 0.4158 |
| 0.3042 | 3.69 | 34500 | 0.4974 | 0.4131 |
| 0.3042 | 3.7 | 34600 | 0.4593 | 0.4063 |
| 0.3042 | 3.72 | 34700 | 0.4549 | 0.3928 |
| 0.3042 | 3.73 | 34800 | 0.4690 | 0.3898 |
| 0.3042 | 3.74 | 34900 | 0.4560 | 0.4007 |
| 0.2963 | 3.75 | 35000 | 0.4606 | 0.3959 |
| 0.2963 | 3.76 | 35100 | 0.4762 | 0.4057 |
| 0.2963 | 3.77 | 35200 | 0.4750 | 0.4034 |
| 0.2963 | 3.78 | 35300 | 0.4772 | 0.4114 |
| 0.2963 | 3.79 | 35400 | 0.4669 | 0.3995 |
| 0.3012 | 3.8 | 35500 | 0.4709 | 0.4090 |
| 0.3012 | 3.81 | 35600 | 0.4722 | 0.4123 |
| 0.3012 | 3.82 | 35700 | 0.4913 | 0.4165 |
| 0.3012 | 3.83 | 35800 | 0.4814 | 0.4063 |
| 0.3012 | 3.84 | 35900 | 0.4869 | 0.4171 |
| 0.3015 | 3.85 | 36000 | 0.4791 | 0.4059 |
| 0.3015 | 3.87 | 36100 | 0.4535 | 0.3976 |
| 0.3015 | 3.88 | 36200 | 0.4706 | 0.4009 |
| 0.3015 | 3.89 | 36300 | 0.4679 | 0.4012 |
| 0.3015 | 3.9 | 36400 | 0.4736 | 0.4096 |
| 0.2965 | 3.91 | 36500 | 0.4756 | 0.4106 |
| 0.2965 | 3.92 | 36600 | 0.4669 | 0.4085 |
| 0.2965 | 3.93 | 36700 | 0.4796 | 0.4054 |
| 0.2965 | 3.94 | 36800 | 0.4583 | 0.3932 |
| 0.2965 | 3.95 | 36900 | 0.4430 | 0.3969 |
| 0.2993 | 3.96 | 37000 | 0.4560 | 0.3914 |
| 0.2993 | 3.97 | 37100 | 0.4739 | 0.4002 |
| 0.2993 | 3.98 | 37200 | 0.4598 | 0.3912 |
| 0.2993 | 3.99 | 37300 | 0.4607 | 0.3907 |
| 0.2993 | 4.0 | 37400 | 0.4709 | 0.3986 |
| 0.2886 | 4.01 | 37500 | 0.4642 | 0.4067 |
| 0.2886 | 4.03 | 37600 | 0.4684 | 0.3984 |
| 0.2886 | 4.04 | 37700 | 0.4690 | 0.3979 |
| 0.2886 | 4.05 | 37800 | 0.4722 | 0.3980 |
| 0.2886 | 4.06 | 37900 | 0.4734 | 0.3927 |
| 0.2534 | 4.07 | 38000 | 0.4724 | 0.3988 |
| 0.2534 | 4.08 | 38100 | 0.4665 | 0.3986 |
| 0.2534 | 4.09 | 38200 | 0.4659 | 0.4036 |
| 0.2534 | 4.1 | 38300 | 0.4694 | 0.3952 |
| 0.2534 | 4.11 | 38400 | 0.4719 | 0.3891 |
| 0.2596 | 4.12 | 38500 | 0.4687 | 0.3994 |
| 0.2596 | 4.13 | 38600 | 0.4705 | 0.3903 |
| 0.2596 | 4.14 | 38700 | 0.4601 | 0.3975 |
| 0.2596 | 4.15 | 38800 | 0.4666 | 0.3971 |
| 0.2596 | 4.16 | 38900 | 0.4772 | 0.3892 |
| 0.2643 | 4.18 | 39000 | 0.4810 | 0.4071 |
| 0.2643 | 4.19 | 39100 | 0.4980 | 0.4167 |
| 0.2643 | 4.2 | 39200 | 0.4657 | 0.3996 |
| 0.2643 | 4.21 | 39300 | 0.4869 | 0.4002 |
| 0.2643 | 4.22 | 39400 | 0.4656 | 0.3913 |
| 0.265 | 4.23 | 39500 | 0.4720 | 0.3947 |
| 0.265 | 4.24 | 39600 | 0.4711 | 0.3970 |
| 0.265 | 4.25 | 39700 | 0.4689 | 0.3933 |
| 0.265 | 4.26 | 39800 | 0.4728 | 0.4017 |
| 0.265 | 4.27 | 39900 | 0.4673 | 0.3847 |
| 0.2644 | 4.28 | 40000 | 0.4636 | 0.3960 |
| 0.2644 | 4.29 | 40100 | 0.4699 | 0.3864 |
| 0.2644 | 4.3 | 40200 | 0.4580 | 0.3874 |
| 0.2644 | 4.31 | 40300 | 0.4763 | 0.3951 |
| 0.2644 | 4.33 | 40400 | 0.4752 | 0.4141 |
| 0.2633 | 4.34 | 40500 | 0.4918 | 0.3994 |
| 0.2633 | 4.35 | 40600 | 0.4783 | 0.4026 |
| 0.2633 | 4.36 | 40700 | 0.4739 | 0.4034 |
| 0.2633 | 4.37 | 40800 | 0.4750 | 0.4000 |
| 0.2633 | 4.38 | 40900 | 0.4608 | 0.3943 |
| 0.2679 | 4.39 | 41000 | 0.4615 | 0.3891 |
| 0.2679 | 4.4 | 41100 | 0.4730 | 0.3984 |
| 0.2679 | 4.41 | 41200 | 0.4728 | 0.4011 |
| 0.2679 | 4.42 | 41300 | 0.4675 | 0.3932 |
| 0.2679 | 4.43 | 41400 | 0.4662 | 0.3929 |
| 0.2682 | 4.44 | 41500 | 0.4490 | 0.3837 |
| 0.2682 | 4.45 | 41600 | 0.4611 | 0.3838 |
| 0.2682 | 4.46 | 41700 | 0.4605 | 0.3945 |
| 0.2682 | 4.48 | 41800 | 0.4730 | 0.3938 |
| 0.2682 | 4.49 | 41900 | 0.4567 | 0.3874 |
| 0.2658 | 4.5 | 42000 | 0.4715 | 0.3869 |
| 0.2658 | 4.51 | 42100 | 0.4514 | 0.3833 |
| 0.2658 | 4.52 | 42200 | 0.4602 | 0.3898 |
| 0.2658 | 4.53 | 42300 | 0.4846 | 0.4022 |
| 0.2658 | 4.54 | 42400 | 0.4474 | 0.3810 |
| 0.2676 | 4.55 | 42500 | 0.4513 | 0.3820 |
| 0.2676 | 4.56 | 42600 | 0.4588 | 0.3928 |
| 0.2676 | 4.57 | 42700 | 0.4601 | 0.3894 |
| 0.2676 | 4.58 | 42800 | 0.4516 | 0.3792 |
| 0.2676 | 4.59 | 42900 | 0.4482 | 0.3848 |
| 0.2693 | 4.6 | 43000 | 0.4695 | 0.4008 |
| 0.2693 | 4.61 | 43100 | 0.4580 | 0.3871 |
| 0.2693 | 4.63 | 43200 | 0.4419 | 0.3857 |
| 0.2693 | 4.64 | 43300 | 0.4534 | 0.3796 |
| 0.2693 | 4.65 | 43400 | 0.4532 | 0.3856 |
| 0.2641 | 4.66 | 43500 | 0.4421 | 0.3809 |
| 0.2641 | 4.67 | 43600 | 0.4400 | 0.3844 |
| 0.2641 | 4.68 | 43700 | 0.4515 | 0.3833 |
| 0.2641 | 4.69 | 43800 | 0.4462 | 0.3808 |
| 0.2641 | 4.7 | 43900 | 0.4741 | 0.3926 |
| 0.2626 | 4.71 | 44000 | 0.4542 | 0.3931 |
| 0.2626 | 4.72 | 44100 | 0.4555 | 0.3885 |
| 0.2626 | 4.73 | 44200 | 0.4505 | 0.3845 |
| 0.2626 | 4.74 | 44300 | 0.4593 | 0.3871 |
| 0.2626 | 4.75 | 44400 | 0.4359 | 0.3830 |
| 0.2648 | 4.76 | 44500 | 0.4387 | 0.3736 |
| 0.2648 | 4.78 | 44600 | 0.4529 | 0.3807 |
| 0.2648 | 4.79 | 44700 | 0.4566 | 0.3837 |
| 0.2648 | 4.8 | 44800 | 0.4557 | 0.4067 |
| 0.2648 | 4.81 | 44900 | 0.4609 | 0.3852 |
| 0.2603 | 4.82 | 45000 | 0.4667 | 0.4005 |
| 0.2603 | 4.83 | 45100 | 0.4666 | 0.3836 |
| 0.2603 | 4.84 | 45200 | 0.4775 | 0.3946 |
| 0.2603 | 4.85 | 45300 | 0.4701 | 0.3925 |
| 0.2603 | 4.86 | 45400 | 0.4579 | 0.3889 |
| 0.2626 | 4.87 | 45500 | 0.4516 | 0.3884 |
| 0.2626 | 4.88 | 45600 | 0.4605 | 0.3878 |
| 0.2626 | 4.89 | 45700 | 0.4576 | 0.3802 |
| 0.2626 | 4.9 | 45800 | 0.4553 | 0.3780 |
| 0.2626 | 4.91 | 45900 | 0.4336 | 0.3752 |
| 0.2602 | 4.93 | 46000 | 0.4419 | 0.3881 |
| 0.2602 | 4.94 | 46100 | 0.4601 | 0.3843 |
| 0.2602 | 4.95 | 46200 | 0.4437 | 0.3956 |
| 0.2602 | 4.96 | 46300 | 0.4524 | 0.3844 |
| 0.2602 | 4.97 | 46400 | 0.4709 | 0.4031 |
| 0.2609 | 4.98 | 46500 | 0.4500 | 0.3872 |
| 0.2609 | 4.99 | 46600 | 0.4366 | 0.3846 |
| 0.2609 | 5.0 | 46700 | 0.4653 | 0.3884 |
| 0.2609 | 5.01 | 46800 | 0.4602 | 0.3932 |
| 0.2609 | 5.02 | 46900 | 0.4668 | 0.3854 |
| 0.2472 | 5.03 | 47000 | 0.4616 | 0.3891 |
| 0.2472 | 5.04 | 47100 | 0.4543 | 0.3836 |
| 0.2472 | 5.05 | 47200 | 0.4526 | 0.3822 |
| 0.2472 | 5.06 | 47300 | 0.4539 | 0.3741 |
| 0.2472 | 5.07 | 47400 | 0.4776 | 0.3818 |
| 0.2278 | 5.09 | 47500 | 0.4771 | 0.3794 |
| 0.2278 | 5.1 | 47600 | 0.4662 | 0.3831 |
| 0.2278 | 5.11 | 47700 | 0.4558 | 0.4032 |
| 0.2278 | 5.12 | 47800 | 0.4904 | 0.3918 |
| 0.2278 | 5.13 | 47900 | 0.4765 | 0.3890 |
| 0.2311 | 5.14 | 48000 | 0.4674 | 0.3882 |
| 0.2311 | 5.15 | 48100 | 0.4609 | 0.3947 |
| 0.2311 | 5.16 | 48200 | 0.4588 | 0.3837 |
| 0.2311 | 5.17 | 48300 | 0.4827 | 0.3845 |
| 0.2311 | 5.18 | 48400 | 0.4711 | 0.3839 |
| 0.229 | 5.19 | 48500 | 0.4583 | 0.3873 |
| 0.229 | 5.2 | 48600 | 0.4800 | 0.3858 |
| 0.229 | 5.21 | 48700 | 0.4611 | 0.3800 |
| 0.229 | 5.22 | 48800 | 0.4504 | 0.3889 |
| 0.229 | 5.24 | 48900 | 0.4569 | 0.3761 |
| 0.2313 | 5.25 | 49000 | 0.4732 | 0.3915 |
| 0.2313 | 5.26 | 49100 | 0.4728 | 0.3832 |
| 0.2313 | 5.27 | 49200 | 0.4667 | 0.3815 |
| 0.2313 | 5.28 | 49300 | 0.4912 | 0.3856 |
| 0.2313 | 5.29 | 49400 | 0.4790 | 0.3946 |
| 0.2266 | 5.3 | 49500 | 0.4597 | 0.3763 |
| 0.2266 | 5.31 | 49600 | 0.4580 | 0.3778 |
| 0.2266 | 5.32 | 49700 | 0.4439 | 0.3721 |
| 0.2266 | 5.33 | 49800 | 0.4611 | 0.3704 |
| 0.2266 | 5.34 | 49900 | 0.4599 | 0.3769 |
| 0.235 | 5.35 | 50000 | 0.4543 | 0.3808 |
| 0.235 | 5.36 | 50100 | 0.4555 | 0.3773 |
| 0.235 | 5.37 | 50200 | 0.4525 | 0.3815 |
| 0.235 | 5.39 | 50300 | 0.4557 | 0.3814 |
| 0.235 | 5.4 | 50400 | 0.4604 | 0.3754 |
| 0.2299 | 5.41 | 50500 | 0.4658 | 0.3770 |
| 0.2299 | 5.42 | 50600 | 0.4658 | 0.3884 |
| 0.2299 | 5.43 | 50700 | 0.4701 | 0.3919 |
| 0.2299 | 5.44 | 50800 | 0.4495 | 0.3818 |
| 0.2299 | 5.45 | 50900 | 0.4703 | 0.3886 |
| 0.2307 | 5.46 | 51000 | 0.4395 | 0.3743 |
| 0.2307 | 5.47 | 51100 | 0.4487 | 0.3751 |
| 0.2307 | 5.48 | 51200 | 0.4355 | 0.3733 |
| 0.2307 | 5.49 | 51300 | 0.4622 | 0.3811 |
| 0.2307 | 5.5 | 51400 | 0.4443 | 0.3801 |
| 0.2383 | 5.51 | 51500 | 0.4411 | 0.3743 |
| 0.2383 | 5.52 | 51600 | 0.4438 | 0.3778 |
| 0.2383 | 5.54 | 51700 | 0.4559 | 0.3784 |
| 0.2383 | 5.55 | 51800 | 0.4309 | 0.3656 |
| 0.2383 | 5.56 | 51900 | 0.4455 | 0.3660 |
| 0.23 | 5.57 | 52000 | 0.4436 | 0.3598 |
| 0.23 | 5.58 | 52100 | 0.4344 | 0.3685 |
| 0.23 | 5.59 | 52200 | 0.4282 | 0.3690 |
| 0.23 | 5.6 | 52300 | 0.4464 | 0.3800 |
| 0.23 | 5.61 | 52400 | 0.4458 | 0.3909 |
| 0.2305 | 5.62 | 52500 | 0.4483 | 0.3756 |
| 0.2305 | 5.63 | 52600 | 0.4547 | 0.3785 |
| 0.2305 | 5.64 | 52700 | 0.4671 | 0.3820 |
| 0.2305 | 5.65 | 52800 | 0.4449 | 0.3658 |
| 0.2305 | 5.66 | 52900 | 0.4596 | 0.3716 |
| 0.2237 | 5.67 | 53000 | 0.4399 | 0.3669 |
| 0.2237 | 5.69 | 53100 | 0.4410 | 0.3719 |
| 0.2237 | 5.7 | 53200 | 0.4574 | 0.3619 |
| 0.2237 | 5.71 | 53300 | 0.4443 | 0.3690 |
| 0.2237 | 5.72 | 53400 | 0.4381 | 0.3678 |
| 0.2337 | 5.73 | 53500 | 0.4490 | 0.3687 |
| 0.2337 | 5.74 | 53600 | 0.4427 | 0.3752 |
| 0.2337 | 5.75 | 53700 | 0.4423 | 0.3858 |
| 0.2337 | 5.76 | 53800 | 0.4702 | 0.3825 |
| 0.2337 | 5.77 | 53900 | 0.4724 | 0.3800 |
| 0.23 | 5.78 | 54000 | 0.4476 | 0.3827 |
| 0.23 | 5.79 | 54100 | 0.4508 | 0.3919 |
| 0.23 | 5.8 | 54200 | 0.4564 | 0.3788 |
| 0.23 | 5.81 | 54300 | 0.4602 | 0.3888 |
| 0.23 | 5.82 | 54400 | 0.4538 | 0.3732 |
| 0.2334 | 5.84 | 54500 | 0.4500 | 0.3808 |
| 0.2334 | 5.85 | 54600 | 0.4475 | 0.3705 |
| 0.2334 | 5.86 | 54700 | 0.4415 | 0.3772 |
| 0.2334 | 5.87 | 54800 | 0.4515 | 0.3771 |
| 0.2334 | 5.88 | 54900 | 0.4410 | 0.3677 |
| 0.2259 | 5.89 | 55000 | 0.4555 | 0.3702 |
| 0.2259 | 5.9 | 55100 | 0.4509 | 0.3894 |
| 0.2259 | 5.91 | 55200 | 0.4472 | 0.3692 |
| 0.2259 | 5.92 | 55300 | 0.4438 | 0.3754 |
| 0.2259 | 5.93 | 55400 | 0.4399 | 0.3698 |
| 0.2289 | 5.94 | 55500 | 0.4496 | 0.3753 |
| 0.2289 | 5.95 | 55600 | 0.4506 | 0.3752 |
| 0.2289 | 5.96 | 55700 | 0.4482 | 0.3766 |
| 0.2289 | 5.97 | 55800 | 0.4415 | 0.3772 |
| 0.2289 | 5.98 | 55900 | 0.4447 | 0.3750 |
| 0.2281 | 6.0 | 56000 | 0.4566 | 0.3842 |
| 0.2281 | 6.01 | 56100 | 0.4694 | 0.3774 |
| 0.2281 | 6.02 | 56200 | 0.4454 | 0.3788 |
| 0.2281 | 6.03 | 56300 | 0.4676 | 0.3718 |
| 0.2281 | 6.04 | 56400 | 0.4650 | 0.3751 |
| 0.1979 | 6.05 | 56500 | 0.4601 | 0.3765 |
| 0.1979 | 6.06 | 56600 | 0.4647 | 0.3840 |
| 0.1979 | 6.07 | 56700 | 0.4782 | 0.3756 |
| 0.1979 | 6.08 | 56800 | 0.4709 | 0.3736 |
| 0.1979 | 6.09 | 56900 | 0.4707 | 0.3734 |
| 0.1923 | 6.1 | 57000 | 0.4704 | 0.3751 |
| 0.1923 | 6.11 | 57100 | 0.4542 | 0.3721 |
| 0.1923 | 6.12 | 57200 | 0.4542 | 0.3735 |
| 0.1923 | 6.13 | 57300 | 0.4587 | 0.3804 |
| 0.1923 | 6.15 | 57400 | 0.4428 | 0.3687 |
| 0.2012 | 6.16 | 57500 | 0.4456 | 0.3748 |
| 0.2012 | 6.17 | 57600 | 0.4578 | 0.3762 |
| 0.2012 | 6.18 | 57700 | 0.4699 | 0.3722 |
| 0.2012 | 6.19 | 57800 | 0.4499 | 0.3756 |
| 0.2012 | 6.2 | 57900 | 0.4633 | 0.3680 |
| 0.1951 | 6.21 | 58000 | 0.4548 | 0.3712 |
| 0.1951 | 6.22 | 58100 | 0.4520 | 0.3759 |
| 0.1951 | 6.23 | 58200 | 0.4458 | 0.3616 |
| 0.1951 | 6.24 | 58300 | 0.4307 | 0.3637 |
| 0.1951 | 6.25 | 58400 | 0.4546 | 0.3621 |
| 0.1967 | 6.26 | 58500 | 0.4459 | 0.3623 |
| 0.1967 | 6.27 | 58600 | 0.4535 | 0.3690 |
| 0.1967 | 6.28 | 58700 | 0.4574 | 0.3771 |
| 0.1967 | 6.3 | 58800 | 0.4493 | 0.3744 |
| 0.1967 | 6.31 | 58900 | 0.4494 | 0.3769 |
| 0.1998 | 6.32 | 59000 | 0.4529 | 0.3644 |
| 0.1998 | 6.33 | 59100 | 0.4416 | 0.3662 |
| 0.1998 | 6.34 | 59200 | 0.4468 | 0.3785 |
| 0.1998 | 6.35 | 59300 | 0.4377 | 0.3664 |
| 0.1998 | 6.36 | 59400 | 0.4647 | 0.3755 |
| 0.2009 | 6.37 | 59500 | 0.4700 | 0.3824 |
| 0.2009 | 6.38 | 59600 | 0.4488 | 0.3685 |
| 0.2009 | 6.39 | 59700 | 0.4649 | 0.3804 |
| 0.2009 | 6.4 | 59800 | 0.4389 | 0.3689 |
| 0.2009 | 6.41 | 59900 | 0.4456 | 0.3531 |
| 0.2007 | 6.42 | 60000 | 0.4572 | 0.3658 |
| 0.2007 | 6.43 | 60100 | 0.4464 | 0.3669 |
| 0.2007 | 6.45 | 60200 | 0.4666 | 0.3711 |
| 0.2007 | 6.46 | 60300 | 0.4399 | 0.3660 |
| 0.2007 | 6.47 | 60400 | 0.4445 | 0.3631 |
| 0.2005 | 6.48 | 60500 | 0.4450 | 0.3621 |
| 0.2005 | 6.49 | 60600 | 0.4346 | 0.3571 |
| 0.2005 | 6.5 | 60700 | 0.4358 | 0.3581 |
| 0.2005 | 6.51 | 60800 | 0.4344 | 0.3646 |
| 0.2005 | 6.52 | 60900 | 0.4377 | 0.3621 |
| 0.2038 | 6.53 | 61000 | 0.4262 | 0.3570 |
| 0.2038 | 6.54 | 61100 | 0.4269 | 0.3614 |
| 0.2038 | 6.55 | 61200 | 0.4297 | 0.3592 |
| 0.2038 | 6.56 | 61300 | 0.4433 | 0.3682 |
| 0.2038 | 6.57 | 61400 | 0.4474 | 0.3644 |
| 0.199 | 6.58 | 61500 | 0.4464 | 0.3678 |
| 0.199 | 6.6 | 61600 | 0.4397 | 0.3562 |
| 0.199 | 6.61 | 61700 | 0.4415 | 0.3612 |
| 0.199 | 6.62 | 61800 | 0.4362 | 0.3601 |
| 0.199 | 6.63 | 61900 | 0.4442 | 0.3623 |
| 0.1995 | 6.64 | 62000 | 0.4558 | 0.3662 |
| 0.1995 | 6.65 | 62100 | 0.4477 | 0.3647 |
| 0.1995 | 6.66 | 62200 | 0.4542 | 0.3699 |
| 0.1995 | 6.67 | 62300 | 0.4411 | 0.3632 |
| 0.1995 | 6.68 | 62400 | 0.4408 | 0.3658 |
| 0.2014 | 6.69 | 62500 | 0.4426 | 0.3691 |
| 0.2014 | 6.7 | 62600 | 0.4246 | 0.3645 |
| 0.2014 | 6.71 | 62700 | 0.4466 | 0.3676 |
| 0.2014 | 6.72 | 62800 | 0.4493 | 0.3566 |
| 0.2014 | 6.73 | 62900 | 0.4336 | 0.3621 |
| 0.2015 | 6.75 | 63000 | 0.4367 | 0.3604 |
| 0.2015 | 6.76 | 63100 | 0.4424 | 0.3754 |
| 0.2015 | 6.77 | 63200 | 0.4679 | 0.3733 |
| 0.2015 | 6.78 | 63300 | 0.4483 | 0.3752 |
| 0.2015 | 6.79 | 63400 | 0.4746 | 0.3822 |
| 0.2048 | 6.8 | 63500 | 0.4340 | 0.3731 |
| 0.2048 | 6.81 | 63600 | 0.4346 | 0.3631 |
| 0.2048 | 6.82 | 63700 | 0.4525 | 0.3680 |
| 0.2048 | 6.83 | 63800 | 0.4360 | 0.3641 |
| 0.2048 | 6.84 | 63900 | 0.4299 | 0.3558 |
| 0.2017 | 6.85 | 64000 | 0.4370 | 0.3533 |
| 0.2017 | 6.86 | 64100 | 0.4293 | 0.3617 |
| 0.2017 | 6.87 | 64200 | 0.4431 | 0.3660 |
| 0.2017 | 6.88 | 64300 | 0.4362 | 0.3688 |
| 0.2017 | 6.9 | 64400 | 0.4507 | 0.3648 |
| 0.2045 | 6.91 | 64500 | 0.4439 | 0.3613 |
| 0.2045 | 6.92 | 64600 | 0.4249 | 0.3493 |
| 0.2045 | 6.93 | 64700 | 0.4362 | 0.3612 |
| 0.2045 | 6.94 | 64800 | 0.4336 | 0.3585 |
| 0.2045 | 6.95 | 64900 | 0.4387 | 0.3568 |
| 0.1977 | 6.96 | 65000 | 0.4313 | 0.3542 |
| 0.1977 | 6.97 | 65100 | 0.4287 | 0.3552 |
| 0.1977 | 6.98 | 65200 | 0.4372 | 0.3586 |
| 0.1977 | 6.99 | 65300 | 0.4378 | 0.3629 |
| 0.1977 | 7.0 | 65400 | 0.4518 | 0.3640 |
| 0.1971 | 7.01 | 65500 | 0.4480 | 0.3557 |
| 0.1971 | 7.02 | 65600 | 0.4530 | 0.3560 |
| 0.1971 | 7.03 | 65700 | 0.4581 | 0.3582 |
| 0.1971 | 7.04 | 65800 | 0.4492 | 0.3543 |
| 0.1971 | 7.06 | 65900 | 0.4448 | 0.3608 |
| 0.1672 | 7.07 | 66000 | 0.4469 | 0.3543 |
| 0.1672 | 7.08 | 66100 | 0.4262 | 0.3488 |
| 0.1672 | 7.09 | 66200 | 0.4289 | 0.3570 |
| 0.1672 | 7.1 | 66300 | 0.4455 | 0.3545 |
| 0.1672 | 7.11 | 66400 | 0.4449 | 0.3563 |
| 0.169 | 7.12 | 66500 | 0.4555 | 0.3565 |
| 0.169 | 7.13 | 66600 | 0.4432 | 0.3656 |
| 0.169 | 7.14 | 66700 | 0.4399 | 0.3610 |
| 0.169 | 7.15 | 66800 | 0.4383 | 0.3554 |
| 0.169 | 7.16 | 66900 | 0.4376 | 0.3536 |
| 0.1724 | 7.17 | 67000 | 0.4383 | 0.3572 |
| 0.1724 | 7.18 | 67100 | 0.4452 | 0.3535 |
| 0.1724 | 7.19 | 67200 | 0.4610 | 0.3668 |
| 0.1724 | 7.21 | 67300 | 0.4534 | 0.3546 |
| 0.1724 | 7.22 | 67400 | 0.4506 | 0.3604 |
| 0.1729 | 7.23 | 67500 | 0.4463 | 0.3507 |
| 0.1729 | 7.24 | 67600 | 0.4440 | 0.3630 |
| 0.1729 | 7.25 | 67700 | 0.4361 | 0.3550 |
| 0.1729 | 7.26 | 67800 | 0.4397 | 0.3643 |
| 0.1729 | 7.27 | 67900 | 0.4328 | 0.3548 |
| 0.1736 | 7.28 | 68000 | 0.4546 | 0.3614 |
| 0.1736 | 7.29 | 68100 | 0.4506 | 0.3558 |
| 0.1736 | 7.3 | 68200 | 0.4361 | 0.3513 |
| 0.1736 | 7.31 | 68300 | 0.4223 | 0.3500 |
| 0.1736 | 7.32 | 68400 | 0.4474 | 0.3497 |
| 0.1733 | 7.33 | 68500 | 0.4303 | 0.3549 |
| 0.1733 | 7.34 | 68600 | 0.4265 | 0.3483 |
| 0.1733 | 7.36 | 68700 | 0.4339 | 0.3558 |
| 0.1733 | 7.37 | 68800 | 0.4266 | 0.3491 |
| 0.1733 | 7.38 | 68900 | 0.4423 | 0.3565 |
| 0.1764 | 7.39 | 69000 | 0.4410 | 0.3554 |
| 0.1764 | 7.4 | 69100 | 0.4482 | 0.3703 |
| 0.1764 | 7.41 | 69200 | 0.4480 | 0.3641 |
| 0.1764 | 7.42 | 69300 | 0.4361 | 0.3500 |
| 0.1764 | 7.43 | 69400 | 0.4399 | 0.3632 |
| 0.1711 | 7.44 | 69500 | 0.4383 | 0.3591 |
| 0.1711 | 7.45 | 69600 | 0.4523 | 0.3636 |
| 0.1711 | 7.46 | 69700 | 0.4388 | 0.3502 |
| 0.1711 | 7.47 | 69800 | 0.4305 | 0.3565 |
| 0.1711 | 7.48 | 69900 | 0.4290 | 0.3538 |
| 0.1748 | 7.49 | 70000 | 0.4359 | 0.3511 |
| 0.1748 | 7.51 | 70100 | 0.4315 | 0.3460 |
| 0.1748 | 7.52 | 70200 | 0.4268 | 0.3555 |
| 0.1748 | 7.53 | 70300 | 0.4267 | 0.3455 |
| 0.1748 | 7.54 | 70400 | 0.4359 | 0.3517 |
| 0.1739 | 7.55 | 70500 | 0.4299 | 0.3491 |
| 0.1739 | 7.56 | 70600 | 0.4423 | 0.3409 |
| 0.1739 | 7.57 | 70700 | 0.4251 | 0.3420 |
| 0.1739 | 7.58 | 70800 | 0.4300 | 0.3414 |
| 0.1739 | 7.59 | 70900 | 0.4349 | 0.3422 |
| 0.1763 | 7.6 | 71000 | 0.4328 | 0.3418 |
| 0.1763 | 7.61 | 71100 | 0.4313 | 0.3452 |
| 0.1763 | 7.62 | 71200 | 0.4240 | 0.3534 |
| 0.1763 | 7.63 | 71300 | 0.4274 | 0.3474 |
| 0.1763 | 7.64 | 71400 | 0.4304 | 0.3467 |
| 0.171 | 7.66 | 71500 | 0.4331 | 0.3510 |
| 0.171 | 7.67 | 71600 | 0.4263 | 0.3478 |
| 0.171 | 7.68 | 71700 | 0.4301 | 0.3447 |
| 0.171 | 7.69 | 71800 | 0.4046 | 0.3452 |
| 0.171 | 7.7 | 71900 | 0.4300 | 0.3528 |
| 0.1792 | 7.71 | 72000 | 0.4253 | 0.3492 |
| 0.1792 | 7.72 | 72100 | 0.4296 | 0.3491 |
| 0.1792 | 7.73 | 72200 | 0.4118 | 0.3451 |
| 0.1792 | 7.74 | 72300 | 0.4348 | 0.3345 |
| 0.1792 | 7.75 | 72400 | 0.4283 | 0.3447 |
| 0.1801 | 7.76 | 72500 | 0.4232 | 0.3449 |
| 0.1801 | 7.77 | 72600 | 0.4491 | 0.3486 |
| 0.1801 | 7.78 | 72700 | 0.4261 | 0.3343 |
| 0.1801 | 7.79 | 72800 | 0.4382 | 0.3455 |
| 0.1801 | 7.81 | 72900 | 0.4301 | 0.3415 |
| 0.1731 | 7.82 | 73000 | 0.4236 | 0.3438 |
| 0.1731 | 7.83 | 73100 | 0.4257 | 0.3419 |
| 0.1731 | 7.84 | 73200 | 0.4368 | 0.3410 |
| 0.1731 | 7.85 | 73300 | 0.4207 | 0.3398 |
| 0.1731 | 7.86 | 73400 | 0.4118 | 0.3418 |
| 0.1748 | 7.87 | 73500 | 0.4357 | 0.3429 |
| 0.1748 | 7.88 | 73600 | 0.4277 | 0.3452 |
| 0.1748 | 7.89 | 73700 | 0.4173 | 0.3476 |
| 0.1748 | 7.9 | 73800 | 0.4191 | 0.3478 |
| 0.1748 | 7.91 | 73900 | 0.4197 | 0.3457 |
| 0.1745 | 7.92 | 74000 | 0.4197 | 0.3436 |
| 0.1745 | 7.93 | 74100 | 0.4253 | 0.3512 |
| 0.1745 | 7.94 | 74200 | 0.4217 | 0.3463 |
| 0.1745 | 7.95 | 74300 | 0.4305 | 0.3473 |
| 0.1745 | 7.97 | 74400 | 0.4215 | 0.3507 |
| 0.1743 | 7.98 | 74500 | 0.4127 | 0.3408 |
| 0.1743 | 7.99 | 74600 | 0.4191 | 0.3468 |
| 0.1743 | 8.0 | 74700 | 0.4381 | 0.3491 |
| 0.1743 | 8.01 | 74800 | 0.4510 | 0.3477 |
| 0.1743 | 8.02 | 74900 | 0.4482 | 0.3471 |
| 0.1588 | 8.03 | 75000 | 0.4471 | 0.3430 |
| 0.1588 | 8.04 | 75100 | 0.4296 | 0.3393 |
| 0.1588 | 8.05 | 75200 | 0.4480 | 0.3398 |
| 0.1588 | 8.06 | 75300 | 0.4302 | 0.3452 |
| 0.1588 | 8.07 | 75400 | 0.4410 | 0.3431 |
| 0.144 | 8.08 | 75500 | 0.4263 | 0.3455 |
| 0.144 | 8.09 | 75600 | 0.4523 | 0.3495 |
| 0.144 | 8.1 | 75700 | 0.4455 | 0.3511 |
| 0.144 | 8.12 | 75800 | 0.4379 | 0.3445 |
| 0.144 | 8.13 | 75900 | 0.4418 | 0.3411 |
| 0.1483 | 8.14 | 76000 | 0.4491 | 0.3463 |
| 0.1483 | 8.15 | 76100 | 0.4386 | 0.3467 |
| 0.1483 | 8.16 | 76200 | 0.4327 | 0.3524 |
| 0.1483 | 8.17 | 76300 | 0.4360 | 0.3613 |
| 0.1483 | 8.18 | 76400 | 0.4352 | 0.3498 |
| 0.1541 | 8.19 | 76500 | 0.4376 | 0.3414 |
| 0.1541 | 8.2 | 76600 | 0.4408 | 0.3464 |
| 0.1541 | 8.21 | 76700 | 0.4415 | 0.3445 |
| 0.1541 | 8.22 | 76800 | 0.4455 | 0.3482 |
| 0.1541 | 8.23 | 76900 | 0.4542 | 0.3415 |
| 0.1479 | 8.24 | 77000 | 0.4462 | 0.3426 |
| 0.1479 | 8.25 | 77100 | 0.4460 | 0.3413 |
| 0.1479 | 8.27 | 77200 | 0.4434 | 0.3375 |
| 0.1479 | 8.28 | 77300 | 0.4397 | 0.3473 |
| 0.1479 | 8.29 | 77400 | 0.4379 | 0.3484 |
| 0.1479 | 8.3 | 77500 | 0.4441 | 0.3494 |
| 0.1479 | 8.31 | 77600 | 0.4301 | 0.3466 |
| 0.1479 | 8.32 | 77700 | 0.4420 | 0.3474 |
| 0.1479 | 8.33 | 77800 | 0.4520 | 0.3589 |
| 0.1479 | 8.34 | 77900 | 0.4283 | 0.3482 |
| 0.1531 | 8.35 | 78000 | 0.4325 | 0.3446 |
| 0.1531 | 8.36 | 78100 | 0.4380 | 0.3469 |
| 0.1531 | 8.37 | 78200 | 0.4463 | 0.3503 |
| 0.1531 | 8.38 | 78300 | 0.4479 | 0.3499 |
| 0.1531 | 8.39 | 78400 | 0.4477 | 0.3529 |
| 0.1507 | 8.4 | 78500 | 0.4709 | 0.3551 |
| 0.1507 | 8.42 | 78600 | 0.4533 | 0.3531 |
| 0.1507 | 8.43 | 78700 | 0.4507 | 0.3522 |
| 0.1507 | 8.44 | 78800 | 0.4562 | 0.3583 |
| 0.1507 | 8.45 | 78900 | 0.4421 | 0.3577 |
| 0.1545 | 8.46 | 79000 | 0.4485 | 0.3547 |
| 0.1545 | 8.47 | 79100 | 0.4389 | 0.3465 |
| 0.1545 | 8.48 | 79200 | 0.4397 | 0.3502 |
| 0.1545 | 8.49 | 79300 | 0.4403 | 0.3471 |
| 0.1545 | 8.5 | 79400 | 0.4394 | 0.3482 |
| 0.153 | 8.51 | 79500 | 0.4393 | 0.3474 |
| 0.153 | 8.52 | 79600 | 0.4343 | 0.3495 |
| 0.153 | 8.53 | 79700 | 0.4395 | 0.3539 |
| 0.153 | 8.54 | 79800 | 0.4497 | 0.3535 |
| 0.153 | 8.55 | 79900 | 0.4443 | 0.3540 |
| 0.1558 | 8.57 | 80000 | 0.4495 | 0.3554 |
| 0.1558 | 8.58 | 80100 | 0.4387 | 0.3460 |
| 0.1558 | 8.59 | 80200 | 0.4378 | 0.3520 |
| 0.1558 | 8.6 | 80300 | 0.4446 | 0.3527 |
| 0.1558 | 8.61 | 80400 | 0.4513 | 0.3508 |
| 0.1527 | 8.62 | 80500 | 0.4396 | 0.3537 |
| 0.1527 | 8.63 | 80600 | 0.4405 | 0.3507 |
| 0.1527 | 8.64 | 80700 | 0.4398 | 0.3450 |
| 0.1527 | 8.65 | 80800 | 0.4458 | 0.3508 |
| 0.1527 | 8.66 | 80900 | 0.4380 | 0.3465 |
| 0.1522 | 8.67 | 81000 | 0.4373 | 0.3482 |
| 0.1522 | 8.68 | 81100 | 0.4363 | 0.3410 |
| 0.1522 | 8.69 | 81200 | 0.4290 | 0.3447 |
| 0.1522 | 8.7 | 81300 | 0.4409 | 0.3515 |
| 0.1522 | 8.72 | 81400 | 0.4363 | 0.3433 |
| 0.1502 | 8.73 | 81500 | 0.4313 | 0.3429 |
| 0.1502 | 8.74 | 81600 | 0.4263 | 0.3451 |
| 0.1502 | 8.75 | 81700 | 0.4297 | 0.3452 |
| 0.1502 | 8.76 | 81800 | 0.4449 | 0.3411 |
| 0.1502 | 8.77 | 81900 | 0.4465 | 0.3455 |
| 0.151 | 8.78 | 82000 | 0.4274 | 0.3425 |
| 0.151 | 8.79 | 82100 | 0.4525 | 0.3532 |
| 0.151 | 8.8 | 82200 | 0.4282 | 0.3502 |
| 0.151 | 8.81 | 82300 | 0.4189 | 0.3507 |
| 0.151 | 8.82 | 82400 | 0.4379 | 0.3451 |
| 0.1529 | 8.83 | 82500 | 0.4378 | 0.3419 |
| 0.1529 | 8.84 | 82600 | 0.4283 | 0.3392 |
| 0.1529 | 8.85 | 82700 | 0.4359 | 0.3399 |
| 0.1529 | 8.87 | 82800 | 0.4308 | 0.3358 |
| 0.1529 | 8.88 | 82900 | 0.4296 | 0.3335 |
| 0.151 | 8.89 | 83000 | 0.4387 | 0.3372 |
| 0.151 | 8.9 | 83100 | 0.4335 | 0.3420 |
| 0.151 | 8.91 | 83200 | 0.4329 | 0.3374 |
| 0.151 | 8.92 | 83300 | 0.4353 | 0.3404 |
| 0.151 | 8.93 | 83400 | 0.4384 | 0.3447 |
| 0.1522 | 8.94 | 83500 | 0.4444 | 0.3353 |
| 0.1522 | 8.95 | 83600 | 0.4413 | 0.3481 |
| 0.1522 | 8.96 | 83700 | 0.4247 | 0.3474 |
| 0.1522 | 8.97 | 83800 | 0.4197 | 0.3386 |
| 0.1522 | 8.98 | 83900 | 0.4216 | 0.3384 |
| 0.1511 | 8.99 | 84000 | 0.4159 | 0.3396 |
| 0.1511 | 9.0 | 84100 | 0.4213 | 0.3416 |
| 0.1511 | 9.01 | 84200 | 0.4399 | 0.3379 |
| 0.1511 | 9.03 | 84300 | 0.4318 | 0.3437 |
| 0.1511 | 9.04 | 84400 | 0.4356 | 0.3371 |
| 0.1336 | 9.05 | 84500 | 0.4403 | 0.3373 |
| 0.1336 | 9.06 | 84600 | 0.4545 | 0.3381 |
| 0.1336 | 9.07 | 84700 | 0.4313 | 0.3331 |
| 0.1336 | 9.08 | 84800 | 0.4257 | 0.3360 |
| 0.1336 | 9.09 | 84900 | 0.4285 | 0.3372 |
| 0.1315 | 9.1 | 85000 | 0.4378 | 0.3332 |
| 0.1315 | 9.11 | 85100 | 0.4352 | 0.3282 |
| 0.1315 | 9.12 | 85200 | 0.4360 | 0.3339 |
| 0.1315 | 9.13 | 85300 | 0.4404 | 0.3365 |
| 0.1315 | 9.14 | 85400 | 0.4345 | 0.3356 |
| 0.1272 | 9.15 | 85500 | 0.4468 | 0.3375 |
| 0.1272 | 9.16 | 85600 | 0.4331 | 0.3363 |
| 0.1272 | 9.18 | 85700 | 0.4330 | 0.3309 |
| 0.1272 | 9.19 | 85800 | 0.4424 | 0.3301 |
| 0.1272 | 9.2 | 85900 | 0.4520 | 0.3326 |
| 0.1289 | 9.21 | 86000 | 0.4421 | 0.3326 |
| 0.1289 | 9.22 | 86100 | 0.4480 | 0.3335 |
| 0.1289 | 9.23 | 86200 | 0.4351 | 0.3380 |
| 0.1289 | 9.24 | 86300 | 0.4350 | 0.3427 |
| 0.1289 | 9.25 | 86400 | 0.4362 | 0.3320 |
| 0.1333 | 9.26 | 86500 | 0.4260 | 0.3342 |
| 0.1333 | 9.27 | 86600 | 0.4357 | 0.3360 |
| 0.1333 | 9.28 | 86700 | 0.4505 | 0.3372 |
| 0.1333 | 9.29 | 86800 | 0.4342 | 0.3359 |
| 0.1333 | 9.3 | 86900 | 0.4295 | 0.3367 |
| 0.1318 | 9.31 | 87000 | 0.4320 | 0.3335 |
| 0.1318 | 9.33 | 87100 | 0.4332 | 0.3344 |
| 0.1318 | 9.34 | 87200 | 0.4373 | 0.3330 |
| 0.1318 | 9.35 | 87300 | 0.4490 | 0.3316 |
| 0.1318 | 9.36 | 87400 | 0.4188 | 0.3429 |
| 0.1275 | 9.37 | 87500 | 0.4502 | 0.3383 |
| 0.1275 | 9.38 | 87600 | 0.4463 | 0.3387 |
| 0.1275 | 9.39 | 87700 | 0.4385 | 0.3308 |
| 0.1275 | 9.4 | 87800 | 0.4464 | 0.3414 |
| 0.1275 | 9.41 | 87900 | 0.4563 | 0.3405 |
| 0.1331 | 9.42 | 88000 | 0.4286 | 0.3374 |
| 0.1331 | 9.43 | 88100 | 0.4389 | 0.3352 |
| 0.1331 | 9.44 | 88200 | 0.4301 | 0.3340 |
| 0.1331 | 9.45 | 88300 | 0.4417 | 0.3373 |
| 0.1331 | 9.46 | 88400 | 0.4450 | 0.3425 |
| 0.1266 | 9.48 | 88500 | 0.4456 | 0.3451 |
| 0.1266 | 9.49 | 88600 | 0.4517 | 0.3403 |
| 0.1266 | 9.5 | 88700 | 0.4447 | 0.3419 |
| 0.1266 | 9.51 | 88800 | 0.4486 | 0.3428 |
| 0.1266 | 9.52 | 88900 | 0.4591 | 0.3411 |
| 0.1316 | 9.53 | 89000 | 0.4481 | 0.3387 |
| 0.1316 | 9.54 | 89100 | 0.4308 | 0.3349 |
| 0.1316 | 9.55 | 89200 | 0.4411 | 0.3405 |
| 0.1316 | 9.56 | 89300 | 0.4378 | 0.3390 |
| 0.1316 | 9.57 | 89400 | 0.4448 | 0.3365 |
| 0.1325 | 9.58 | 89500 | 0.4575 | 0.3416 |
| 0.1325 | 9.59 | 89600 | 0.4608 | 0.3422 |
| 0.1325 | 9.6 | 89700 | 0.4396 | 0.3350 |
| 0.1325 | 9.61 | 89800 | 0.4380 | 0.3398 |
| 0.1325 | 9.63 | 89900 | 0.4337 | 0.3388 |
| 0.1324 | 9.64 | 90000 | 0.4376 | 0.3388 |
| 0.1324 | 9.65 | 90100 | 0.4185 | 0.3380 |
| 0.1324 | 9.66 | 90200 | 0.4394 | 0.3384 |
| 0.1324 | 9.67 | 90300 | 0.4472 | 0.3400 |
| 0.1324 | 9.68 | 90400 | 0.4523 | 0.3390 |
| 0.1361 | 9.69 | 90500 | 0.4466 | 0.3389 |
| 0.1361 | 9.7 | 90600 | 0.4414 | 0.3383 |
| 0.1361 | 9.71 | 90700 | 0.4288 | 0.3348 |
| 0.1361 | 9.72 | 90800 | 0.4445 | 0.3374 |
| 0.1361 | 9.73 | 90900 | 0.4252 | 0.3322 |
| 0.1353 | 9.74 | 91000 | 0.4312 | 0.3338 |
| 0.1353 | 9.75 | 91100 | 0.4326 | 0.3319 |
| 0.1353 | 9.76 | 91200 | 0.4212 | 0.3400 |
| 0.1353 | 9.78 | 91300 | 0.4191 | 0.3374 |
| 0.1353 | 9.79 | 91400 | 0.4399 | 0.3332 |
| 0.1308 | 9.8 | 91500 | 0.4340 | 0.3349 |
| 0.1308 | 9.81 | 91600 | 0.4280 | 0.3379 |
| 0.1308 | 9.82 | 91700 | 0.4419 | 0.3376 |
| 0.1308 | 9.83 | 91800 | 0.4309 | 0.3333 |
| 0.1308 | 9.84 | 91900 | 0.4274 | 0.3352 |
| 0.1321 | 9.85 | 92000 | 0.4147 | 0.3337 |
| 0.1321 | 9.86 | 92100 | 0.4252 | 0.3316 |
| 0.1321 | 9.87 | 92200 | 0.4378 | 0.3381 |
| 0.1321 | 9.88 | 92300 | 0.4265 | 0.3355 |
| 0.1321 | 9.89 | 92400 | 0.4247 | 0.3331 |
| 0.1358 | 9.9 | 92500 | 0.4099 | 0.3379 |
| 0.1358 | 9.91 | 92600 | 0.4142 | 0.3356 |
| 0.1358 | 9.93 | 92700 | 0.4220 | 0.3332 |
| 0.1358 | 9.94 | 92800 | 0.4219 | 0.3369 |
| 0.1358 | 9.95 | 92900 | 0.4178 | 0.3332 |
| 0.1331 | 9.96 | 93000 | 0.4305 | 0.3353 |
| 0.1331 | 9.97 | 93100 | 0.4324 | 0.3307 |
| 0.1331 | 9.98 | 93200 | 0.4315 | 0.3344 |
| 0.1331 | 9.99 | 93300 | 0.4212 | 0.3314 |
| 0.1331 | 10.0 | 93400 | 0.4203 | 0.3332 |
| 0.1304 | 10.01 | 93500 | 0.4424 | 0.3351 |
| 0.1304 | 10.02 | 93600 | 0.4474 | 0.3341 |
| 0.1304 | 10.03 | 93700 | 0.4466 | 0.3378 |
| 0.1304 | 10.04 | 93800 | 0.4388 | 0.3327 |
| 0.1304 | 10.05 | 93900 | 0.4312 | 0.3360 |
| 0.1152 | 10.06 | 94000 | 0.4471 | 0.3307 |
| 0.1152 | 10.07 | 94100 | 0.4472 | 0.3316 |
| 0.1152 | 10.09 | 94200 | 0.4462 | 0.3324 |
| 0.1152 | 10.1 | 94300 | 0.4383 | 0.3344 |
| 0.1152 | 10.11 | 94400 | 0.4671 | 0.3365 |
| 0.1097 | 10.12 | 94500 | 0.4596 | 0.3307 |
| 0.1097 | 10.13 | 94600 | 0.4517 | 0.3382 |
| 0.1097 | 10.14 | 94700 | 0.4285 | 0.3380 |
| 0.1097 | 10.15 | 94800 | 0.4628 | 0.3363 |
| 0.1097 | 10.16 | 94900 | 0.4478 | 0.3365 |
| 0.1153 | 10.17 | 95000 | 0.4464 | 0.3346 |
| 0.1153 | 10.18 | 95100 | 0.4432 | 0.3392 |
| 0.1153 | 10.19 | 95200 | 0.4326 | 0.3330 |
| 0.1153 | 10.2 | 95300 | 0.4480 | 0.3327 |
| 0.1153 | 10.21 | 95400 | 0.4436 | 0.3260 |
| 0.1149 | 10.22 | 95500 | 0.4549 | 0.3311 |
| 0.1149 | 10.24 | 95600 | 0.4573 | 0.3353 |
| 0.1149 | 10.25 | 95700 | 0.4373 | 0.3369 |
| 0.1149 | 10.26 | 95800 | 0.4459 | 0.3358 |
| 0.1149 | 10.27 | 95900 | 0.4288 | 0.3270 |
| 0.1169 | 10.28 | 96000 | 0.4474 | 0.3330 |
| 0.1169 | 10.29 | 96100 | 0.4524 | 0.3298 |
| 0.1169 | 10.3 | 96200 | 0.4517 | 0.3258 |
| 0.1169 | 10.31 | 96300 | 0.4366 | 0.3288 |
| 0.1169 | 10.32 | 96400 | 0.4574 | 0.3324 |
| 0.1137 | 10.33 | 96500 | 0.4507 | 0.3343 |
| 0.1137 | 10.34 | 96600 | 0.4414 | 0.3301 |
| 0.1137 | 10.35 | 96700 | 0.4524 | 0.3366 |
| 0.1137 | 10.36 | 96800 | 0.4563 | 0.3435 |
| 0.1137 | 10.37 | 96900 | 0.4315 | 0.3375 |
| 0.1162 | 10.39 | 97000 | 0.4429 | 0.3365 |
| 0.1162 | 10.4 | 97100 | 0.4489 | 0.3380 |
| 0.1162 | 10.41 | 97200 | 0.4352 | 0.3357 |
| 0.1162 | 10.42 | 97300 | 0.4390 | 0.3319 |
| 0.1162 | 10.43 | 97400 | 0.4570 | 0.3303 |
| 0.1151 | 10.44 | 97500 | 0.4692 | 0.3344 |
| 0.1151 | 10.45 | 97600 | 0.4605 | 0.3332 |
| 0.1151 | 10.46 | 97700 | 0.4457 | 0.3238 |
| 0.1151 | 10.47 | 97800 | 0.4298 | 0.3304 |
| 0.1151 | 10.48 | 97900 | 0.4619 | 0.3274 |
| 0.1105 | 10.49 | 98000 | 0.4362 | 0.3244 |
| 0.1105 | 10.5 | 98100 | 0.4568 | 0.3289 |
| 0.1105 | 10.51 | 98200 | 0.4522 | 0.3336 |
| 0.1105 | 10.52 | 98300 | 0.4302 | 0.3257 |
| 0.1105 | 10.54 | 98400 | 0.4505 | 0.3238 |
| 0.1164 | 10.55 | 98500 | 0.4430 | 0.3301 |
| 0.1164 | 10.56 | 98600 | 0.4575 | 0.3283 |
| 0.1164 | 10.57 | 98700 | 0.4447 | 0.3277 |
| 0.1164 | 10.58 | 98800 | 0.4400 | 0.3301 |
| 0.1164 | 10.59 | 98900 | 0.4427 | 0.3288 |
| 0.1113 | 10.6 | 99000 | 0.4538 | 0.3248 |
| 0.1113 | 10.61 | 99100 | 0.4519 | 0.3298 |
| 0.1113 | 10.62 | 99200 | 0.4290 | 0.3249 |
| 0.1113 | 10.63 | 99300 | 0.4501 | 0.3220 |
| 0.1113 | 10.64 | 99400 | 0.4410 | 0.3218 |
| 0.1159 | 10.65 | 99500 | 0.4478 | 0.3211 |
| 0.1159 | 10.66 | 99600 | 0.4462 | 0.3250 |
| 0.1159 | 10.67 | 99700 | 0.4543 | 0.3302 |
| 0.1159 | 10.69 | 99800 | 0.4462 | 0.3301 |
| 0.1159 | 10.7 | 99900 | 0.4468 | 0.3229 |
| 0.1161 | 10.71 | 100000 | 0.4515 | 0.3241 |
| 0.1161 | 10.72 | 100100 | 0.4404 | 0.3276 |
| 0.1161 | 10.73 | 100200 | 0.4439 | 0.3222 |
| 0.1161 | 10.74 | 100300 | 0.4392 | 0.3257 |
| 0.1161 | 10.75 | 100400 | 0.4476 | 0.3314 |
| 0.1199 | 10.76 | 100500 | 0.4493 | 0.3270 |
| 0.1199 | 10.77 | 100600 | 0.4462 | 0.3224 |
| 0.1199 | 10.78 | 100700 | 0.4467 | 0.3311 |
| 0.1199 | 10.79 | 100800 | 0.4198 | 0.3228 |
| 0.1199 | 10.8 | 100900 | 0.4349 | 0.3225 |
| 0.1146 | 10.81 | 101000 | 0.4371 | 0.3272 |
| 0.1146 | 10.82 | 101100 | 0.4525 | 0.3210 |
| 0.1146 | 10.84 | 101200 | 0.4293 | 0.3219 |
| 0.1146 | 10.85 | 101300 | 0.4238 | 0.3216 |
| 0.1146 | 10.86 | 101400 | 0.4377 | 0.3252 |
| 0.118 | 10.87 | 101500 | 0.4371 | 0.3208 |
| 0.118 | 10.88 | 101600 | 0.4216 | 0.3174 |
| 0.118 | 10.89 | 101700 | 0.4312 | 0.3189 |
| 0.118 | 10.9 | 101800 | 0.4317 | 0.3204 |
| 0.118 | 10.91 | 101900 | 0.4303 | 0.3235 |
| 0.114 | 10.92 | 102000 | 0.4416 | 0.3158 |
| 0.114 | 10.93 | 102100 | 0.4240 | 0.3195 |
| 0.114 | 10.94 | 102200 | 0.4340 | 0.3149 |
| 0.114 | 10.95 | 102300 | 0.4311 | 0.3215 |
| 0.114 | 10.96 | 102400 | 0.4261 | 0.3238 |
| 0.1152 | 10.97 | 102500 | 0.4263 | 0.3206 |
| 0.1152 | 10.98 | 102600 | 0.4325 | 0.3294 |
| 0.1152 | 11.0 | 102700 | 0.4327 | 0.3187 |
| 0.1152 | 11.01 | 102800 | 0.4423 | 0.3195 |
| 0.1152 | 11.02 | 102900 | 0.4341 | 0.3277 |
| 0.1084 | 11.03 | 103000 | 0.4232 | 0.3243 |
| 0.1084 | 11.04 | 103100 | 0.4355 | 0.3184 |
| 0.1084 | 11.05 | 103200 | 0.4374 | 0.3274 |
| 0.1084 | 11.06 | 103300 | 0.4484 | 0.3305 |
| 0.1084 | 11.07 | 103400 | 0.4423 | 0.3226 |
| 0.1003 | 11.08 | 103500 | 0.4518 | 0.3224 |
| 0.1003 | 11.09 | 103600 | 0.4518 | 0.3243 |
| 0.1003 | 11.1 | 103700 | 0.4282 | 0.3207 |
| 0.1003 | 11.11 | 103800 | 0.4418 | 0.3220 |
| 0.1003 | 11.12 | 103900 | 0.4411 | 0.3216 |
| 0.1009 | 11.13 | 104000 | 0.4474 | 0.3238 |
| 0.1009 | 11.15 | 104100 | 0.4406 | 0.3245 |
| 0.1009 | 11.16 | 104200 | 0.4384 | 0.3242 |
| 0.1009 | 11.17 | 104300 | 0.4702 | 0.3265 |
| 0.1009 | 11.18 | 104400 | 0.4611 | 0.3266 |
| 0.0992 | 11.19 | 104500 | 0.4425 | 0.3211 |
| 0.0992 | 11.2 | 104600 | 0.4575 | 0.3222 |
| 0.0992 | 11.21 | 104700 | 0.4449 | 0.3208 |
| 0.0992 | 11.22 | 104800 | 0.4715 | 0.3208 |
| 0.0992 | 11.23 | 104900 | 0.4469 | 0.3223 |
| 0.1021 | 11.24 | 105000 | 0.4536 | 0.3225 |
| 0.1021 | 11.25 | 105100 | 0.4629 | 0.3234 |
| 0.1021 | 11.26 | 105200 | 0.4550 | 0.3205 |
| 0.1021 | 11.27 | 105300 | 0.4598 | 0.3213 |
| 0.1021 | 11.28 | 105400 | 0.4522 | 0.3179 |
| 0.1021 | 11.3 | 105500 | 0.4658 | 0.3211 |
| 0.1021 | 11.31 | 105600 | 0.4664 | 0.3196 |
| 0.1021 | 11.32 | 105700 | 0.4736 | 0.3177 |
| 0.1021 | 11.33 | 105800 | 0.4587 | 0.3158 |
| 0.1021 | 11.34 | 105900 | 0.4589 | 0.3194 |
| 0.1025 | 11.35 | 106000 | 0.4692 | 0.3214 |
| 0.1025 | 11.36 | 106100 | 0.4382 | 0.3181 |
| 0.1025 | 11.37 | 106200 | 0.4556 | 0.3185 |
| 0.1025 | 11.38 | 106300 | 0.4445 | 0.3191 |
| 0.1025 | 11.39 | 106400 | 0.4379 | 0.3163 |
| 0.104 | 11.4 | 106500 | 0.4454 | 0.3220 |
| 0.104 | 11.41 | 106600 | 0.4463 | 0.3201 |
| 0.104 | 11.42 | 106700 | 0.4550 | 0.3173 |
| 0.104 | 11.43 | 106800 | 0.4404 | 0.3168 |
| 0.104 | 11.45 | 106900 | 0.4569 | 0.3170 |
| 0.1016 | 11.46 | 107000 | 0.4529 | 0.3168 |
| 0.1016 | 11.47 | 107100 | 0.4587 | 0.3173 |
| 0.1016 | 11.48 | 107200 | 0.4505 | 0.3172 |
| 0.1016 | 11.49 | 107300 | 0.4489 | 0.3159 |
| 0.1016 | 11.5 | 107400 | 0.4528 | 0.3130 |
| 0.1001 | 11.51 | 107500 | 0.4473 | 0.3181 |
| 0.1001 | 11.52 | 107600 | 0.4434 | 0.3176 |
| 0.1001 | 11.53 | 107700 | 0.4597 | 0.3186 |
| 0.1001 | 11.54 | 107800 | 0.4351 | 0.3159 |
| 0.1001 | 11.55 | 107900 | 0.4471 | 0.3185 |
| 0.1005 | 11.56 | 108000 | 0.4457 | 0.3191 |
| 0.1005 | 11.57 | 108100 | 0.4544 | 0.3293 |
| 0.1005 | 11.58 | 108200 | 0.4436 | 0.3221 |
| 0.1005 | 11.6 | 108300 | 0.4642 | 0.3270 |
| 0.1005 | 11.61 | 108400 | 0.4474 | 0.3270 |
| 0.1031 | 11.62 | 108500 | 0.4458 | 0.3196 |
| 0.1031 | 11.63 | 108600 | 0.4723 | 0.3205 |
| 0.1031 | 11.64 | 108700 | 0.4507 | 0.3226 |
| 0.1031 | 11.65 | 108800 | 0.4424 | 0.3213 |
| 0.1031 | 11.66 | 108900 | 0.4511 | 0.3213 |
| 0.1014 | 11.67 | 109000 | 0.4422 | 0.3205 |
| 0.1014 | 11.68 | 109100 | 0.4498 | 0.3180 |
| 0.1014 | 11.69 | 109200 | 0.4303 | 0.3167 |
| 0.1014 | 11.7 | 109300 | 0.4483 | 0.3108 |
| 0.1014 | 11.71 | 109400 | 0.4548 | 0.3169 |
| 0.0981 | 11.72 | 109500 | 0.4406 | 0.3122 |
| 0.0981 | 11.73 | 109600 | 0.4293 | 0.3114 |
| 0.0981 | 11.75 | 109700 | 0.4369 | 0.3159 |
| 0.0981 | 11.76 | 109800 | 0.4364 | 0.3164 |
| 0.0981 | 11.77 | 109900 | 0.4358 | 0.3189 |
| 0.1023 | 11.78 | 110000 | 0.4281 | 0.3183 |
| 0.1023 | 11.79 | 110100 | 0.4404 | 0.3159 |
| 0.1023 | 11.8 | 110200 | 0.4471 | 0.3135 |
| 0.1023 | 11.81 | 110300 | 0.4498 | 0.3201 |
| 0.1023 | 11.82 | 110400 | 0.4527 | 0.3161 |
| 0.0988 | 11.83 | 110500 | 0.4440 | 0.3173 |
| 0.0988 | 11.84 | 110600 | 0.4356 | 0.3136 |
| 0.0988 | 11.85 | 110700 | 0.4308 | 0.3135 |
| 0.0988 | 11.86 | 110800 | 0.4294 | 0.3192 |
| 0.0988 | 11.87 | 110900 | 0.4241 | 0.3168 |
| 0.1022 | 11.88 | 111000 | 0.4420 | 0.3157 |
| 0.1022 | 11.9 | 111100 | 0.4313 | 0.3125 |
| 0.1022 | 11.91 | 111200 | 0.4213 | 0.3168 |
| 0.1022 | 11.92 | 111300 | 0.4352 | 0.3135 |
| 0.1022 | 11.93 | 111400 | 0.4297 | 0.3116 |
| 0.1032 | 11.94 | 111500 | 0.4218 | 0.3137 |
| 0.1032 | 11.95 | 111600 | 0.4334 | 0.3123 |
| 0.1032 | 11.96 | 111700 | 0.4373 | 0.3175 |
| 0.1032 | 11.97 | 111800 | 0.4299 | 0.3160 |
| 0.1032 | 11.98 | 111900 | 0.4326 | 0.3189 |
| 0.0969 | 11.99 | 112000 | 0.4208 | 0.3186 |
| 0.0969 | 12.0 | 112100 | 0.4385 | 0.3169 |
| 0.0969 | 12.01 | 112200 | 0.4453 | 0.3156 |
| 0.0969 | 12.02 | 112300 | 0.4596 | 0.3133 |
| 0.0969 | 12.03 | 112400 | 0.4509 | 0.3093 |
| 0.0901 | 12.04 | 112500 | 0.4535 | 0.3138 |
| 0.0901 | 12.06 | 112600 | 0.4371 | 0.3144 |
| 0.0901 | 12.07 | 112700 | 0.4499 | 0.3154 |
| 0.0901 | 12.08 | 112800 | 0.4615 | 0.3198 |
| 0.0901 | 12.09 | 112900 | 0.4523 | 0.3177 |
| 0.0889 | 12.1 | 113000 | 0.4412 | 0.3130 |
| 0.0889 | 12.11 | 113100 | 0.4471 | 0.3181 |
| 0.0889 | 12.12 | 113200 | 0.4530 | 0.3169 |
| 0.0889 | 12.13 | 113300 | 0.4670 | 0.3149 |
| 0.0889 | 12.14 | 113400 | 0.4594 | 0.3141 |
| 0.0917 | 12.15 | 113500 | 0.4623 | 0.3127 |
| 0.0917 | 12.16 | 113600 | 0.4460 | 0.3133 |
| 0.0917 | 12.17 | 113700 | 0.4512 | 0.3191 |
| 0.0917 | 12.18 | 113800 | 0.4681 | 0.3136 |
| 0.0917 | 12.19 | 113900 | 0.4564 | 0.3129 |
| 0.0906 | 12.21 | 114000 | 0.4482 | 0.3107 |
| 0.0906 | 12.22 | 114100 | 0.4595 | 0.3133 |
| 0.0906 | 12.23 | 114200 | 0.4510 | 0.3118 |
| 0.0906 | 12.24 | 114300 | 0.4472 | 0.3131 |
| 0.0906 | 12.25 | 114400 | 0.4499 | 0.3130 |
| 0.0918 | 12.26 | 114500 | 0.4503 | 0.3138 |
| 0.0918 | 12.27 | 114600 | 0.4518 | 0.3135 |
| 0.0918 | 12.28 | 114700 | 0.4493 | 0.3114 |
| 0.0918 | 12.29 | 114800 | 0.4574 | 0.3133 |
| 0.0918 | 12.3 | 114900 | 0.4683 | 0.3200 |
| 0.0869 | 12.31 | 115000 | 0.4608 | 0.3165 |
| 0.0869 | 12.32 | 115100 | 0.4618 | 0.3183 |
| 0.0869 | 12.33 | 115200 | 0.4689 | 0.3173 |
| 0.0869 | 12.34 | 115300 | 0.4681 | 0.3224 |
| 0.0869 | 12.36 | 115400 | 0.4576 | 0.3231 |
| 0.0885 | 12.37 | 115500 | 0.4831 | 0.3176 |
| 0.0885 | 12.38 | 115600 | 0.4602 | 0.3181 |
| 0.0885 | 12.39 | 115700 | 0.4493 | 0.3168 |
| 0.0885 | 12.4 | 115800 | 0.4564 | 0.3149 |
| 0.0885 | 12.41 | 115900 | 0.4585 | 0.3158 |
| 0.091 | 12.42 | 116000 | 0.4713 | 0.3193 |
| 0.091 | 12.43 | 116100 | 0.4581 | 0.3139 |
| 0.091 | 12.44 | 116200 | 0.4637 | 0.3131 |
| 0.091 | 12.45 | 116300 | 0.4572 | 0.3124 |
| 0.091 | 12.46 | 116400 | 0.4489 | 0.3163 |
| 0.0886 | 12.47 | 116500 | 0.4679 | 0.3159 |
| 0.0886 | 12.48 | 116600 | 0.4712 | 0.3151 |
| 0.0886 | 12.49 | 116700 | 0.4750 | 0.3186 |
| 0.0886 | 12.51 | 116800 | 0.4673 | 0.3176 |
| 0.0886 | 12.52 | 116900 | 0.4601 | 0.3113 |
| 0.0917 | 12.53 | 117000 | 0.4341 | 0.3125 |
| 0.0917 | 12.54 | 117100 | 0.4462 | 0.3077 |
| 0.0917 | 12.55 | 117200 | 0.4502 | 0.3099 |
| 0.0917 | 12.56 | 117300 | 0.4482 | 0.3116 |
| 0.0917 | 12.57 | 117400 | 0.4459 | 0.3131 |
| 0.0881 | 12.58 | 117500 | 0.4464 | 0.3122 |
| 0.0881 | 12.59 | 117600 | 0.4471 | 0.3125 |
| 0.0881 | 12.6 | 117700 | 0.4319 | 0.3122 |
| 0.0881 | 12.61 | 117800 | 0.4421 | 0.3103 |
| 0.0881 | 12.62 | 117900 | 0.4326 | 0.3108 |
| 0.0913 | 12.63 | 118000 | 0.4414 | 0.3068 |
| 0.0913 | 12.64 | 118100 | 0.4421 | 0.3083 |
| 0.0913 | 12.66 | 118200 | 0.4449 | 0.3103 |
| 0.0913 | 12.67 | 118300 | 0.4380 | 0.3128 |
| 0.0913 | 12.68 | 118400 | 0.4390 | 0.3136 |
| 0.0921 | 12.69 | 118500 | 0.4452 | 0.3104 |
| 0.0921 | 12.7 | 118600 | 0.4378 | 0.3122 |
| 0.0921 | 12.71 | 118700 | 0.4459 | 0.3080 |
| 0.0921 | 12.72 | 118800 | 0.4369 | 0.3051 |
| 0.0921 | 12.73 | 118900 | 0.4474 | 0.3076 |
| 0.0886 | 12.74 | 119000 | 0.4508 | 0.3066 |
| 0.0886 | 12.75 | 119100 | 0.4456 | 0.3097 |
| 0.0886 | 12.76 | 119200 | 0.4503 | 0.3078 |
| 0.0886 | 12.77 | 119300 | 0.4460 | 0.3081 |
| 0.0886 | 12.78 | 119400 | 0.4404 | 0.3080 |
| 0.0897 | 12.79 | 119500 | 0.4351 | 0.3100 |
| 0.0897 | 12.81 | 119600 | 0.4446 | 0.3120 |
| 0.0897 | 12.82 | 119700 | 0.4407 | 0.3098 |
| 0.0897 | 12.83 | 119800 | 0.4406 | 0.3084 |
| 0.0897 | 12.84 | 119900 | 0.4492 | 0.3067 |
| 0.09 | 12.85 | 120000 | 0.4546 | 0.3098 |
| 0.09 | 12.86 | 120100 | 0.4547 | 0.3074 |
| 0.09 | 12.87 | 120200 | 0.4517 | 0.3111 |
| 0.09 | 12.88 | 120300 | 0.4320 | 0.3064 |
| 0.09 | 12.89 | 120400 | 0.4294 | 0.3072 |
| 0.0898 | 12.9 | 120500 | 0.4412 | 0.3050 |
| 0.0898 | 12.91 | 120600 | 0.4254 | 0.3074 |
| 0.0898 | 12.92 | 120700 | 0.4409 | 0.3071 |
| 0.0898 | 12.93 | 120800 | 0.4362 | 0.3071 |
| 0.0898 | 12.94 | 120900 | 0.4579 | 0.3090 |
| 0.0892 | 12.95 | 121000 | 0.4492 | 0.3059 |
| 0.0892 | 12.97 | 121100 | 0.4404 | 0.3105 |
| 0.0892 | 12.98 | 121200 | 0.4365 | 0.3066 |
| 0.0892 | 12.99 | 121300 | 0.4368 | 0.3048 |
| 0.0892 | 13.0 | 121400 | 0.4410 | 0.3033 |
| 0.085 | 13.01 | 121500 | 0.4450 | 0.3047 |
| 0.085 | 13.02 | 121600 | 0.4633 | 0.3013 |
| 0.085 | 13.03 | 121700 | 0.4600 | 0.3054 |
| 0.085 | 13.04 | 121800 | 0.4541 | 0.3047 |
| 0.085 | 13.05 | 121900 | 0.4546 | 0.3058 |
| 0.0791 | 13.06 | 122000 | 0.4536 | 0.3045 |
| 0.0791 | 13.07 | 122100 | 0.4589 | 0.3066 |
| 0.0791 | 13.08 | 122200 | 0.4581 | 0.3057 |
| 0.0791 | 13.09 | 122300 | 0.4546 | 0.3048 |
| 0.0791 | 13.1 | 122400 | 0.4673 | 0.3006 |
| 0.0789 | 13.12 | 122500 | 0.4551 | 0.3019 |
| 0.0789 | 13.13 | 122600 | 0.4467 | 0.3025 |
| 0.0789 | 13.14 | 122700 | 0.4593 | 0.3015 |
| 0.0789 | 13.15 | 122800 | 0.4598 | 0.3037 |
| 0.0789 | 13.16 | 122900 | 0.4532 | 0.3038 |
| 0.077 | 13.17 | 123000 | 0.4607 | 0.3015 |
| 0.077 | 13.18 | 123100 | 0.4385 | 0.3005 |
| 0.077 | 13.19 | 123200 | 0.4590 | 0.3041 |
| 0.077 | 13.2 | 123300 | 0.4359 | 0.3047 |
| 0.077 | 13.21 | 123400 | 0.4458 | 0.3039 |
| 0.0771 | 13.22 | 123500 | 0.4506 | 0.3075 |
| 0.0771 | 13.23 | 123600 | 0.4457 | 0.3079 |
| 0.0771 | 13.24 | 123700 | 0.4448 | 0.3048 |
| 0.0771 | 13.25 | 123800 | 0.4398 | 0.3036 |
| 0.0771 | 13.27 | 123900 | 0.4510 | 0.3055 |
| 0.0804 | 13.28 | 124000 | 0.4507 | 0.3059 |
| 0.0804 | 13.29 | 124100 | 0.4544 | 0.3076 |
| 0.0804 | 13.3 | 124200 | 0.4534 | 0.3073 |
| 0.0804 | 13.31 | 124300 | 0.4441 | 0.3061 |
| 0.0804 | 13.32 | 124400 | 0.4391 | 0.3075 |
| 0.0774 | 13.33 | 124500 | 0.4527 | 0.3063 |
| 0.0774 | 13.34 | 124600 | 0.4638 | 0.3057 |
| 0.0774 | 13.35 | 124700 | 0.4541 | 0.3064 |
| 0.0774 | 13.36 | 124800 | 0.4617 | 0.3078 |
| 0.0774 | 13.37 | 124900 | 0.4584 | 0.3041 |
| 0.0795 | 13.38 | 125000 | 0.4663 | 0.3032 |
| 0.0795 | 13.39 | 125100 | 0.4546 | 0.3025 |
| 0.0795 | 13.4 | 125200 | 0.4616 | 0.3021 |
| 0.0795 | 13.42 | 125300 | 0.4603 | 0.3016 |
| 0.0795 | 13.43 | 125400 | 0.4616 | 0.3040 |
| 0.0791 | 13.44 | 125500 | 0.4548 | 0.3021 |
| 0.0791 | 13.45 | 125600 | 0.4560 | 0.3025 |
| 0.0791 | 13.46 | 125700 | 0.4516 | 0.3037 |
| 0.0791 | 13.47 | 125800 | 0.4500 | 0.3013 |
| 0.0791 | 13.48 | 125900 | 0.4540 | 0.3009 |
| 0.0776 | 13.49 | 126000 | 0.4581 | 0.3026 |
| 0.0776 | 13.5 | 126100 | 0.4598 | 0.3028 |
| 0.0776 | 13.51 | 126200 | 0.4587 | 0.3038 |
| 0.0776 | 13.52 | 126300 | 0.4514 | 0.3024 |
| 0.0776 | 13.53 | 126400 | 0.4495 | 0.3036 |
| 0.0793 | 13.54 | 126500 | 0.4556 | 0.3016 |
| 0.0793 | 13.55 | 126600 | 0.4603 | 0.3025 |
| 0.0793 | 13.57 | 126700 | 0.4496 | 0.2995 |
| 0.0793 | 13.58 | 126800 | 0.4483 | 0.2969 |
| 0.0793 | 13.59 | 126900 | 0.4462 | 0.2980 |
| 0.0816 | 13.6 | 127000 | 0.4521 | 0.2982 |
| 0.0816 | 13.61 | 127100 | 0.4580 | 0.3019 |
| 0.0816 | 13.62 | 127200 | 0.4669 | 0.3009 |
| 0.0816 | 13.63 | 127300 | 0.4513 | 0.3017 |
| 0.0816 | 13.64 | 127400 | 0.4602 | 0.3015 |
| 0.0779 | 13.65 | 127500 | 0.4592 | 0.2998 |
| 0.0779 | 13.66 | 127600 | 0.4700 | 0.2981 |
| 0.0779 | 13.67 | 127700 | 0.4727 | 0.2978 |
| 0.0779 | 13.68 | 127800 | 0.4600 | 0.2983 |
| 0.0779 | 13.69 | 127900 | 0.4472 | 0.2978 |
| 0.0779 | 13.7 | 128000 | 0.4483 | 0.2984 |
| 0.0779 | 13.72 | 128100 | 0.4512 | 0.2968 |
| 0.0779 | 13.73 | 128200 | 0.4549 | 0.2988 |
| 0.0779 | 13.74 | 128300 | 0.4576 | 0.2992 |
| 0.0779 | 13.75 | 128400 | 0.4400 | 0.2974 |
| 0.0793 | 13.76 | 128500 | 0.4433 | 0.3009 |
| 0.0793 | 13.77 | 128600 | 0.4456 | 0.2982 |
| 0.0793 | 13.78 | 128700 | 0.4560 | 0.3019 |
| 0.0793 | 13.79 | 128800 | 0.4551 | 0.3008 |
| 0.0793 | 13.8 | 128900 | 0.4513 | 0.3007 |
| 0.0769 | 13.81 | 129000 | 0.4518 | 0.3008 |
| 0.0769 | 13.82 | 129100 | 0.4567 | 0.2981 |
| 0.0769 | 13.83 | 129200 | 0.4437 | 0.2985 |
| 0.0769 | 13.84 | 129300 | 0.4424 | 0.2970 |
| 0.0769 | 13.85 | 129400 | 0.4423 | 0.3010 |
| 0.0785 | 13.87 | 129500 | 0.4495 | 0.2999 |
| 0.0785 | 13.88 | 129600 | 0.4483 | 0.2975 |
| 0.0785 | 13.89 | 129700 | 0.4485 | 0.2982 |
| 0.0785 | 13.9 | 129800 | 0.4429 | 0.2972 |
| 0.0785 | 13.91 | 129900 | 0.4430 | 0.2958 |
| 0.0792 | 13.92 | 130000 | 0.4495 | 0.2954 |
| 0.0792 | 13.93 | 130100 | 0.4485 | 0.2947 |
| 0.0792 | 13.94 | 130200 | 0.4395 | 0.2972 |
| 0.0792 | 13.95 | 130300 | 0.4379 | 0.2973 |
| 0.0792 | 13.96 | 130400 | 0.4428 | 0.2989 |
| 0.0795 | 13.97 | 130500 | 0.4385 | 0.3000 |
| 0.0795 | 13.98 | 130600 | 0.4490 | 0.2983 |
| 0.0795 | 13.99 | 130700 | 0.4568 | 0.2970 |
| 0.0795 | 14.0 | 130800 | 0.4482 | 0.2963 |
| 0.0795 | 14.01 | 130900 | 0.4479 | 0.2962 |
| 0.075 | 14.03 | 131000 | 0.4565 | 0.2968 |
| 0.075 | 14.04 | 131100 | 0.4623 | 0.2962 |
| 0.075 | 14.05 | 131200 | 0.4617 | 0.2965 |
| 0.075 | 14.06 | 131300 | 0.4687 | 0.2949 |
| 0.075 | 14.07 | 131400 | 0.4718 | 0.2929 |
| 0.0709 | 14.08 | 131500 | 0.4720 | 0.2945 |
| 0.0709 | 14.09 | 131600 | 0.4604 | 0.2953 |
| 0.0709 | 14.1 | 131700 | 0.4655 | 0.2955 |
| 0.0709 | 14.11 | 131800 | 0.4695 | 0.2958 |
| 0.0709 | 14.12 | 131900 | 0.4666 | 0.2945 |
| 0.0705 | 14.13 | 132000 | 0.4605 | 0.2959 |
| 0.0705 | 14.14 | 132100 | 0.4581 | 0.2947 |
| 0.0705 | 14.15 | 132200 | 0.4597 | 0.2948 |
| 0.0705 | 14.16 | 132300 | 0.4612 | 0.2943 |
| 0.0705 | 14.18 | 132400 | 0.4611 | 0.2959 |
| 0.0727 | 14.19 | 132500 | 0.4569 | 0.2958 |
| 0.0727 | 14.2 | 132600 | 0.4556 | 0.2951 |
| 0.0727 | 14.21 | 132700 | 0.4597 | 0.2955 |
| 0.0727 | 14.22 | 132800 | 0.4472 | 0.2935 |
| 0.0727 | 14.23 | 132900 | 0.4573 | 0.2943 |
| 0.0723 | 14.24 | 133000 | 0.4572 | 0.2943 |
| 0.0723 | 14.25 | 133100 | 0.4582 | 0.2956 |
| 0.0723 | 14.26 | 133200 | 0.4599 | 0.2968 |
| 0.0723 | 14.27 | 133300 | 0.4633 | 0.2962 |
| 0.0723 | 14.28 | 133400 | 0.4604 | 0.2972 |
| 0.071 | 14.29 | 133500 | 0.4587 | 0.2971 |
| 0.071 | 14.3 | 133600 | 0.4598 | 0.2973 |
| 0.071 | 14.31 | 133700 | 0.4579 | 0.2976 |
| 0.071 | 14.33 | 133800 | 0.4539 | 0.2969 |
| 0.071 | 14.34 | 133900 | 0.4628 | 0.2961 |
| 0.0703 | 14.35 | 134000 | 0.4627 | 0.2974 |
| 0.0703 | 14.36 | 134100 | 0.4611 | 0.2974 |
| 0.0703 | 14.37 | 134200 | 0.4607 | 0.2977 |
| 0.0703 | 14.38 | 134300 | 0.4638 | 0.2983 |
| 0.0703 | 14.39 | 134400 | 0.4628 | 0.2969 |
| 0.0736 | 14.4 | 134500 | 0.4543 | 0.2965 |
| 0.0736 | 14.41 | 134600 | 0.4585 | 0.2963 |
| 0.0736 | 14.42 | 134700 | 0.4636 | 0.2950 |
| 0.0736 | 14.43 | 134800 | 0.4636 | 0.2964 |
| 0.0736 | 14.44 | 134900 | 0.4630 | 0.2958 |
| 0.0715 | 14.45 | 135000 | 0.4611 | 0.2968 |
| 0.0715 | 14.46 | 135100 | 0.4633 | 0.2966 |
| 0.0715 | 14.48 | 135200 | 0.4664 | 0.2954 |
| 0.0715 | 14.49 | 135300 | 0.4670 | 0.2945 |
| 0.0715 | 14.5 | 135400 | 0.4638 | 0.2961 |
| 0.073 | 14.51 | 135500 | 0.4635 | 0.2965 |
| 0.073 | 14.52 | 135600 | 0.4639 | 0.2956 |
| 0.073 | 14.53 | 135700 | 0.4617 | 0.2948 |
| 0.073 | 14.54 | 135800 | 0.4609 | 0.2933 |
| 0.073 | 14.55 | 135900 | 0.4614 | 0.2947 |
| 0.0717 | 14.56 | 136000 | 0.4567 | 0.2958 |
| 0.0717 | 14.57 | 136100 | 0.4615 | 0.2934 |
| 0.0717 | 14.58 | 136200 | 0.4606 | 0.2929 |
| 0.0717 | 14.59 | 136300 | 0.4652 | 0.2934 |
| 0.0717 | 14.6 | 136400 | 0.4664 | 0.2934 |
| 0.0717 | 14.61 | 136500 | 0.4657 | 0.2923 |
| 0.0717 | 14.63 | 136600 | 0.4633 | 0.2931 |
| 0.0717 | 14.64 | 136700 | 0.4624 | 0.2943 |
| 0.0717 | 14.65 | 136800 | 0.4615 | 0.2949 |
| 0.0717 | 14.66 | 136900 | 0.4619 | 0.2930 |
| 0.0707 | 14.67 | 137000 | 0.4608 | 0.2936 |
| 0.0707 | 14.68 | 137100 | 0.4615 | 0.2945 |
| 0.0707 | 14.69 | 137200 | 0.4605 | 0.2941 |
| 0.0707 | 14.7 | 137300 | 0.4598 | 0.2931 |
| 0.0707 | 14.71 | 137400 | 0.4596 | 0.2943 |
| 0.0694 | 14.72 | 137500 | 0.4624 | 0.2927 |
| 0.0694 | 14.73 | 137600 | 0.4614 | 0.2931 |
| 0.0694 | 14.74 | 137700 | 0.4621 | 0.2924 |
| 0.0694 | 14.75 | 137800 | 0.4589 | 0.2920 |
| 0.0694 | 14.76 | 137900 | 0.4590 | 0.2926 |
| 0.0706 | 14.78 | 138000 | 0.4588 | 0.2931 |
| 0.0706 | 14.79 | 138100 | 0.4583 | 0.2928 |
| 0.0706 | 14.8 | 138200 | 0.4552 | 0.2934 |
| 0.0706 | 14.81 | 138300 | 0.4551 | 0.2923 |
| 0.0706 | 14.82 | 138400 | 0.4555 | 0.2927 |
| 0.0717 | 14.83 | 138500 | 0.4547 | 0.2930 |
| 0.0717 | 14.84 | 138600 | 0.4546 | 0.2930 |
| 0.0717 | 14.85 | 138700 | 0.4553 | 0.2934 |
| 0.0717 | 14.86 | 138800 | 0.4554 | 0.2924 |
| 0.0717 | 14.87 | 138900 | 0.4573 | 0.2924 |
| 0.0722 | 14.88 | 139000 | 0.4582 | 0.2927 |
| 0.0722 | 14.89 | 139100 | 0.4586 | 0.2926 |
| 0.0722 | 14.9 | 139200 | 0.4570 | 0.2926 |
| 0.0722 | 14.91 | 139300 | 0.4571 | 0.2923 |
| 0.0722 | 14.93 | 139400 | 0.4564 | 0.2925 |
| 0.0698 | 14.94 | 139500 | 0.4573 | 0.2927 |
| 0.0698 | 14.95 | 139600 | 0.4574 | 0.2927 |
| 0.0698 | 14.96 | 139700 | 0.4573 | 0.2927 |
| 0.0698 | 14.97 | 139800 | 0.4576 | 0.2921 |
| 0.0698 | 14.98 | 139900 | 0.4578 | 0.2923 |
| 0.0705 | 14.99 | 140000 | 0.4579 | 0.2928 |
| 0.0705 | 15.0 | 140100 | 0.4578 | 0.2927 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"language": ["sv-SE"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "model-index": [{"name": "wav2vec2-speechdat", "results": []}]} | birgermoell/wav2vec2-speechdat | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | birgermoell/wav2vec2-swe-asr-large | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish speech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. The training data amounts to 402 MB.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model over the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.91 %
## Training
The Common Voice `train` and `validation` splits were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1KkD4PeZwnIwxxxOP1bUE7XTZMK7-SzRj?usp=sharing).
| {"language": "sv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Swedish by Birger Moell", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sv-SE", "type": "common_voice", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 36.91, "name": "Test WER"}]}]}]} | birgermoell/wav2vec2-swedish-common-voice | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sv",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 530615016
- CO2 Emissions (in grams): 2.2247356264808964
## Validation Metrics
- Loss: 0.7859578132629395
- Accuracy: 0.676854818831649
- Macro F1: 0.3297126297995653
- Micro F1: 0.676854818831649
- Weighted F1: 0.6429522696884535
- Macro Precision: 0.33152557743856437
- Micro Precision: 0.676854818831649
- Weighted Precision: 0.6276125515413322
- Macro Recall: 0.33784302289888885
- Micro Recall: 0.676854818831649
- Weighted Recall: 0.676854818831649
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/bitmorse/autonlp-ks-530615016
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bitmorse/autonlp-ks-530615016", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
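# The lines below are our own addition, sketching how to read off the predicted
# label; the label names come from the model's AutoNLP-generated config.
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])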
``` | {"language": "en", "tags": "autonlp", "datasets": ["bitmorse/autonlp-data-ks"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 2.2247356264808964} | bitmorse/autonlp-ks-530615016 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:bitmorse/autonlp-data-ks",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
feature-extraction | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kickstarter-distilbert-model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "kickstarter-distilbert-model", "results": []}]} | bitmorse/kickstarter-distilbert-model | null | [
"transformers",
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | bitsanlp/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers |
# AlephBERT
## Hebrew Language Model
State-of-the-art language model for Hebrew.
Based on Google's BERT architecture [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805).
#### How to use
```python
from transformers import BertModel, BertTokenizerFast
alephbert_tokenizer = BertTokenizerFast.from_pretrained('onlplab/alephbert-base')
alephbert = BertModel.from_pretrained('onlplab/alephbert-base')
# if not finetuning - disable dropout
alephbert.eval()
```
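With the model loaded, extracting contextual embeddings follows the standard `BertModel` API. The snippet below is our own minimal sketch (the example sentence and the mean-pooling step are illustrative assumptions, not part of the original card):
```python
import torch

# Encode a Hebrew sentence (arbitrary example text)
inputs = alephbert_tokenizer('שלום עולם', return_tensors='pt')
with torch.no_grad():
    outputs = alephbert(**inputs)

# Contextual token embeddings: (batch, seq_len, hidden_size)
token_embeddings = outputs.last_hidden_state
# One simple way to get a sentence vector is mean pooling over tokens
sentence_embedding = token_embeddings.mean(dim=1)
print(sentence_embedding.shape)  # e.g. torch.Size([1, 768]) for the base model
```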
## Training data
1. OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/) Hebrew section (10 GB text, 20 million sentences).
2. Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/hewiki/latest/) (650 MB text, 3 million sentences).
3. Hebrew Tweets collected from the Twitter sample stream (7 GB text, 70 million sentences).
## Training procedure
Trained on a DGX machine (8 V100 GPUs) using the standard Hugging Face training procedure.
Since the larger part of our training data is based on tweets, we decided to start by optimizing using the Masked Language Model loss only.
To optimize training time, we split the data into 4 sections based on the max number of tokens:
1. num tokens < 32 (70M sentences)
2. 32 <= num tokens < 64 (12M sentences)
3. 64 <= num tokens < 128 (10M sentences)
4. 128 <= num tokens < 512 (1.5M sentences)
Each section was first trained for 5 epochs with an initial learning rate set to 1e-4. Then each section was trained for another 5 epochs with an initial learning rate set to 1e-5, for a total of 10 epochs.
Total training time was 8 days.
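As a concrete illustration of the length-based split, here is a rough sketch (our own code, not the authors') of bucketing sentences into the four sections, reusing the tokenizer from the usage snippet above:
```python
def bucket_by_length(sentences, tokenizer):
    """Assign each sentence to the smallest section whose token bound fits it."""
    buckets = {32: [], 64: [], 128: [], 512: []}
    for sent in sentences:
        n_tokens = len(tokenizer.tokenize(sent))
        for bound in (32, 64, 128, 512):
            if n_tokens < bound:
                buckets[bound].append(sent)
                break  # sentences with 512+ tokens are left out
    return buckets

buckets = bucket_by_length(['שלום עולם'], alephbert_tokenizer)
```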
| {"language": ["he"], "license": "apache-2.0", "tags": ["language model"], "datasets": ["oscar", "wikipedia", "twitter"]} | biu-nlp/alephbert-base | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"language model",
"he",
"dataset:oscar",
"dataset:wikipedia",
"dataset:twitter",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
fill-mask | transformers |
# Cross-Document Language Modeling
CDLM: Cross-Document Language Modeling.
Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew E Peters, Arie Cattan and Ido Dagan. In EMNLP Findings, 2021. [PDF](https://arxiv.org/pdf/2101.00406.pdf)
Please note that during our pretraining we used the document and sentence separators, which you might want to add to your data. The document and sentence separators are `<doc-s>`, `</doc-s>` (the last two tokens in the vocabulary), and `<s>`, `</s>`, respectively.
```python
from transformers import AutoTokenizer, AutoModel
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('biu-nlp/cdlm')
model = AutoModel.from_pretrained('biu-nlp/cdlm')
```
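Inputs for cross-document tasks can then be assembled with the separators described above. The snippet below is a minimal sketch of the format (our own illustration; see the repo for task-specific details such as Longformer global-attention settings):
```python
doc1 = "First document text."
doc2 = "Second document text."

# Wrap each document with the document separators (the sentence separators
# <s>/</s> are added by the tokenizer itself)
text = f"<doc-s> {doc1} </doc-s> <doc-s> {doc2} </doc-s>"

inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```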
The original repo is [here](https://github.com/aviclu/CDLM).
If you find our work useful, please cite the paper as:
```python
@article{caciularu2021cross,
title={Cross-Document Language Modeling},
author={Caciularu, Avi and Cohan, Arman and Beltagy, Iz and Peters, Matthew E and Cattan, Arie and Dagan, Ido},
journal={Findings of the Association for Computational Linguistics: EMNLP 2021},
year={2021}
}
``` | {"language": "en", "license": "apache-2.0", "tags": ["longformer", "cdlm"], "inference": false} | biu-nlp/cdlm | null | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"cdlm",
"en",
"arxiv:2101.00406",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# SuperPAL model
Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline
Ori Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan, 2021. [PDF](https://arxiv.org/pdf/2009.00590)
**How to use?**
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("biu-nlp/superpal")
model = AutoModelForSequenceClassification.from_pretrained("biu-nlp/superpal")
```
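Since SuperPAL is a sequence-pair classifier over proposition pairs, scoring an alignment candidate looks roughly like the sketch below (our own illustration; check the repo for the exact input format and label order):
```python
import torch

summary_prop = "Prime Minister Hun Sen insisted that talks take place in Cambodia."
source_prop = "Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country."

# Encode the proposition pair; the model scores whether they are aligned
inputs = tokenizer(summary_prop, source_prop, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # alignment probabilities
```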
The original repo is [here](https://github.com/oriern/SuperPAL).
If you find our work useful, please cite the paper as:
```python
@inproceedings{ernst-etal-2021-summary,
title = "Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline",
author = "Ernst, Ori and Shapira, Ori and Pasunuru, Ramakanth and Lepioshkin, Michael and Goldberger, Jacob and Bansal, Mohit and Dagan, Ido",
booktitle = "Proceedings of the 25th Conference on Computational Natural Language Learning",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.conll-1.25",
pages = "310--322"
}
``` | {"widget": [{"text": "Prime Minister Hun Sen insisted that talks take place in Cambodia. </s><s> Cambodian leader Hun Sen rejected opposition parties' demands for talks outside the country."}]} | biu-nlp/superpal | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:2009.00590",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | bjkim/bert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bjkim/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | bjornvandijkman/testing | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |