modelId (string, length 4-81) | tags (sequence) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (unknown) | card (string, length 51-438k) |
---|---|---|---|---|---|---|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16,451 | null | |
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 18 | null | |
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 71 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer intermediate model (B6-6-6 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `intermediate` model in that case.
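To make the pooling behaviour concrete, here is a small added sketch (not part of the original card) that compares the number of input positions with the number of positions in the returned hidden states for this decoder-less checkpoint:
```python
import torch
from transformers import FunnelTokenizer, FunnelBaseModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

# The encoder pools the sequence twice (factor 2 each time), so the hidden
# states cover roughly one fourth of the input positions.
print("input positions: ", encoded_input["input_ids"].shape[1])
print("output positions:", output.last_hidden_state.shape[1])
```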
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/intermediate-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 73 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer intermediate model (B6-6-6 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
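As an added illustration of that last point (reusing the encoder's features with a separate, standard classifier), the sketch below mean-pools the hidden states and fits a scikit-learn model; the toy texts, labels, and the mean-pooling choice are placeholders, not part of the original card:
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import FunnelTokenizer, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = FunnelModel.from_pretrained("funnel-transformer/intermediate")
model.eval()

texts = ["I loved this film.", "This was a waste of time."]  # toy data
labels = [1, 0]

features = []
with torch.no_grad():
    for text in texts:
        encoded = tokenizer(text, return_tensors="pt")
        hidden = model(**encoded).last_hidden_state             # (1, seq_len, hidden_size)
        features.append(hidden.mean(dim=1).squeeze(0).numpy())  # one vector per text

classifier = LogisticRegression(max_iter=1000).fit(features, labels)
print(classifier.predict(features))
```
In practice, fine-tuning the whole model (as discussed below) usually works better, but this shows how the pretrained representations can be reused directly.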
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = FunneModel.from_pretrained("funnel-transformer/intermediate")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/intermediate")
model = TFFunnelModel.from_pretrained("funnel-transformer/intermediatesmall")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-ca | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 580 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer large model (B8-8-8 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `large` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/large-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-ner | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer large model (B8-8-8 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
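The replaced-token-detection objective described above can be sketched schematically as follows; this is only an added illustration of the idea (toy token ids, a hand-made corruption, random stand-in logits), not the released pretraining code:
```python
import torch
import torch.nn.functional as F

# Toy token ids standing in for a tokenized sentence.
original_ids = torch.tensor([[101, 2023, 3185, 2001, 2307, 102]])

# A small generator model would corrupt some positions; here one token is swapped by hand.
corrupted_ids = original_ids.clone()
corrupted_ids[0, 3] = 2919

# Discriminator targets: 1 where the token was replaced, 0 where it is original.
labels = (corrupted_ids != original_ids).float()

# The main model plus a per-token head would output one logit per position;
# a random tensor stands in for that output here.
logits = torch.randn_like(labels)
loss = F.binary_cross_entropy_with_logits(logits, labels)
print(labels, loss.item())
```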
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = FunneModel.from_pretrained("funnel-transformer/large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/large")
model = TFFunnelModel.from_pretrained("funnel-transformer/large")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer medium model (B6-3x2-3x2 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `medium` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/medium-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/medium-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer medium model (B6-3x2-3x2 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = FunneModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = TFFunnelModel.from_pretrained("funnel-transformer/medium")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
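Because this checkpoint keeps the decoder, the returned hidden states are upsampled back to one position per input token, unlike the `medium-base` variant above whose outputs are pooled to roughly a quarter of the input length. A small added check (an illustration, not from the original card):
```python
import torch
from transformers import FunnelTokenizer, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium")
model = FunnelModel.from_pretrained("funnel-transformer/medium")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

# With the decoder, there is one hidden state per input token again.
assert output.last_hidden_state.shape[1] == encoded_input["input_ids"].shape[1]
print(output.last_hidden_state.shape)
```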
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 54 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer small model (B4-4-4 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `small` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/small-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer small model (B4-4-4 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = FunneModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
model = TFFunnelModel.from_pretrained("funnel-transformer/small")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"has_space"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19,850 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer xlarge model (B10-10-10 without decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `xlarge` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-da | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 449 | null | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer xlarge model (B10-10-10 with decoder)
Pretrained model on English language using a similar objective as [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Funnel Transformer model as inputs.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = FunneModel.from_pretrained("funnel-transformer/xlarge")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge")
model = TFFunnelModel.from_pretrained("funnel-transformer/xlarge")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 62 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-bbc-headline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc-headline
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
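For reference, these values correspond roughly to the following `Seq2SeqTrainingArguments` configuration (a sketch only; the original training script is not included in this card, and `output_dir` is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-bbc-headline",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```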
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 167 | 2.2978 | 31.8313 | 10.3824 | 29.6182 | 29.4336 | 10.3153 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 132 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-bbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 334 | 0.1500 | 24.5024 | 21.4979 | 24.0227 | 24.0303 | 19.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,862 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-bbc-headline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc-headline
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
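The Rouge1/Rouge2/RougeL numbers reported under Training results below are typically mid F-measures from the `rouge_score` backend; a minimal sketch of computing that kind of score (with made-up prediction and reference strings) looks like this:
```python
from datasets import load_metric  # available in the Datasets 1.x version listed below

rouge = load_metric("rouge")
predictions = ["uk economy grows faster than expected"]
references = ["UK economy beats growth forecasts in third quarter"]
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# report mid F-measures scaled to 0-100, as the Trainer's summarization scripts do
print({key: round(value.mid.fmeasure * 100, 4) for key, value in scores.items()})
```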
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 167 | 3.6454 | 22.4311 | 5.9878 | 20.118 | 20.482 | 18.9009 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 855 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-bbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3238
- Rouge1: 21.2266
- Rouge2: 16.0927
- Rougel: 19.6785
- Rougelsum: 19.8849
- Gen Len: 19.0
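A minimal inference sketch consistent with these numbers (the model id and the task prefix are assumptions; T5 checkpoints are conventionally prompted with `"summarize: "`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small-finetuned-bbc"  # placeholder: use the actual Hub path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "summarize: " + "Shares in the airline fell sharply after it warned on profits ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    summary_ids = model.generate(**inputs, max_length=20, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
The `max_length=20` cap matches the roughly 19-token generations reported above.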
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.4882 | 1.0 | 1001 | 0.3238 | 21.2266 | 16.0927 | 19.6785 | 19.8849 | 19.0 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-mix | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"Arabic",
"Dialect",
"Egyptian",
"Gulf",
"Levantine",
"Classical Arabic",
"MSA",
"Modern Standard Arabic",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20,880 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
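Since the card names the xsum dataset, a small end-to-end sketch (the model id is a placeholder, and XSum download details may vary by `datasets` version) could look like:
```python
from datasets import load_dataset
from transformers import pipeline

sample = load_dataset("xsum", split="validation[:1]")[0]
summarizer = pipeline("summarization", model="t5-small-finetuned-xsum")  # placeholder id

print("Reference:", sample["summary"])
print("Model    :", summarizer(sample["document"][:2000], max_length=60, min_length=10)[0]["summary_text"])
```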
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 128 | 2.9003 | 19.4784 | 2.8529 | 14.7786 | 15.0614 | 18.9825 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 52 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8575
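Since this is a masked-language-modelling cross-entropy loss, it can be read as a (pseudo-)perplexity over the masked tokens of roughly exp(6.8575) ≈ 951:
```python
import math

eval_loss = 6.8575  # evaluation loss reported above
print(f"perplexity ≈ {math.exp(eval_loss):.1f}")
```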
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0964 | 1.0 | 2346 | 7.0532 |
| 6.9055 | 2.0 | 4692 | 6.8710 |
| 6.8574 | 3.0 | 7038 | 6.8917 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5543972545286807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8273
- Matthews Correlation: 0.5544
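CoLA is a grammatical-acceptability task, so a quick sanity check with the text-classification pipeline (placeholder model id; labels default to `LABEL_0`/`LABEL_1` unless the config maps them to names) looks like:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-cola")

# In GLUE/CoLA, label 1 means "acceptable" and label 0 means "unacceptable".
print(classifier("The book was written by the author."))
print(classifier("The book was written the author by."))
```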
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5256 | 1.0 | 535 | 0.5419 | 0.4248 |
| 0.3486 | 2.0 | 1070 | 0.5187 | 0.4999 |
| 0.2406 | 3.0 | 1605 | 0.6580 | 0.5054 |
| 0.1692 | 4.0 | 2140 | 0.7455 | 0.5403 |
| 0.1343 | 5.0 | 2675 | 0.8273 | 0.5544 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 133 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1112
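For a causal language model this loss corresponds to a perplexity of about exp(6.1112) ≈ 451. A minimal generation sketch (placeholder model id) is:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-wikitext2")  # placeholder id
print(generator("The history of natural language processing",
                max_length=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```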
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5571 | 1.0 | 2249 | 6.4684 |
| 6.1921 | 2.0 | 4498 | 6.1984 |
| 6.0016 | 3.0 | 6747 | 6.1112 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CBreit00/DialoGPT_small_Rick | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-es-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-es-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - ES dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1788
- Wer: 1.0239
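A transcription sketch for this checkpoint (the model id and audio path are placeholders; input audio should be 16 kHz mono):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "wav2vec2-common_voice-es-demo"  # placeholder: use the actual Hub path
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

waveform, sample_rate = torchaudio.load("sample_es.wav")  # placeholder clip
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blank tokens.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```
Note that a WER slightly above 1.0 is unusual (insertions can push WER past 100%), so transcriptions are worth inspecting manually rather than judging from the number alone.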
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.02 | 100 | 6.6465 | 1.0 |
| No log | 0.04 | 200 | 3.0150 | 1.0 |
| No log | 0.05 | 300 | 2.8622 | 1.0003 |
| No log | 0.07 | 400 | 0.9506 | 0.9771 |
| 5.1598 | 0.09 | 500 | 0.4883 | 1.0009 |
| 5.1598 | 0.11 | 600 | 0.3893 | 1.0203 |
| 5.1598 | 0.13 | 700 | 0.3417 | 1.0283 |
| 5.1598 | 0.14 | 800 | 0.3352 | 1.0335 |
| 5.1598 | 0.16 | 900 | 0.2987 | 1.0168 |
| 0.3671 | 0.18 | 1000 | 0.2921 | 1.0159 |
| 0.3671 | 0.2 | 1100 | 0.2770 | 1.0096 |
| 0.3671 | 0.22 | 1200 | 0.2790 | 1.0398 |
| 0.3671 | 0.24 | 1300 | 0.2659 | 1.0190 |
| 0.3671 | 0.25 | 1400 | 0.2657 | 1.0528 |
| 0.289 | 0.27 | 1500 | 0.2556 | 1.0301 |
| 0.289 | 0.29 | 1600 | 0.2514 | 1.0193 |
| 0.289 | 0.31 | 1700 | 0.2708 | 1.0699 |
| 0.289 | 0.33 | 1800 | 0.2455 | 1.0723 |
| 0.289 | 0.34 | 1900 | 0.2456 | 1.0100 |
| 0.271 | 0.36 | 2000 | 0.2338 | 1.0533 |
| 0.271 | 0.38 | 2100 | 0.2479 | 1.0128 |
| 0.271 | 0.4 | 2200 | 0.2483 | 1.0386 |
| 0.271 | 0.42 | 2300 | 0.2436 | 1.0528 |
| 0.271 | 0.43 | 2400 | 0.2382 | 1.0476 |
| 0.2634 | 0.45 | 2500 | 0.2329 | 1.0680 |
| 0.2634 | 0.47 | 2600 | 0.2433 | 1.0581 |
| 0.2634 | 0.49 | 2700 | 0.2354 | 1.0641 |
| 0.2634 | 0.51 | 2800 | 0.2318 | 1.0504 |
| 0.2634 | 0.52 | 2900 | 0.2325 | 1.0500 |
| 0.2522 | 0.54 | 3000 | 0.2344 | 1.0380 |
| 0.2522 | 0.56 | 3100 | 0.2244 | 1.0663 |
| 0.2522 | 0.58 | 3200 | 0.2340 | 1.0647 |
| 0.2522 | 0.6 | 3300 | 0.2288 | 1.0538 |
| 0.2522 | 0.61 | 3400 | 0.2212 | 1.0614 |
| 0.2468 | 0.63 | 3500 | 0.2487 | 1.0557 |
| 0.2468 | 0.65 | 3600 | 0.2330 | 1.0510 |
| 0.2468 | 0.67 | 3700 | 0.2308 | 1.0506 |
| 0.2468 | 0.69 | 3800 | 0.2320 | 1.0451 |
| 0.2468 | 0.71 | 3900 | 0.2261 | 1.0701 |
| 0.2505 | 0.72 | 4000 | 0.2281 | 1.0713 |
| 0.2505 | 0.74 | 4100 | 0.2277 | 1.0741 |
| 0.2505 | 0.76 | 4200 | 0.2253 | 1.0814 |
| 0.2505 | 0.78 | 4300 | 0.2215 | 1.0437 |
| 0.2505 | 0.8 | 4400 | 0.2220 | 1.0557 |
| 0.2434 | 0.81 | 4500 | 0.2184 | 1.0533 |
| 0.2434 | 0.83 | 4600 | 0.2222 | 1.0819 |
| 0.2434 | 0.85 | 4700 | 0.2162 | 1.0238 |
| 0.2434 | 0.87 | 4800 | 0.2132 | 1.0457 |
| 0.2434 | 0.89 | 4900 | 0.2068 | 1.0611 |
| 0.2347 | 0.9 | 5000 | 0.2166 | 1.0332 |
| 0.2347 | 0.92 | 5100 | 0.2087 | 1.0433 |
| 0.2347 | 0.94 | 5200 | 0.2100 | 1.0292 |
| 0.2347 | 0.96 | 5300 | 0.2067 | 1.0734 |
| 0.2347 | 0.98 | 5400 | 0.2148 | 1.0279 |
| 0.2333 | 0.99 | 5500 | 0.2125 | 1.0277 |
| 0.2333 | 1.01 | 5600 | 0.2054 | 1.0453 |
| 0.2333 | 1.03 | 5700 | 0.2091 | 1.0557 |
| 0.2333 | 1.05 | 5800 | 0.2086 | 1.0239 |
| 0.2333 | 1.07 | 5900 | 0.2051 | 1.0645 |
| 0.2087 | 1.09 | 6000 | 0.2103 | 1.0240 |
| 0.2087 | 1.1 | 6100 | 0.2145 | 1.0197 |
| 0.2087 | 1.12 | 6200 | 0.2136 | 1.0248 |
| 0.2087 | 1.14 | 6300 | 0.2045 | 1.0443 |
| 0.2087 | 1.16 | 6400 | 0.2089 | 1.0397 |
| 0.2013 | 1.18 | 6500 | 0.2012 | 1.0654 |
| 0.2013 | 1.19 | 6600 | 0.2054 | 1.0414 |
| 0.2013 | 1.21 | 6700 | 0.2081 | 1.0632 |
| 0.2013 | 1.23 | 6800 | 0.2104 | 1.0190 |
| 0.2013 | 1.25 | 6900 | 0.2045 | 1.0813 |
| 0.2092 | 1.27 | 7000 | 0.2096 | 1.0751 |
| 0.2092 | 1.28 | 7100 | 0.2103 | 1.0328 |
| 0.2092 | 1.3 | 7200 | 0.2044 | 1.0011 |
| 0.2092 | 1.32 | 7300 | 0.2089 | 1.0260 |
| 0.2092 | 1.34 | 7400 | 0.2063 | 1.0551 |
| 0.2076 | 1.36 | 7500 | 0.2029 | 1.0075 |
| 0.2076 | 1.37 | 7600 | 0.2040 | 1.0528 |
| 0.2076 | 1.39 | 7700 | 0.2075 | 1.0398 |
| 0.2076 | 1.41 | 7800 | 0.2023 | 1.0231 |
| 0.2076 | 1.43 | 7900 | 0.2049 | 1.0318 |
| 0.2028 | 1.45 | 8000 | 0.2072 | 1.0763 |
| 0.2028 | 1.47 | 8100 | 0.2075 | 1.0762 |
| 0.2028 | 1.48 | 8200 | 0.2052 | 1.0838 |
| 0.2028 | 1.5 | 8300 | 0.2053 | 1.0407 |
| 0.2028 | 1.52 | 8400 | 0.2066 | 1.0266 |
| 0.2025 | 1.54 | 8500 | 0.2037 | 1.0628 |
| 0.2025 | 1.56 | 8600 | 0.2010 | 1.0351 |
| 0.2025 | 1.57 | 8700 | 0.1961 | 1.0812 |
| 0.2025 | 1.59 | 8800 | 0.1963 | 1.0868 |
| 0.2025 | 1.61 | 8900 | 0.2022 | 1.0710 |
| 0.1997 | 1.63 | 9000 | 0.2051 | 1.0764 |
| 0.1997 | 1.65 | 9100 | 0.1987 | 1.0581 |
| 0.1997 | 1.66 | 9200 | 0.2051 | 1.0611 |
| 0.1997 | 1.68 | 9300 | 0.1999 | 1.0808 |
| 0.1997 | 1.7 | 9400 | 0.1972 | 1.0703 |
| 0.1983 | 1.72 | 9500 | 0.1961 | 1.0584 |
| 0.1983 | 1.74 | 9600 | 0.2031 | 1.0938 |
| 0.1983 | 1.75 | 9700 | 0.2019 | 1.0891 |
| 0.1983 | 1.77 | 9800 | 0.2006 | 1.0542 |
| 0.1983 | 1.79 | 9900 | 0.1925 | 1.0627 |
| 0.1961 | 1.81 | 10000 | 0.1976 | 1.0751 |
| 0.1961 | 1.83 | 10100 | 0.2051 | 1.0611 |
| 0.1961 | 1.85 | 10200 | 0.2037 | 1.0656 |
| 0.1961 | 1.86 | 10300 | 0.2025 | 1.0291 |
| 0.1961 | 1.88 | 10400 | 0.1977 | 1.0525 |
| 0.2025 | 1.9 | 10500 | 0.2030 | 1.0670 |
| 0.2025 | 1.92 | 10600 | 0.1980 | 1.0765 |
| 0.2025 | 1.94 | 10700 | 0.1975 | 1.0254 |
| 0.2025 | 1.95 | 10800 | 0.1986 | 1.0636 |
| 0.2025 | 1.97 | 10900 | 0.1956 | 1.0352 |
| 0.2025 | 1.99 | 11000 | 0.1954 | 1.0265 |
| 0.2025 | 2.01 | 11100 | 0.1957 | 1.0752 |
| 0.2025 | 2.03 | 11200 | 0.1943 | 1.0784 |
| 0.2025 | 2.04 | 11300 | 0.1898 | 1.0341 |
| 0.2025 | 2.06 | 11400 | 0.1921 | 1.0301 |
| 0.1805 | 2.08 | 11500 | 0.1910 | 1.0230 |
| 0.1805 | 2.1 | 11600 | 0.1961 | 1.0203 |
| 0.1805 | 2.12 | 11700 | 0.1973 | 1.0776 |
| 0.1805 | 2.13 | 11800 | 0.1876 | 1.0788 |
| 0.1805 | 2.15 | 11900 | 0.1934 | 1.0251 |
| 0.177 | 2.17 | 12000 | 0.1967 | 1.0340 |
| 0.177 | 2.19 | 12100 | 0.1932 | 1.0131 |
| 0.177 | 2.21 | 12200 | 0.1926 | 1.0078 |
| 0.177 | 2.23 | 12300 | 0.1947 | 0.9991 |
| 0.177 | 2.24 | 12400 | 0.1914 | 1.0213 |
| 0.1782 | 2.26 | 12500 | 0.1962 | 0.9882 |
| 0.1782 | 2.28 | 12600 | 0.1960 | 1.0562 |
| 0.1782 | 2.3 | 12700 | 0.2006 | 1.0401 |
| 0.1782 | 2.32 | 12800 | 0.1950 | 1.0688 |
| 0.1782 | 2.33 | 12900 | 0.1920 | 1.0435 |
| 0.1796 | 2.35 | 13000 | 0.1926 | 1.0667 |
| 0.1796 | 2.37 | 13100 | 0.1949 | 1.0859 |
| 0.1796 | 2.39 | 13200 | 0.1932 | 1.0670 |
| 0.1796 | 2.41 | 13300 | 0.1882 | 1.0663 |
| 0.1796 | 2.42 | 13400 | 0.1877 | 1.0760 |
| 0.1775 | 2.44 | 13500 | 0.1893 | 1.0859 |
| 0.1775 | 2.46 | 13600 | 0.1936 | 1.0702 |
| 0.1775 | 2.48 | 13700 | 0.1871 | 1.0414 |
| 0.1775 | 2.5 | 13800 | 0.1917 | 1.0430 |
| 0.1775 | 2.51 | 13900 | 0.1922 | 1.0422 |
| 0.1778 | 2.53 | 14000 | 0.1875 | 1.0585 |
| 0.1778 | 2.55 | 14100 | 0.1876 | 1.0603 |
| 0.1778 | 2.57 | 14200 | 0.1888 | 1.0628 |
| 0.1778 | 2.59 | 14300 | 0.1948 | 1.0782 |
| 0.1778 | 2.6 | 14400 | 0.1942 | 1.0695 |
| 0.1784 | 2.62 | 14500 | 0.1842 | 1.0863 |
| 0.1784 | 2.64 | 14600 | 0.1850 | 1.0543 |
| 0.1784 | 2.66 | 14700 | 0.1824 | 1.0683 |
| 0.1784 | 2.68 | 14800 | 0.1888 | 1.0693 |
| 0.1784 | 2.7 | 14900 | 0.1871 | 1.0175 |
| 0.1753 | 2.71 | 15000 | 0.1889 | 1.0549 |
| 0.1753 | 2.73 | 15100 | 0.1865 | 1.0544 |
| 0.1753 | 2.75 | 15200 | 0.1918 | 1.0726 |
| 0.1753 | 2.77 | 15300 | 0.1964 | 1.0915 |
| 0.1753 | 2.79 | 15400 | 0.1900 | 1.0610 |
| 0.1768 | 2.8 | 15500 | 0.1894 | 1.0763 |
| 0.1768 | 2.82 | 15600 | 0.1882 | 1.0548 |
| 0.1768 | 2.84 | 15700 | 0.1861 | 1.0902 |
| 0.1768 | 2.86 | 15800 | 0.1860 | 1.0551 |
| 0.1768 | 2.88 | 15900 | 0.1879 | 1.0581 |
| 0.1761 | 2.89 | 16000 | 0.1899 | 1.0544 |
| 0.1761 | 2.91 | 16100 | 0.1860 | 1.0530 |
| 0.1761 | 2.93 | 16200 | 0.1894 | 1.0596 |
| 0.1761 | 2.95 | 16300 | 0.1835 | 1.0394 |
| 0.1761 | 2.97 | 16400 | 0.1852 | 1.0445 |
| 0.1754 | 2.98 | 16500 | 0.1847 | 1.0390 |
| 0.1754 | 3.0 | 16600 | 0.1828 | 1.0440 |
| 0.1754 | 3.02 | 16700 | 0.1869 | 1.0560 |
| 0.1754 | 3.04 | 16800 | 0.1882 | 1.0573 |
| 0.1754 | 3.06 | 16900 | 0.1912 | 1.0600 |
| 0.1592 | 3.08 | 17000 | 0.1921 | 1.0529 |
| 0.1592 | 3.09 | 17100 | 0.1881 | 1.0175 |
| 0.1592 | 3.11 | 17200 | 0.1891 | 1.0654 |
| 0.1592 | 3.13 | 17300 | 0.1889 | 1.0687 |
| 0.1592 | 3.15 | 17400 | 0.1916 | 1.0642 |
| 0.1556 | 3.17 | 17500 | 0.1850 | 1.0295 |
| 0.1556 | 3.18 | 17600 | 0.1875 | 1.0273 |
| 0.1556 | 3.2 | 17700 | 0.1894 | 1.0051 |
| 0.1556 | 3.22 | 17800 | 0.1870 | 1.0462 |
| 0.1556 | 3.24 | 17900 | 0.1831 | 1.0308 |
| 0.1557 | 3.26 | 18000 | 0.1878 | 1.0603 |
| 0.1557 | 3.27 | 18100 | 0.1850 | 1.0566 |
| 0.1557 | 3.29 | 18200 | 0.1843 | 1.0629 |
| 0.1557 | 3.31 | 18300 | 0.1886 | 1.0378 |
| 0.1557 | 3.33 | 18400 | 0.1892 | 1.0381 |
| 0.159 | 3.35 | 18500 | 0.1942 | 1.0519 |
| 0.159 | 3.36 | 18600 | 0.1829 | 1.0622 |
| 0.159 | 3.38 | 18700 | 0.1894 | 1.0557 |
| 0.159 | 3.4 | 18800 | 0.1895 | 1.0627 |
| 0.159 | 3.42 | 18900 | 0.1863 | 1.0362 |
| 0.1582 | 3.44 | 19000 | 0.1888 | 1.0491 |
| 0.1582 | 3.46 | 19100 | 0.1854 | 1.0483 |
| 0.1582 | 3.47 | 19200 | 0.1797 | 0.9787 |
| 0.1582 | 3.49 | 19300 | 0.1785 | 1.0086 |
| 0.1582 | 3.51 | 19400 | 0.1797 | 0.9915 |
| 0.1507 | 3.53 | 19500 | 0.1873 | 1.0266 |
| 0.1507 | 3.55 | 19600 | 0.1838 | 1.0299 |
| 0.1507 | 3.56 | 19700 | 0.1817 | 1.0355 |
| 0.1507 | 3.58 | 19800 | 0.1819 | 1.0271 |
| 0.1507 | 3.6 | 19900 | 0.1883 | 1.0248 |
| 0.1601 | 3.62 | 20000 | 0.1823 | 1.0406 |
| 0.1601 | 3.64 | 20100 | 0.1801 | 1.0261 |
| 0.1601 | 3.65 | 20200 | 0.1783 | 1.0329 |
| 0.1601 | 3.67 | 20300 | 0.1857 | 1.0162 |
| 0.1601 | 3.69 | 20400 | 0.1814 | 1.0212 |
| 0.1552 | 3.71 | 20500 | 0.1837 | 1.0232 |
| 0.1552 | 3.73 | 20600 | 0.1843 | 1.0314 |
| 0.1552 | 3.74 | 20700 | 0.1842 | 1.0258 |
| 0.1552 | 3.76 | 20800 | 0.1821 | 1.0479 |
| 0.1552 | 3.78 | 20900 | 0.1864 | 1.0459 |
| 0.1576 | 3.8 | 21000 | 0.1831 | 1.0364 |
| 0.1576 | 3.82 | 21100 | 0.1852 | 1.0271 |
| 0.1576 | 3.83 | 21200 | 0.1865 | 1.0204 |
| 0.1576 | 3.85 | 21300 | 0.1794 | 1.0324 |
| 0.1576 | 3.87 | 21400 | 0.1826 | 1.0315 |
| 0.1585 | 3.89 | 21500 | 0.1824 | 1.0327 |
| 0.1585 | 3.91 | 21600 | 0.1838 | 1.0208 |
| 0.1585 | 3.93 | 21700 | 0.1850 | 1.0199 |
| 0.1585 | 3.94 | 21800 | 0.1841 | 1.0050 |
| 0.1585 | 3.96 | 21900 | 0.1783 | 1.0003 |
| 0.1572 | 3.98 | 22000 | 0.1787 | 1.0115 |
| 0.1572 | 4.0 | 22100 | 0.1810 | 1.0235 |
| 0.1572 | 4.02 | 22200 | 0.1763 | 1.0191 |
| 0.1572 | 4.03 | 22300 | 0.1764 | 1.0332 |
| 0.1572 | 4.05 | 22400 | 0.1794 | 1.0429 |
| 0.1406 | 4.07 | 22500 | 0.1905 | 1.0288 |
| 0.1406 | 4.09 | 22600 | 0.1776 | 1.0244 |
| 0.1406 | 4.11 | 22700 | 0.1782 | 1.0451 |
| 0.1406 | 4.12 | 22800 | 0.1771 | 1.0387 |
| 0.1406 | 4.14 | 22900 | 0.1788 | 1.0435 |
| 0.14 | 4.16 | 23000 | 0.1792 | 1.0421 |
| 0.14 | 4.18 | 23100 | 0.1841 | 1.0241 |
| 0.14 | 4.2 | 23200 | 0.1769 | 1.0546 |
| 0.14 | 4.21 | 23300 | 0.1815 | 1.0602 |
| 0.14 | 4.23 | 23400 | 0.1784 | 1.0369 |
| 0.1394 | 4.25 | 23500 | 0.1809 | 1.0406 |
| 0.1394 | 4.27 | 23600 | 0.1744 | 1.0133 |
| 0.1394 | 4.29 | 23700 | 0.1771 | 1.0214 |
| 0.1394 | 4.31 | 23800 | 0.1765 | 1.0064 |
| 0.1394 | 4.32 | 23900 | 0.1793 | 1.0200 |
| 0.14 | 4.34 | 24000 | 0.1776 | 1.0352 |
| 0.14 | 4.36 | 24100 | 0.1775 | 1.0294 |
| 0.14 | 4.38 | 24200 | 0.1763 | 1.0213 |
| 0.14 | 4.4 | 24300 | 0.1697 | 1.0302 |
| 0.14 | 4.41 | 24400 | 0.1771 | 1.0259 |
| 0.1408 | 4.43 | 24500 | 0.1747 | 1.0409 |
| 0.1408 | 4.45 | 24600 | 0.1769 | 1.0278 |
| 0.1408 | 4.47 | 24700 | 0.1767 | 1.0190 |
| 0.1408 | 4.49 | 24800 | 0.1745 | 1.0281 |
| 0.1408 | 4.5 | 24900 | 0.1738 | 1.0356 |
| 0.1391 | 4.52 | 25000 | 0.1781 | 1.0429 |
| 0.1391 | 4.54 | 25100 | 0.1784 | 1.0076 |
| 0.1391 | 4.56 | 25200 | 0.1771 | 1.0157 |
| 0.1391 | 4.58 | 25300 | 0.1758 | 1.0337 |
| 0.1391 | 4.59 | 25400 | 0.1758 | 1.0466 |
| 0.1398 | 4.61 | 25500 | 0.1724 | 1.0403 |
| 0.1398 | 4.63 | 25600 | 0.1765 | 1.0481 |
| 0.1398 | 4.65 | 25700 | 0.1757 | 1.0320 |
| 0.1398 | 4.67 | 25800 | 0.1814 | 1.0479 |
| 0.1398 | 4.69 | 25900 | 0.1713 | 1.0251 |
| 0.1427 | 4.7 | 26000 | 0.1735 | 1.0340 |
| 0.1427 | 4.72 | 26100 | 0.1765 | 1.0358 |
| 0.1427 | 4.74 | 26200 | 0.1731 | 1.0220 |
| 0.1427 | 4.76 | 26300 | 0.1769 | 1.0261 |
| 0.1427 | 4.78 | 26400 | 0.1747 | 1.0139 |
| 0.1424 | 4.79 | 26500 | 0.1791 | 1.0406 |
| 0.1424 | 4.81 | 26600 | 0.1735 | 1.0497 |
| 0.1424 | 4.83 | 26700 | 0.1710 | 1.0433 |
| 0.1424 | 4.85 | 26800 | 0.1771 | 1.0002 |
| 0.1424 | 4.87 | 26900 | 0.1748 | 1.0046 |
| 0.1419 | 4.88 | 27000 | 0.1794 | 1.0332 |
| 0.1419 | 4.9 | 27100 | 0.1772 | 1.0558 |
| 0.1419 | 4.92 | 27200 | 0.1757 | 1.0477 |
| 0.1419 | 4.94 | 27300 | 0.1735 | 1.0324 |
| 0.1419 | 4.96 | 27400 | 0.1758 | 1.0260 |
| 0.1433 | 4.97 | 27500 | 0.1767 | 1.0422 |
| 0.1433 | 4.99 | 27600 | 0.1695 | 1.0386 |
| 0.1433 | 5.01 | 27700 | 0.1763 | 1.0571 |
| 0.1433 | 5.03 | 27800 | 0.1743 | 1.0367 |
| 0.1433 | 5.05 | 27900 | 0.1804 | 1.0255 |
| 0.1306 | 5.07 | 28000 | 0.1803 | 1.0377 |
| 0.1306 | 5.08 | 28100 | 0.1750 | 1.0552 |
| 0.1306 | 5.1 | 28200 | 0.1743 | 1.0512 |
| 0.1306 | 5.12 | 28300 | 0.1777 | 1.0584 |
| 0.1306 | 5.14 | 28400 | 0.1726 | 1.0374 |
| 0.123 | 5.16 | 28500 | 0.1776 | 1.0439 |
| 0.123 | 5.17 | 28600 | 0.1759 | 1.0682 |
| 0.123 | 5.19 | 28700 | 0.1724 | 1.0511 |
| 0.123 | 5.21 | 28800 | 0.1677 | 1.0560 |
| 0.123 | 5.23 | 28900 | 0.1699 | 1.0421 |
| 0.1217 | 5.25 | 29000 | 0.1803 | 1.0370 |
| 0.1217 | 5.26 | 29100 | 0.1770 | 1.0474 |
| 0.1217 | 5.28 | 29200 | 0.1733 | 1.0332 |
| 0.1217 | 5.3 | 29300 | 0.1746 | 1.0158 |
| 0.1217 | 5.32 | 29400 | 0.1763 | 1.0341 |
| 0.1246 | 5.34 | 29500 | 0.1775 | 1.0348 |
| 0.1246 | 5.35 | 29600 | 0.1730 | 1.0492 |
| 0.1246 | 5.37 | 29700 | 0.1730 | 1.0503 |
| 0.1246 | 5.39 | 29800 | 0.1727 | 1.0437 |
| 0.1246 | 5.41 | 29900 | 0.1744 | 1.0539 |
| 0.127 | 5.43 | 30000 | 0.1748 | 1.0463 |
| 0.127 | 5.44 | 30100 | 0.1746 | 1.0555 |
| 0.127 | 5.46 | 30200 | 0.1810 | 1.0558 |
| 0.127 | 5.48 | 30300 | 0.1773 | 1.0407 |
| 0.127 | 5.5 | 30400 | 0.1722 | 1.0489 |
| 0.1276 | 5.52 | 30500 | 0.1720 | 1.0520 |
| 0.1276 | 5.54 | 30600 | 0.1777 | 1.0347 |
| 0.1276 | 5.55 | 30700 | 0.1685 | 1.0347 |
| 0.1276 | 5.57 | 30800 | 0.1659 | 1.0338 |
| 0.1276 | 5.59 | 30900 | 0.1756 | 1.0228 |
| 0.1246 | 5.61 | 31000 | 0.1717 | 1.0409 |
| 0.1246 | 5.63 | 31100 | 0.1764 | 1.0202 |
| 0.1246 | 5.64 | 31200 | 0.1693 | 1.0314 |
| 0.1246 | 5.66 | 31300 | 0.1731 | 1.0319 |
| 0.1246 | 5.68 | 31400 | 0.1688 | 1.0380 |
| 0.1271 | 5.7 | 31500 | 0.1671 | 1.0350 |
| 0.1271 | 5.72 | 31600 | 0.1676 | 1.0430 |
| 0.1271 | 5.73 | 31700 | 0.1656 | 1.0441 |
| 0.1271 | 5.75 | 31800 | 0.1664 | 1.0403 |
| 0.1271 | 5.77 | 31900 | 0.1691 | 1.0152 |
| 0.1259 | 5.79 | 32000 | 0.1702 | 1.0018 |
| 0.1259 | 5.81 | 32100 | 0.1664 | 1.0246 |
| 0.1259 | 5.82 | 32200 | 0.1737 | 1.0340 |
| 0.1259 | 5.84 | 32300 | 0.1742 | 1.0449 |
| 0.1259 | 5.86 | 32400 | 0.1707 | 1.0279 |
| 0.1273 | 5.88 | 32500 | 0.1697 | 1.0471 |
| 0.1273 | 5.9 | 32600 | 0.1668 | 1.0322 |
| 0.1273 | 5.92 | 32700 | 0.1706 | 1.0378 |
| 0.1273 | 5.93 | 32800 | 0.1704 | 1.0350 |
| 0.1273 | 5.95 | 32900 | 0.1725 | 1.0244 |
| 0.123 | 5.97 | 33000 | 0.1678 | 1.0447 |
| 0.123 | 5.99 | 33100 | 0.1681 | 1.0438 |
| 0.123 | 6.01 | 33200 | 0.1689 | 1.0297 |
| 0.123 | 6.02 | 33300 | 0.1690 | 1.0333 |
| 0.123 | 6.04 | 33400 | 0.1734 | 1.0296 |
| 0.1163 | 6.06 | 33500 | 0.1748 | 1.0307 |
| 0.1163 | 6.08 | 33600 | 0.1715 | 1.0123 |
| 0.1163 | 6.1 | 33700 | 0.1668 | 1.0117 |
| 0.1163 | 6.11 | 33800 | 0.1690 | 1.0230 |
| 0.1163 | 6.13 | 33900 | 0.1693 | 1.0166 |
| 0.1101 | 6.15 | 34000 | 0.1728 | 1.0162 |
| 0.1101 | 6.17 | 34100 | 0.1683 | 1.0107 |
| 0.1101 | 6.19 | 34200 | 0.1703 | 0.9814 |
| 0.1101 | 6.2 | 34300 | 0.1692 | 1.0007 |
| 0.1101 | 6.22 | 34400 | 0.1690 | 1.0000 |
| 0.1118 | 6.24 | 34500 | 0.1734 | 0.9972 |
| 0.1118 | 6.26 | 34600 | 0.1739 | 1.0096 |
| 0.1118 | 6.28 | 34700 | 0.1749 | 1.0047 |
| 0.1118 | 6.3 | 34800 | 0.1709 | 1.0111 |
| 0.1118 | 6.31 | 34900 | 0.1717 | 1.0179 |
| 0.1153 | 6.33 | 35000 | 0.1690 | 1.0155 |
| 0.1153 | 6.35 | 35100 | 0.1710 | 1.0144 |
| 0.1153 | 6.37 | 35200 | 0.1719 | 1.0030 |
| 0.1153 | 6.39 | 35300 | 0.1690 | 1.0272 |
| 0.1153 | 6.4 | 35400 | 0.1673 | 1.0103 |
| 0.1106 | 6.42 | 35500 | 0.1710 | 1.0222 |
| 0.1106 | 6.44 | 35600 | 0.1747 | 1.0173 |
| 0.1106 | 6.46 | 35700 | 0.1721 | 0.9933 |
| 0.1106 | 6.48 | 35800 | 0.1670 | 1.0184 |
| 0.1106 | 6.49 | 35900 | 0.1714 | 1.0122 |
| 0.1116 | 6.51 | 36000 | 0.1717 | 1.0035 |
| 0.1116 | 6.53 | 36100 | 0.1685 | 1.0099 |
| 0.1116 | 6.55 | 36200 | 0.1687 | 1.0288 |
| 0.1116 | 6.57 | 36300 | 0.1664 | 1.0314 |
| 0.1116 | 6.58 | 36400 | 0.1665 | 1.0264 |
| 0.1128 | 6.6 | 36500 | 0.1681 | 1.0420 |
| 0.1128 | 6.62 | 36600 | 0.1682 | 1.0409 |
| 0.1128 | 6.64 | 36700 | 0.1717 | 1.0271 |
| 0.1128 | 6.66 | 36800 | 0.1717 | 1.0166 |
| 0.1128 | 6.68 | 36900 | 0.1755 | 1.0175 |
| 0.1134 | 6.69 | 37000 | 0.1623 | 1.0185 |
| 0.1134 | 6.71 | 37100 | 0.1674 | 1.0302 |
| 0.1134 | 6.73 | 37200 | 0.1633 | 1.0325 |
| 0.1134 | 6.75 | 37300 | 0.1628 | 1.0228 |
| 0.1134 | 6.77 | 37400 | 0.1636 | 1.0243 |
| 0.1102 | 6.78 | 37500 | 0.1667 | 1.0282 |
| 0.1102 | 6.8 | 37600 | 0.1623 | 1.0212 |
| 0.1102 | 6.82 | 37700 | 0.1639 | 1.0140 |
| 0.1102 | 6.84 | 37800 | 0.1587 | 1.0258 |
| 0.1102 | 6.86 | 37900 | 0.1610 | 1.0087 |
| 0.1113 | 6.87 | 38000 | 0.1647 | 1.0199 |
| 0.1113 | 6.89 | 38100 | 0.1609 | 1.0054 |
| 0.1113 | 6.91 | 38200 | 0.1602 | 1.0145 |
| 0.1113 | 6.93 | 38300 | 0.1602 | 1.0144 |
| 0.1113 | 6.95 | 38400 | 0.1602 | 1.0375 |
| 0.1071 | 6.96 | 38500 | 0.1592 | 1.0259 |
| 0.1071 | 6.98 | 38600 | 0.1612 | 1.0236 |
| 0.1071 | 7.0 | 38700 | 0.1621 | 1.0277 |
| 0.1071 | 7.02 | 38800 | 0.1669 | 1.0367 |
| 0.1071 | 7.04 | 38900 | 0.1742 | 1.0484 |
| 0.1062 | 7.05 | 39000 | 0.1752 | 1.0302 |
| 0.1062 | 7.07 | 39100 | 0.1676 | 1.0244 |
| 0.1062 | 7.09 | 39200 | 0.1723 | 1.0300 |
| 0.1062 | 7.11 | 39300 | 0.1727 | 1.0294 |
| 0.1062 | 7.13 | 39400 | 0.1711 | 1.0255 |
| 0.1021 | 7.15 | 39500 | 0.1699 | 1.0471 |
| 0.1021 | 7.16 | 39600 | 0.1682 | 1.0426 |
| 0.1021 | 7.18 | 39700 | 0.1713 | 1.0233 |
| 0.1021 | 7.2 | 39800 | 0.1682 | 1.0259 |
| 0.1021 | 7.22 | 39900 | 0.1710 | 1.0162 |
| 0.103 | 7.24 | 40000 | 0.1725 | 1.0283 |
| 0.103 | 7.25 | 40100 | 0.1729 | 1.0264 |
| 0.103 | 7.27 | 40200 | 0.1665 | 1.0451 |
| 0.103 | 7.29 | 40300 | 0.1671 | 1.0386 |
| 0.103 | 7.31 | 40400 | 0.1671 | 1.0316 |
| 0.0981 | 7.33 | 40500 | 0.1708 | 1.0257 |
| 0.0981 | 7.34 | 40600 | 0.1642 | 1.0152 |
| 0.0981 | 7.36 | 40700 | 0.1707 | 1.0110 |
| 0.0981 | 7.38 | 40800 | 0.1675 | 1.0186 |
| 0.0981 | 7.4 | 40900 | 0.1702 | 1.0123 |
| 0.1005 | 7.42 | 41000 | 0.1699 | 1.0159 |
| 0.1005 | 7.43 | 41100 | 0.1703 | 1.0219 |
| 0.1005 | 7.45 | 41200 | 0.1707 | 1.0194 |
| 0.1005 | 7.47 | 41300 | 0.1644 | 1.0016 |
| 0.1005 | 7.49 | 41400 | 0.1716 | 0.9941 |
| 0.1021 | 7.51 | 41500 | 0.1670 | 1.0159 |
| 0.1021 | 7.53 | 41600 | 0.1667 | 1.0033 |
| 0.1021 | 7.54 | 41700 | 0.1667 | 1.0176 |
| 0.1021 | 7.56 | 41800 | 0.1679 | 1.0194 |
| 0.1021 | 7.58 | 41900 | 0.1632 | 1.0418 |
| 0.0963 | 7.6 | 42000 | 0.1712 | 1.0152 |
| 0.0963 | 7.62 | 42100 | 0.1632 | 1.0364 |
| 0.0963 | 7.63 | 42200 | 0.1702 | 1.0229 |
| 0.0963 | 7.65 | 42300 | 0.1655 | 1.0179 |
| 0.0963 | 7.67 | 42400 | 0.1698 | 1.0329 |
| 0.1014 | 7.69 | 42500 | 0.1691 | 1.0398 |
| 0.1014 | 7.71 | 42600 | 0.1638 | 1.0487 |
| 0.1014 | 7.72 | 42700 | 0.1617 | 1.0210 |
| 0.1014 | 7.74 | 42800 | 0.1648 | 1.0124 |
| 0.1014 | 7.76 | 42900 | 0.1608 | 1.0202 |
| 0.1008 | 7.78 | 43000 | 0.1611 | 1.0353 |
| 0.1008 | 7.8 | 43100 | 0.1633 | 1.0319 |
| 0.1008 | 7.81 | 43200 | 0.1640 | 1.0032 |
| 0.1008 | 7.83 | 43300 | 0.1589 | 0.9985 |
| 0.1008 | 7.85 | 43400 | 0.1630 | 0.9975 |
| 0.0988 | 7.87 | 43500 | 0.1604 | 1.0053 |
| 0.0988 | 7.89 | 43600 | 0.1687 | 1.0063 |
| 0.0988 | 7.91 | 43700 | 0.1619 | 1.0096 |
| 0.0988 | 7.92 | 43800 | 0.1565 | 0.9901 |
| 0.0988 | 7.94 | 43900 | 0.1619 | 0.9742 |
| 0.102 | 7.96 | 44000 | 0.1598 | 0.9593 |
| 0.102 | 7.98 | 44100 | 0.1635 | 0.9718 |
| 0.102 | 8.0 | 44200 | 0.1624 | 0.9903 |
| 0.102 | 8.01 | 44300 | 0.1605 | 0.9882 |
| 0.102 | 8.03 | 44400 | 0.1657 | 1.0128 |
| 0.0961 | 8.05 | 44500 | 0.1651 | 1.0155 |
| 0.0961 | 8.07 | 44600 | 0.1680 | 1.0194 |
| 0.0961 | 8.09 | 44700 | 0.1694 | 1.0112 |
| 0.0961 | 8.1 | 44800 | 0.1665 | 1.0073 |
| 0.0961 | 8.12 | 44900 | 0.1612 | 1.0200 |
| 0.0894 | 8.14 | 45000 | 0.1652 | 1.0337 |
| 0.0894 | 8.16 | 45100 | 0.1626 | 1.0086 |
| 0.0894 | 8.18 | 45200 | 0.1639 | 1.0083 |
| 0.0894 | 8.19 | 45300 | 0.1634 | 1.0223 |
| 0.0894 | 8.21 | 45400 | 0.1631 | 1.0339 |
| 0.0887 | 8.23 | 45500 | 0.1640 | 1.0311 |
| 0.0887 | 8.25 | 45600 | 0.1661 | 1.0264 |
| 0.0887 | 8.27 | 45700 | 0.1650 | 1.0315 |
| 0.0887 | 8.29 | 45800 | 0.1624 | 1.0390 |
| 0.0887 | 8.3 | 45900 | 0.1624 | 1.0350 |
| 0.0884 | 8.32 | 46000 | 0.1615 | 1.0318 |
| 0.0884 | 8.34 | 46100 | 0.1628 | 1.0410 |
| 0.0884 | 8.36 | 46200 | 0.1627 | 1.0429 |
| 0.0884 | 8.38 | 46300 | 0.1644 | 1.0320 |
| 0.0884 | 8.39 | 46400 | 0.1633 | 1.0177 |
| 0.0893 | 8.41 | 46500 | 0.1654 | 1.0189 |
| 0.0893 | 8.43 | 46600 | 0.1598 | 1.0154 |
| 0.0893 | 8.45 | 46700 | 0.1618 | 1.0250 |
| 0.0893 | 8.47 | 46800 | 0.1639 | 1.0402 |
| 0.0893 | 8.48 | 46900 | 0.1616 | 1.0336 |
| 0.0869 | 8.5 | 47000 | 0.1613 | 1.0296 |
| 0.0869 | 8.52 | 47100 | 0.1648 | 1.0568 |
| 0.0869 | 8.54 | 47200 | 0.1625 | 1.0256 |
| 0.0869 | 8.56 | 47300 | 0.1609 | 1.0390 |
| 0.0869 | 8.57 | 47400 | 0.1606 | 1.0450 |
| 0.0894 | 8.59 | 47500 | 0.1605 | 1.0445 |
| 0.0894 | 8.61 | 47600 | 0.1660 | 1.0402 |
| 0.0894 | 8.63 | 47700 | 0.1618 | 1.0444 |
| 0.0894 | 8.65 | 47800 | 0.1669 | 1.0333 |
| 0.0894 | 8.66 | 47900 | 0.1627 | 1.0364 |
| 0.0885 | 8.68 | 48000 | 0.1616 | 1.0334 |
| 0.0885 | 8.7 | 48100 | 0.1626 | 1.0564 |
| 0.0885 | 8.72 | 48200 | 0.1624 | 1.0396 |
| 0.0885 | 8.74 | 48300 | 0.1623 | 1.0396 |
| 0.0885 | 8.76 | 48400 | 0.1612 | 1.0112 |
| 0.0888 | 8.77 | 48500 | 0.1638 | 1.0292 |
| 0.0888 | 8.79 | 48600 | 0.1639 | 0.9988 |
| 0.0888 | 8.81 | 48700 | 0.1618 | 1.0127 |
| 0.0888 | 8.83 | 48800 | 0.1584 | 1.0042 |
| 0.0888 | 8.85 | 48900 | 0.1615 | 1.0041 |
| 0.0887 | 8.86 | 49000 | 0.1637 | 1.0269 |
| 0.0887 | 8.88 | 49100 | 0.1627 | 0.9989 |
| 0.0887 | 8.9 | 49200 | 0.1583 | 1.0104 |
| 0.0887 | 8.92 | 49300 | 0.1600 | 1.0214 |
| 0.0887 | 8.94 | 49400 | 0.1599 | 1.0126 |
| 0.0893 | 8.95 | 49500 | 0.1595 | 1.0516 |
| 0.0893 | 8.97 | 49600 | 0.1625 | 1.0464 |
| 0.0893 | 8.99 | 49700 | 0.1595 | 1.0361 |
| 0.0893 | 9.01 | 49800 | 0.1614 | 1.0469 |
| 0.0893 | 9.03 | 49900 | 0.1612 | 1.0304 |
| 0.0834 | 9.04 | 50000 | 0.1643 | 1.0335 |
| 0.0834 | 9.06 | 50100 | 0.1640 | 1.0175 |
| 0.0834 | 9.08 | 50200 | 0.1655 | 1.0264 |
| 0.0834 | 9.1 | 50300 | 0.1678 | 1.0243 |
| 0.0834 | 9.12 | 50400 | 0.1659 | 1.0145 |
| 0.079 | 9.14 | 50500 | 0.1644 | 1.0316 |
| 0.079 | 9.15 | 50600 | 0.1630 | 1.0326 |
| 0.079 | 9.17 | 50700 | 0.1634 | 1.0154 |
| 0.079 | 9.19 | 50800 | 0.1697 | 1.0095 |
| 0.079 | 9.21 | 50900 | 0.1678 | 1.0050 |
| 0.078 | 9.23 | 51000 | 0.1626 | 1.0159 |
| 0.078 | 9.24 | 51100 | 0.1666 | 1.0238 |
| 0.078 | 9.26 | 51200 | 0.1644 | 1.0244 |
| 0.078 | 9.28 | 51300 | 0.1655 | 1.0345 |
| 0.078 | 9.3 | 51400 | 0.1615 | 1.0237 |
| 0.0776 | 9.32 | 51500 | 0.1664 | 1.0180 |
| 0.0776 | 9.33 | 51600 | 0.1603 | 1.0208 |
| 0.0776 | 9.35 | 51700 | 0.1594 | 1.0230 |
| 0.0776 | 9.37 | 51800 | 0.1622 | 1.0201 |
| 0.0776 | 9.39 | 51900 | 0.1596 | 1.0039 |
| 0.0782 | 9.41 | 52000 | 0.1645 | 1.0204 |
| 0.0782 | 9.42 | 52100 | 0.1640 | 1.0318 |
| 0.0782 | 9.44 | 52200 | 0.1621 | 1.0290 |
| 0.0782 | 9.46 | 52300 | 0.1638 | 1.0318 |
| 0.0782 | 9.48 | 52400 | 0.1613 | 1.0217 |
| 0.0782 | 9.5 | 52500 | 0.1609 | 1.0261 |
| 0.0782 | 9.52 | 52600 | 0.1625 | 1.0101 |
| 0.0782 | 9.53 | 52700 | 0.1613 | 1.0058 |
| 0.0782 | 9.55 | 52800 | 0.1599 | 1.0068 |
| 0.0782 | 9.57 | 52900 | 0.1600 | 1.0110 |
| 0.0797 | 9.59 | 53000 | 0.1594 | 1.0171 |
| 0.0797 | 9.61 | 53100 | 0.1583 | 1.0124 |
| 0.0797 | 9.62 | 53200 | 0.1646 | 1.0093 |
| 0.0797 | 9.64 | 53300 | 0.1580 | 1.0201 |
| 0.0797 | 9.66 | 53400 | 0.1599 | 1.0207 |
| 0.0783 | 9.68 | 53500 | 0.1577 | 1.0226 |
| 0.0783 | 9.7 | 53600 | 0.1593 | 1.0160 |
| 0.0783 | 9.71 | 53700 | 0.1570 | 1.0173 |
| 0.0783 | 9.73 | 53800 | 0.1614 | 1.0299 |
| 0.0783 | 9.75 | 53900 | 0.1610 | 1.0184 |
| 0.0779 | 9.77 | 54000 | 0.1606 | 1.0173 |
| 0.0779 | 9.79 | 54100 | 0.1577 | 1.0032 |
| 0.0779 | 9.8 | 54200 | 0.1590 | 1.0070 |
| 0.0779 | 9.82 | 54300 | 0.1580 | 1.0257 |
| 0.0779 | 9.84 | 54400 | 0.1592 | 1.0108 |
| 0.0778 | 9.86 | 54500 | 0.1617 | 0.9907 |
| 0.0778 | 9.88 | 54600 | 0.1605 | 1.0189 |
| 0.0778 | 9.89 | 54700 | 0.1605 | 1.0177 |
| 0.0778 | 9.91 | 54800 | 0.1536 | 1.0275 |
| 0.0778 | 9.93 | 54900 | 0.1658 | 1.0282 |
| 0.0777 | 9.95 | 55000 | 0.1543 | 1.0385 |
| 0.0777 | 9.97 | 55100 | 0.1559 | 1.0375 |
| 0.0777 | 9.99 | 55200 | 0.1590 | 1.0215 |
| 0.0777 | 10.0 | 55300 | 0.1624 | 1.0242 |
| 0.0777 | 10.02 | 55400 | 0.1635 | 1.0244 |
| 0.0712 | 10.04 | 55500 | 0.1629 | 1.0298 |
| 0.0712 | 10.06 | 55600 | 0.1601 | 1.0299 |
| 0.0712 | 10.08 | 55700 | 0.1625 | 1.0117 |
| 0.0712 | 10.09 | 55800 | 0.1650 | 1.0233 |
| 0.0712 | 10.11 | 55900 | 0.1631 | 1.0061 |
| 0.0667 | 10.13 | 56000 | 0.1637 | 1.0226 |
| 0.0667 | 10.15 | 56100 | 0.1607 | 1.0042 |
| 0.0667 | 10.17 | 56200 | 0.1599 | 1.0117 |
| 0.0667 | 10.18 | 56300 | 0.1623 | 1.0246 |
| 0.0667 | 10.2 | 56400 | 0.1639 | 1.0294 |
| 0.0695 | 10.22 | 56500 | 0.1650 | 1.0232 |
| 0.0695 | 10.24 | 56600 | 0.1620 | 1.0289 |
| 0.0695 | 10.26 | 56700 | 0.1667 | 1.0209 |
| 0.0695 | 10.27 | 56800 | 0.1580 | 1.0163 |
| 0.0695 | 10.29 | 56900 | 0.1646 | 1.0293 |
| 0.0686 | 10.31 | 57000 | 0.1636 | 1.0106 |
| 0.0686 | 10.33 | 57100 | 0.1586 | 1.0044 |
| 0.0686 | 10.35 | 57200 | 0.1582 | 1.0213 |
| 0.0686 | 10.37 | 57300 | 0.1627 | 1.0151 |
| 0.0686 | 10.38 | 57400 | 0.1619 | 1.0248 |
| 0.0686 | 10.4 | 57500 | 0.1596 | 1.0098 |
| 0.0686 | 10.42 | 57600 | 0.1606 | 1.0031 |
| 0.0686 | 10.44 | 57700 | 0.1620 | 1.0046 |
| 0.0686 | 10.46 | 57800 | 0.1592 | 1.0018 |
| 0.0686 | 10.47 | 57900 | 0.1592 | 1.0058 |
| 0.0669 | 10.49 | 58000 | 0.1605 | 0.9961 |
| 0.0669 | 10.51 | 58100 | 0.1632 | 1.0102 |
| 0.0669 | 10.53 | 58200 | 0.1593 | 1.0061 |
| 0.0669 | 10.55 | 58300 | 0.1586 | 1.0091 |
| 0.0669 | 10.56 | 58400 | 0.1603 | 1.0085 |
| 0.068 | 10.58 | 58500 | 0.1579 | 1.0031 |
| 0.068 | 10.6 | 58600 | 0.1591 | 1.0021 |
| 0.068 | 10.62 | 58700 | 0.1590 | 1.0163 |
| 0.068 | 10.64 | 58800 | 0.1584 | 1.0045 |
| 0.068 | 10.65 | 58900 | 0.1594 | 1.0158 |
| 0.0693 | 10.67 | 59000 | 0.1568 | 1.0052 |
| 0.0693 | 10.69 | 59100 | 0.1581 | 0.9955 |
| 0.0693 | 10.71 | 59200 | 0.1622 | 0.9917 |
| 0.0693 | 10.73 | 59300 | 0.1580 | 1.0018 |
| 0.0693 | 10.75 | 59400 | 0.1601 | 1.0077 |
| 0.0699 | 10.76 | 59500 | 0.1605 | 0.9997 |
| 0.0699 | 10.78 | 59600 | 0.1585 | 1.0009 |
| 0.0699 | 10.8 | 59700 | 0.1541 | 1.0058 |
| 0.0699 | 10.82 | 59800 | 0.1583 | 1.0026 |
| 0.0699 | 10.84 | 59900 | 0.1592 | 0.9992 |
| 0.0671 | 10.85 | 60000 | 0.1590 | 1.0004 |
| 0.0671 | 10.87 | 60100 | 0.1585 | 1.0060 |
| 0.0671 | 10.89 | 60200 | 0.1579 | 1.0063 |
| 0.0671 | 10.91 | 60300 | 0.1582 | 0.9949 |
| 0.0671 | 10.93 | 60400 | 0.1562 | 1.0004 |
| 0.0661 | 10.94 | 60500 | 0.1560 | 0.9950 |
| 0.0661 | 10.96 | 60600 | 0.1564 | 0.9990 |
| 0.0661 | 10.98 | 60700 | 0.1552 | 0.9982 |
| 0.0661 | 11.0 | 60800 | 0.1596 | 1.0018 |
| 0.0661 | 11.02 | 60900 | 0.1618 | 0.9905 |
| 0.0634 | 11.03 | 61000 | 0.1652 | 0.9890 |
| 0.0634 | 11.05 | 61100 | 0.1649 | 0.9886 |
| 0.0634 | 11.07 | 61200 | 0.1668 | 0.9870 |
| 0.0634 | 11.09 | 61300 | 0.1663 | 0.9921 |
| 0.0634 | 11.11 | 61400 | 0.1650 | 0.9919 |
| 0.0587 | 11.13 | 61500 | 0.1674 | 0.9831 |
| 0.0587 | 11.14 | 61600 | 0.1633 | 0.9793 |
| 0.0587 | 11.16 | 61700 | 0.1665 | 0.9781 |
| 0.0587 | 11.18 | 61800 | 0.1642 | 0.9821 |
| 0.0587 | 11.2 | 61900 | 0.1638 | 0.9797 |
| 0.0581 | 11.22 | 62000 | 0.1628 | 0.9727 |
| 0.0581 | 11.23 | 62100 | 0.1661 | 0.9796 |
| 0.0581 | 11.25 | 62200 | 0.1641 | 0.9830 |
| 0.0581 | 11.27 | 62300 | 0.1601 | 0.9867 |
| 0.0581 | 11.29 | 62400 | 0.1626 | 0.9757 |
| 0.0584 | 11.31 | 62500 | 0.1632 | 1.0014 |
| 0.0584 | 11.32 | 62600 | 0.1626 | 1.0052 |
| 0.0584 | 11.34 | 62700 | 0.1586 | 1.0098 |
| 0.0584 | 11.36 | 62800 | 0.1597 | 1.0151 |
| 0.0584 | 11.38 | 62900 | 0.1624 | 1.0054 |
| 0.0589 | 11.4 | 63000 | 0.1618 | 1.0018 |
| 0.0589 | 11.41 | 63100 | 0.1635 | 1.0032 |
| 0.0589 | 11.43 | 63200 | 0.1654 | 1.0142 |
| 0.0589 | 11.45 | 63300 | 0.1646 | 1.0031 |
| 0.0589 | 11.47 | 63400 | 0.1618 | 1.0118 |
| 0.0579 | 11.49 | 63500 | 0.1634 | 1.0218 |
| 0.0579 | 11.51 | 63600 | 0.1616 | 1.0179 |
| 0.0579 | 11.52 | 63700 | 0.1603 | 1.0036 |
| 0.0579 | 11.54 | 63800 | 0.1610 | 1.0150 |
| 0.0579 | 11.56 | 63900 | 0.1605 | 1.0285 |
| 0.0572 | 11.58 | 64000 | 0.1621 | 1.0261 |
| 0.0572 | 11.6 | 64100 | 0.1625 | 1.0252 |
| 0.0572 | 11.61 | 64200 | 0.1677 | 1.0257 |
| 0.0572 | 11.63 | 64300 | 0.1656 | 1.0243 |
| 0.0572 | 11.65 | 64400 | 0.1669 | 1.0270 |
| 0.0592 | 11.67 | 64500 | 0.1605 | 1.0305 |
| 0.0592 | 11.69 | 64600 | 0.1633 | 1.0277 |
| 0.0592 | 11.7 | 64700 | 0.1606 | 1.0176 |
| 0.0592 | 11.72 | 64800 | 0.1618 | 1.0249 |
| 0.0592 | 11.74 | 64900 | 0.1609 | 1.0113 |
| 0.0595 | 11.76 | 65000 | 0.1609 | 1.0254 |
| 0.0595 | 11.78 | 65100 | 0.1662 | 1.0275 |
| 0.0595 | 11.79 | 65200 | 0.1652 | 1.0164 |
| 0.0595 | 11.81 | 65300 | 0.1638 | 1.0266 |
| 0.0595 | 11.83 | 65400 | 0.1589 | 1.0274 |
| 0.0588 | 11.85 | 65500 | 0.1607 | 1.0136 |
| 0.0588 | 11.87 | 65600 | 0.1592 | 1.0136 |
| 0.0588 | 11.88 | 65700 | 0.1581 | 1.0183 |
| 0.0588 | 11.9 | 65800 | 0.1587 | 1.0133 |
| 0.0588 | 11.92 | 65900 | 0.1596 | 1.0170 |
| 0.0558 | 11.94 | 66000 | 0.1590 | 1.0161 |
| 0.0558 | 11.96 | 66100 | 0.1597 | 1.0193 |
| 0.0558 | 11.98 | 66200 | 0.1590 | 1.0193 |
| 0.0558 | 11.99 | 66300 | 0.1608 | 1.0242 |
| 0.0558 | 12.01 | 66400 | 0.1642 | 1.0231 |
| 0.0555 | 12.03 | 66500 | 0.1679 | 1.0168 |
| 0.0555 | 12.05 | 66600 | 0.1674 | 1.0083 |
| 0.0555 | 12.07 | 66700 | 0.1658 | 1.0069 |
| 0.0555 | 12.08 | 66800 | 0.1661 | 1.0134 |
| 0.0555 | 12.1 | 66900 | 0.1682 | 1.0274 |
| 0.0508 | 12.12 | 67000 | 0.1702 | 1.0219 |
| 0.0508 | 12.14 | 67100 | 0.1694 | 1.0219 |
| 0.0508 | 12.16 | 67200 | 0.1667 | 1.0236 |
| 0.0508 | 12.17 | 67300 | 0.1672 | 1.0253 |
| 0.0508 | 12.19 | 67400 | 0.1640 | 1.0215 |
| 0.0513 | 12.21 | 67500 | 0.1649 | 1.0242 |
| 0.0513 | 12.23 | 67600 | 0.1687 | 1.0262 |
| 0.0513 | 12.25 | 67700 | 0.1655 | 1.0231 |
| 0.0513 | 12.26 | 67800 | 0.1692 | 1.0176 |
| 0.0513 | 12.28 | 67900 | 0.1675 | 1.0202 |
| 0.0519 | 12.3 | 68000 | 0.1644 | 1.0241 |
| 0.0519 | 12.32 | 68100 | 0.1651 | 1.0297 |
| 0.0519 | 12.34 | 68200 | 0.1661 | 1.0287 |
| 0.0519 | 12.36 | 68300 | 0.1665 | 1.0257 |
| 0.0519 | 12.37 | 68400 | 0.1685 | 1.0233 |
| 0.0522 | 12.39 | 68500 | 0.1636 | 1.0177 |
| 0.0522 | 12.41 | 68600 | 0.1709 | 1.0200 |
| 0.0522 | 12.43 | 68700 | 0.1684 | 1.0164 |
| 0.0522 | 12.45 | 68800 | 0.1666 | 1.0119 |
| 0.0522 | 12.46 | 68900 | 0.1683 | 1.0136 |
| 0.05 | 12.48 | 69000 | 0.1696 | 1.0127 |
| 0.05 | 12.5 | 69100 | 0.1708 | 1.0184 |
| 0.05 | 12.52 | 69200 | 0.1654 | 1.0282 |
| 0.05 | 12.54 | 69300 | 0.1700 | 1.0235 |
| 0.05 | 12.55 | 69400 | 0.1688 | 1.0257 |
| 0.0513 | 12.57 | 69500 | 0.1646 | 1.0274 |
| 0.0513 | 12.59 | 69600 | 0.1660 | 1.0247 |
| 0.0513 | 12.61 | 69700 | 0.1657 | 1.0188 |
| 0.0513 | 12.63 | 69800 | 0.1654 | 1.0087 |
| 0.0513 | 12.64 | 69900 | 0.1681 | 1.0146 |
| 0.0512 | 12.66 | 70000 | 0.1660 | 1.0185 |
| 0.0512 | 12.68 | 70100 | 0.1690 | 1.0214 |
| 0.0512 | 12.7 | 70200 | 0.1683 | 1.0160 |
| 0.0512 | 12.72 | 70300 | 0.1695 | 1.0198 |
| 0.0512 | 12.74 | 70400 | 0.1666 | 1.0193 |
| 0.0484 | 12.75 | 70500 | 0.1654 | 1.0142 |
| 0.0484 | 12.77 | 70600 | 0.1598 | 1.0154 |
| 0.0484 | 12.79 | 70700 | 0.1623 | 1.0139 |
| 0.0484 | 12.81 | 70800 | 0.1662 | 1.0180 |
| 0.0484 | 12.83 | 70900 | 0.1659 | 1.0232 |
| 0.0501 | 12.84 | 71000 | 0.1662 | 1.0202 |
| 0.0501 | 12.86 | 71100 | 0.1639 | 1.0161 |
| 0.0501 | 12.88 | 71200 | 0.1666 | 1.0151 |
| 0.0501 | 12.9 | 71300 | 0.1644 | 1.0129 |
| 0.0501 | 12.92 | 71400 | 0.1642 | 1.0171 |
| 0.0482 | 12.93 | 71500 | 0.1635 | 1.0162 |
| 0.0482 | 12.95 | 71600 | 0.1637 | 1.0186 |
| 0.0482 | 12.97 | 71700 | 0.1639 | 1.0142 |
| 0.0482 | 12.99 | 71800 | 0.1643 | 1.0122 |
| 0.0482 | 13.01 | 71900 | 0.1679 | 1.0156 |
| 0.0483 | 13.02 | 72000 | 0.1717 | 1.0224 |
| 0.0483 | 13.04 | 72100 | 0.1742 | 1.0229 |
| 0.0483 | 13.06 | 72200 | 0.1718 | 1.0237 |
| 0.0483 | 13.08 | 72300 | 0.1742 | 1.0266 |
| 0.0483 | 13.1 | 72400 | 0.1736 | 1.0257 |
| 0.0443 | 13.12 | 72500 | 0.1741 | 1.0275 |
| 0.0443 | 13.13 | 72600 | 0.1745 | 1.0325 |
| 0.0443 | 13.15 | 72700 | 0.1737 | 1.0296 |
| 0.0443 | 13.17 | 72800 | 0.1722 | 1.0303 |
| 0.0443 | 13.19 | 72900 | 0.1702 | 1.0305 |
| 0.0424 | 13.21 | 73000 | 0.1733 | 1.0241 |
| 0.0424 | 13.22 | 73100 | 0.1748 | 1.0243 |
| 0.0424 | 13.24 | 73200 | 0.1760 | 1.0231 |
| 0.0424 | 13.26 | 73300 | 0.1745 | 1.0241 |
| 0.0424 | 13.28 | 73400 | 0.1772 | 1.0217 |
| 0.0424 | 13.3 | 73500 | 0.1755 | 1.0206 |
| 0.0424 | 13.31 | 73600 | 0.1743 | 1.0242 |
| 0.0424 | 13.33 | 73700 | 0.1738 | 1.0208 |
| 0.0424 | 13.35 | 73800 | 0.1736 | 1.0249 |
| 0.0424 | 13.37 | 73900 | 0.1747 | 1.0271 |
| 0.0437 | 13.39 | 74000 | 0.1707 | 1.0241 |
| 0.0437 | 13.4 | 74100 | 0.1731 | 1.0269 |
| 0.0437 | 13.42 | 74200 | 0.1743 | 1.0290 |
| 0.0437 | 13.44 | 74300 | 0.1739 | 1.0266 |
| 0.0437 | 13.46 | 74400 | 0.1763 | 1.0246 |
| 0.0443 | 13.48 | 74500 | 0.1724 | 1.0209 |
| 0.0443 | 13.49 | 74600 | 0.1744 | 1.0244 |
| 0.0443 | 13.51 | 74700 | 0.1717 | 1.0232 |
| 0.0443 | 13.53 | 74800 | 0.1754 | 1.0217 |
| 0.0443 | 13.55 | 74900 | 0.1721 | 1.0234 |
| 0.0435 | 13.57 | 75000 | 0.1751 | 1.0197 |
| 0.0435 | 13.59 | 75100 | 0.1727 | 1.0285 |
| 0.0435 | 13.6 | 75200 | 0.1715 | 1.0221 |
| 0.0435 | 13.62 | 75300 | 0.1746 | 1.0247 |
| 0.0435 | 13.64 | 75400 | 0.1712 | 1.0231 |
| 0.0436 | 13.66 | 75500 | 0.1719 | 1.0228 |
| 0.0436 | 13.68 | 75600 | 0.1727 | 1.0197 |
| 0.0436 | 13.69 | 75700 | 0.1750 | 1.0252 |
| 0.0436 | 13.71 | 75800 | 0.1702 | 1.0241 |
| 0.0436 | 13.73 | 75900 | 0.1720 | 1.0250 |
| 0.0433 | 13.75 | 76000 | 0.1744 | 1.0210 |
| 0.0433 | 13.77 | 76100 | 0.1735 | 1.0211 |
| 0.0433 | 13.78 | 76200 | 0.1727 | 1.0205 |
| 0.0433 | 13.8 | 76300 | 0.1706 | 1.0218 |
| 0.0433 | 13.82 | 76400 | 0.1709 | 1.0238 |
| 0.0431 | 13.84 | 76500 | 0.1705 | 1.0197 |
| 0.0431 | 13.86 | 76600 | 0.1734 | 1.0223 |
| 0.0431 | 13.87 | 76700 | 0.1695 | 1.0250 |
| 0.0431 | 13.89 | 76800 | 0.1734 | 1.0232 |
| 0.0431 | 13.91 | 76900 | 0.1724 | 1.0219 |
| 0.041 | 13.93 | 77000 | 0.1706 | 1.0236 |
| 0.041 | 13.95 | 77100 | 0.1689 | 1.0220 |
| 0.041 | 13.97 | 77200 | 0.1738 | 1.0230 |
| 0.041 | 13.98 | 77300 | 0.1727 | 1.0254 |
| 0.041 | 14.0 | 77400 | 0.1721 | 1.0261 |
| 0.041 | 14.02 | 77500 | 0.1760 | 1.0261 |
| 0.041 | 14.04 | 77600 | 0.1772 | 1.0202 |
| 0.041 | 14.06 | 77700 | 0.1782 | 1.0202 |
| 0.041 | 14.07 | 77800 | 0.1777 | 1.0222 |
| 0.041 | 14.09 | 77900 | 0.1787 | 1.0203 |
| 0.0383 | 14.11 | 78000 | 0.1790 | 1.0236 |
| 0.0383 | 14.13 | 78100 | 0.1812 | 1.0245 |
| 0.0383 | 14.15 | 78200 | 0.1778 | 1.0224 |
| 0.0383 | 14.16 | 78300 | 0.1771 | 1.0231 |
| 0.0383 | 14.18 | 78400 | 0.1782 | 1.0242 |
| 0.0391 | 14.2 | 78500 | 0.1785 | 1.0262 |
| 0.0391 | 14.22 | 78600 | 0.1791 | 1.0261 |
| 0.0391 | 14.24 | 78700 | 0.1770 | 1.0254 |
| 0.0391 | 14.25 | 78800 | 0.1810 | 1.0257 |
| 0.0391 | 14.27 | 78900 | 0.1794 | 1.0241 |
| 0.0387 | 14.29 | 79000 | 0.1774 | 1.0256 |
| 0.0387 | 14.31 | 79100 | 0.1774 | 1.0236 |
| 0.0387 | 14.33 | 79200 | 0.1759 | 1.0222 |
| 0.0387 | 14.35 | 79300 | 0.1787 | 1.0237 |
| 0.0387 | 14.36 | 79400 | 0.1788 | 1.0227 |
| 0.0372 | 14.38 | 79500 | 0.1789 | 1.0232 |
| 0.0372 | 14.4 | 79600 | 0.1771 | 1.0254 |
| 0.0372 | 14.42 | 79700 | 0.1777 | 1.0244 |
| 0.0372 | 14.44 | 79800 | 0.1791 | 1.0225 |
| 0.0372 | 14.45 | 79900 | 0.1786 | 1.0237 |
| 0.0385 | 14.47 | 80000 | 0.1782 | 1.0243 |
| 0.0385 | 14.49 | 80100 | 0.1770 | 1.0236 |
| 0.0385 | 14.51 | 80200 | 0.1782 | 1.0240 |
| 0.0385 | 14.53 | 80300 | 0.1764 | 1.0243 |
| 0.0385 | 14.54 | 80400 | 0.1748 | 1.0248 |
| 0.039 | 14.56 | 80500 | 0.1758 | 1.0232 |
| 0.039 | 14.58 | 80600 | 0.1763 | 1.0246 |
| 0.039 | 14.6 | 80700 | 0.1770 | 1.0220 |
| 0.039 | 14.62 | 80800 | 0.1788 | 1.0225 |
| 0.039 | 14.63 | 80900 | 0.1781 | 1.0230 |
| 0.039 | 14.65 | 81000 | 0.1779 | 1.0230 |
| 0.039 | 14.67 | 81100 | 0.1755 | 1.0212 |
| 0.039 | 14.69 | 81200 | 0.1765 | 1.0226 |
| 0.039 | 14.71 | 81300 | 0.1787 | 1.0241 |
| 0.039 | 14.72 | 81400 | 0.1782 | 1.0250 |
| 0.0368 | 14.74 | 81500 | 0.1780 | 1.0248 |
| 0.0368 | 14.76 | 81600 | 0.1782 | 1.0242 |
| 0.0368 | 14.78 | 81700 | 0.1782 | 1.0242 |
| 0.0368 | 14.8 | 81800 | 0.1792 | 1.0241 |
| 0.0368 | 14.82 | 81900 | 0.1796 | 1.0238 |
| 0.0378 | 14.83 | 82000 | 0.1795 | 1.0236 |
| 0.0378 | 14.85 | 82100 | 0.1796 | 1.0239 |
| 0.0378 | 14.87 | 82200 | 0.1792 | 1.0236 |
| 0.0378 | 14.89 | 82300 | 0.1789 | 1.0239 |
| 0.0378 | 14.91 | 82400 | 0.1788 | 1.0238 |
| 0.0386 | 14.92 | 82500 | 0.1787 | 1.0239 |
| 0.0386 | 14.94 | 82600 | 0.1786 | 1.0236 |
| 0.0386 | 14.96 | 82700 | 0.1786 | 1.0237 |
| 0.0386 | 14.98 | 82800 | 0.1787 | 1.0239 |
| 0.0386 | 15.0 | 82900 | 0.1788 | 1.0238 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
CLAck/en-km | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"translation",
"autotrain_compatible"
] | translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- conversational
- tagalog
- filipino
language:
- tl
---
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 10% of the tokens in each utterance of the synthetic conversations are machine-generated.
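For illustration only, the kind of token-level augmentation described above (a share of tokens replaced by machine-generated ones) can be sketched with a fill-mask model; the checkpoint name, masking strategy, and handling of the 10% ratio below are assumptions, not the authors' exact procedure:
```python
import random
from transformers import pipeline

# Stand-in multilingual MLM; the actual RoBERTa model used in the research is not named here.
fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

def augment_utterance(text: str, ratio: float = 0.10) -> str:
    """Replace roughly `ratio` of the whitespace-separated tokens with MLM predictions."""
    words = text.split()
    n_replace = max(1, int(len(words) * ratio))
    for idx in random.sample(range(len(words)), n_replace):
        masked = words.copy()
        masked[idx] = fill_mask.tokenizer.mask_token
        # keep the top prediction for the masked position
        words[idx] = fill_mask(" ".join(masked))[0]["token_str"].strip()
    return " ".join(words)

print(augment_utterance("kumusta ka na po ngayong araw"))
```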
|
CLAck/en-vi | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- conversational
- tagalog
- filipino
inference: false
language:
- tl
---
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 20% of the tokens in each utterance of the synthetic conversations are machine-generated.
|
CLAck/indo-mixed | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
tags:
- conversational
- tagalog
- filipino
inference: false
language:
- tl
---
# Tagalog DialoGPT
This is an extension of the base Tagalog DialoGPT model (https://huggingface.co/gabtan99/dialogpt-tagalog-medium).
This model is trained on 52K original conversations and 52K synthetic conversations, where 30% of the tokens in each utterance of the synthetic conversations are machine-generated.
|
CLAck/indo-pure | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- conversational
- tagalog
- filipino
language:
- tl
inference: false
datasets:
- gabtan99/pex-conversations
---
# Tagalog DialoGPT
A DialoGPT-medium model fine-tuned on Tagalog conversational data scraped from the web. This model is an output of research on RoBERTa-based data augmentation for low-resource languages. This is the baseline model, which did not use any synthetic data in training.
# Latest release: July 25, 2021
* The model is currently only able to respond based on the 3 most recent utterances of the conversation history. This is a result of the scarce amount of Tagalog conversations in our dataset.
# Dataset
[PEx Conversations Dataset](https://huggingface.co/datasets/gabtan99/pex-conversations)
# Usage
Here is an example of using beam search for model inference.
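The tokenizer and model need to be loaded first. A minimal setup sketch (the Hub id below is an assumption — the card does not state this checkpoint's exact id, so the base Tagalog DialoGPT id is used as a placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# assumption: replace with this fine-tuned checkpoint's actual Hub id if it differs
model_name = "gabtan99/dialogpt-tagalog-medium"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
With `tokenizer` and `model` in place, the beam-search chat loop is: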
```python
for step in range(2):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# we limit the generation to 512 tokens, each utterance in training had a maximum of 128 tokens
chat_history_ids = model.generate(
bot_input_ids, max_length=512,
pad_token_id=tokenizer.eos_token_id,
num_beams=5,
no_repeat_ngram_size=3
)
    # pretty print last output tokens from bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
# Training Script
[Fine-tuning script adapted from Spanish DialoGPT](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb)
# Research by
* [tyadrianpaule](https://huggingface.co/tyadrianpaule)
* [schuylerng](https://huggingface.co/schuylerng)
* [dcl127](https://huggingface.co/dcl127) |
CLAck/vi-en | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | I am adding my first README in order to test the interface. How good is it really? |
CLEE/CLEE | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
This model is used in the paper **Generative Relation Linking for Question Answering over Knowledge Bases**. [ArXiv](https://arxiv.org/abs/2108.07337), [GitHub](https://github.com/IBM/kbqa-relation-linking)
## Citation
```bibtex
@inproceedings{rossiello-genrl-2021,
title={Generative relation linking for question answering over knowledge bases},
author={Rossiello, Gaetano and Mihindukulasooriya, Nandana and Abdelaziz, Ibrahim and Bornea, Mihaela and Gliozzo, Alfio and Naseem, Tahira and Kapanipathi, Pavan},
booktitle={International Semantic Web Conference},
pages={321--337},
year={2021},
organization={Springer},
url = "https://link.springer.com/chapter/10.1007/978-3-030-88361-4_19",
doi = "10.1007/978-3-030-88361-4_19"
}
``` |
CLTL/MedRoBERTa.nl | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,988 | null | ---
license: apache-2.0
---
This model is used in the paper **Generative Relation Linking for Question Answering over Knowledge Bases**. [ArXiv](https://arxiv.org/abs/2108.07337), [GitHub](https://github.com/IBM/kbqa-relation-linking)
## Citation
```bibtex
@inproceedings{rossiello-genrl-2021,
title={Generative relation linking for question answering over knowledge bases},
author={Rossiello, Gaetano and Mihindukulasooriya, Nandana and Abdelaziz, Ibrahim and Bornea, Mihaela and Gliozzo, Alfio and Naseem, Tahira and Kapanipathi, Pavan},
booktitle={International Semantic Web Conference},
pages={321--337},
year={2021},
organization={Springer},
url = "https://link.springer.com/chapter/10.1007/978-3-030-88361-4_19",
doi = "10.1007/978-3-030-88361-4_19"
}
```
|
CLTL/gm-ner-xlmrbase | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"nl",
"transformers",
"dighum",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
---
This model is used in the paper **Generative Relation Linking for Question Answering over Knowledge Bases**. [ArXiv](https://arxiv.org/abs/2108.07337), [GitHub](https://github.com/IBM/kbqa-relation-linking)
## Citation
```bibtex
@inproceedings{rossiello-genrl-2021,
title={Generative relation linking for question answering over knowledge bases},
author={Rossiello, Gaetano and Mihindukulasooriya, Nandana and Abdelaziz, Ibrahim and Bornea, Mihaela and Gliozzo, Alfio and Naseem, Tahira and Kapanipathi, Pavan},
booktitle={International Semantic Web Conference},
pages={321--337},
year={2021},
organization={Springer},
url = "https://link.springer.com/chapter/10.1007/978-3-030-88361-4_19",
doi = "10.1007/978-3-030-88361-4_19"
}
```
|
Cameron/BERT-SBIC-offensive | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | # Leetcode using AI :robot:
GPT-2 Model for Leetcode Questions in python
**Note**: the answers might not make sense in some cases because of the bias in GPT-2
**Contributions:** If you would like to make the model better, contributions are welcome. Check out [CONTRIBUTIONS.md](https://github.com/gagan3012/project-code-py/blob/master/CONTRIBUTIONS.md)
### 📢 Favour:
It would be highly motivating if you could STAR⭐ this repo, should you find it helpful.
## Model
Two models have been developed for different use cases and they can be found at https://huggingface.co/gagan3012
The model weights can be found here: [GPT-2](https://huggingface.co/gagan3012/project-code-py) and [DistilGPT-2](https://huggingface.co/gagan3012/project-code-py-small)
### Example usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/project-code-py")
model = AutoModelWithLMHead.from_pretrained("gagan3012/project-code-py")
```
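The snippet above only loads the weights; generating an answer from a problem statement could look like the sketch below (the prompt and decoding settings are illustrative assumptions, not values prescribed by the repo):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("gagan3012/project-code-py")
model = AutoModelWithLMHead.from_pretrained("gagan3012/project-code-py")

# encode a Leetcode-style problem statement as the prompt (illustrative only)
prompt = "Write a function to reverse a singly-linked list."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# sample a continuation; these decoding settings are assumptions, tune as needed
output_ids = model.generate(
    input_ids,
    max_length=256,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```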
## Demo
[](https://share.streamlit.io/gagan3012/project-code-py/app.py)
A Streamlit web app has been set up to use the model: https://share.streamlit.io/gagan3012/project-code-py/app.py

## Example results:
### Question:
```
Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted directly. It is guaranteed that the node to be deleted is not a tail node in the list.
```
### Answer:
```python
""" Write a function to delete a node in a singly-linked list. You will not be given access to the head of the list, instead you will be given access to the node to be deleted directly. It is guaranteed that the node to be deleted is not a tail node in the list.
For example,
a = 1->2->3
b = 3->1->2
t = ListNode(-1, 1)
Note: The lexicographic ordering of the nodes in a tree matters. Do not assign values to nodes in a tree.
Example 1:
Input: [1,2,3]
Output: 1->2->5
Explanation: 1->2->3->3->4, then 1->2->5[2] and then 5->1->3->4.
Note:
The length of a linked list will be in the range [1, 1000].
Node.val must be a valid LinkedListNode type.
Both the length and the value of the nodes in a linked list will be in the range [-1000, 1000].
All nodes are distinct.
"""
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution:
def deleteNode(self, head: ListNode, val: int) -> None:
"""
BFS
Linked List
:param head: ListNode
:param val: int
:return: ListNode
"""
if head is not None:
return head
dummy = ListNode(-1, 1)
dummy.next = head
dummy.next.val = val
dummy.next.next = head
dummy.val = ""
s1 = Solution()
print(s1.deleteNode(head))
print(s1.deleteNode(-1))
print(s1.deleteNode(-1))
```
|
Capreolus/bert-base-msmarco | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"arxiv:2008.09093",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 238 | null | ---
language:
- hi-en
tags:
- sentiment
- multilingual
- hindi codemix
- hinglish
license: apache-2.0
datasets:
- sail
---
# Sentiment Classification for hinglish text: `gk-hinglish-sentiment`
## Model description
Trained on a small dataset of reviews.
## Intended uses & limitations
I wanted something that works well with Hinglish data, as it is widely used in India.
The training data was smaller than expected.
#### How to use
```python
#sample code
from transformers import BertTokenizer, BertForSequenceClassification
tokenizerg = BertTokenizer.from_pretrained("/content/model")
modelg = BertForSequenceClassification.from_pretrained("/content/model")
text = "kuch bhi type karo hinglish mai"
encoded_input = tokenizerg(text, return_tensors='pt')
output = modelg(**encoded_input)
print(output)
# output contains 3 labels: LABEL_0 = Negative, LABEL_1 = Neutral, LABEL_2 = Positive
```
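Continuing from the `output` object in the snippet above, a small post-processing step (not part of the original card) maps the logits to one of the three labels:
```python
import torch

# pick the highest-scoring class and map it to the label names listed above
labels = ["Negative", "Neutral", "Positive"]
probs = torch.softmax(output.logits, dim=-1)[0]
print(labels[int(torch.argmax(probs))], probs.tolist())
```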
#### Limitations and bias
The data contains only Hinglish code-mixed text and was very limited; I may update this model if I can get a good amount of data.
## Training data
The training data contains labeled examples for the 3 sentiment labels.
A link to the pre-trained model card, with a description of the pre-training data, is given below.
I have fine-tuned the model below:
https://huggingface.co/rohanrajpal/bert-base-multilingual-codemixed-cased-sentiment
### BibTeX entry and citation info
```bibtex
@inproceedings{khanuja-etal-2020-gluecos,
title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
author = "Khanuja, Simran and
Dandapat, Sandipan and
Srinivasan, Anirudh and
Sitaram, Sunayana and
Choudhury, Monojit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.329",
pages = "3575--3585"
}
```
|
dccuchile/albert-base-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# gaussfer/test_simcse_new
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gaussfer/test_simcse_new')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gaussfer/test_simcse_new')
model = AutoModel.from_pretrained('gaussfer/test_simcse_new')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
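As a quick sanity check (this snippet is not part of the original card), the cosine similarity between the two pooled sentence vectors computed above can be printed directly:
```python
import torch.nn.functional as F

# cosine similarity between the two sentence embeddings from the snippet above
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print("Cosine similarity:", similarity.item())
```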
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gaussfer/test_simcse_new)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 875 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method (a usage sketch follows the parameter dump):
```
{
"epochs": 40,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
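For reference, the dump above corresponds roughly to a `fit()` call like the sketch below (the training pairs and DataLoader construction are assumptions, since the card does not describe the training data):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('gaussfer/test_simcse_new')

# hypothetical training pairs; the real training set is not described in this card
train_examples = [InputExample(texts=["a sentence", "a close paraphrase of it"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=40,
    warmup_steps=10000,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```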
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
dccuchile/albert-large-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5363
- Rouge2 Precision: 0.3459
- Rouge2 Recall: 0.2455
- Rouge2 Fmeasure: 0.2731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent training arguments follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
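For readers who want to reproduce this setup, the list above maps roughly onto the following Hugging Face training arguments (a sketch; the `output_dir` value and the use of `Seq2SeqTrainingArguments` are assumptions, since the training script is not included in the card):
```python
from transformers import Seq2SeqTrainingArguments

# values mirror the hyperparameter list above; output_dir is a placeholder
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-pubmed",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```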
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.652 | 1.0 | 1125 | 1.5087 | 0.3647 | 0.2425 | 0.2772 |
| 1.4695 | 2.0 | 2250 | 1.5039 | 0.3448 | 0.2457 | 0.2732 |
| 1.3714 | 3.0 | 3375 | 1.4842 | 0.3509 | 0.2474 | 0.277 |
| 1.2734 | 4.0 | 4500 | 1.4901 | 0.3452 | 0.2426 | 0.2716 |
| 1.1853 | 5.0 | 5625 | 1.5152 | 0.3658 | 0.2371 | 0.2744 |
| 1.0975 | 6.0 | 6750 | 1.5133 | 0.3529 | 0.2417 | 0.2729 |
| 1.0448 | 7.0 | 7875 | 1.5203 | 0.3485 | 0.2464 | 0.275 |
| 0.9999 | 8.0 | 9000 | 1.5316 | 0.3437 | 0.2435 | 0.2719 |
| 0.9732 | 9.0 | 10125 | 1.5338 | 0.3464 | 0.2446 | 0.2732 |
| 0.954 | 10.0 | 11250 | 1.5363 | 0.3459 | 0.2455 | 0.2731 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-large-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-15
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4822
- Rouge2 Precision: 0.7578
- Rouge2 Recall: 0.5933
- Rouge2 Fmeasure: 0.6511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.7006 | 1.0 | 663 | 0.5062 | 0.7492 | 0.5855 | 0.6434 |
| 0.5709 | 2.0 | 1326 | 0.4811 | 0.7487 | 0.5879 | 0.6447 |
| 0.5011 | 3.0 | 1989 | 0.4734 | 0.7541 | 0.5906 | 0.6483 |
| 0.4164 | 4.0 | 2652 | 0.4705 | 0.7515 | 0.5876 | 0.6452 |
| 0.3888 | 5.0 | 3315 | 0.4703 | 0.7555 | 0.5946 | 0.6515 |
| 0.3655 | 6.0 | 3978 | 0.4725 | 0.7572 | 0.5943 | 0.6516 |
| 0.319 | 7.0 | 4641 | 0.4733 | 0.7557 | 0.5911 | 0.6491 |
| 0.3089 | 8.0 | 5304 | 0.4792 | 0.7577 | 0.5936 | 0.6513 |
| 0.2907 | 9.0 | 5967 | 0.4799 | 0.7577 | 0.5931 | 0.6509 |
| 0.275 | 10.0 | 6630 | 0.4822 | 0.7578 | 0.5933 | 0.6511 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-tiny-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-35
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9359
- Rouge2 Precision: 0.5451
- Rouge2 Recall: 0.4232
- Rouge2 Fmeasure: 0.4666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.4156 | 1.0 | 663 | 1.0366 | 0.5165 | 0.3967 | 0.4394 |
| 1.1773 | 2.0 | 1326 | 0.9841 | 0.5354 | 0.4168 | 0.4589 |
| 1.0894 | 3.0 | 1989 | 0.9554 | 0.5346 | 0.4133 | 0.4563 |
| 0.9359 | 4.0 | 2652 | 0.9440 | 0.5357 | 0.4163 | 0.4587 |
| 0.8758 | 5.0 | 3315 | 0.9340 | 0.5428 | 0.4226 | 0.465 |
| 0.8549 | 6.0 | 3978 | 0.9337 | 0.5385 | 0.422 | 0.4634 |
| 0.7743 | 7.0 | 4641 | 0.9330 | 0.542 | 0.422 | 0.4647 |
| 0.7465 | 8.0 | 5304 | 0.9315 | 0.5428 | 0.4231 | 0.4654 |
| 0.7348 | 9.0 | 5967 | 0.9344 | 0.5462 | 0.4244 | 0.4674 |
| 0.7062 | 10.0 | 6630 | 0.9359 | 0.5451 | 0.4232 | 0.4666 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-tiny-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-45
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1797
- Rouge2 Precision: 0.4333
- Rouge2 Recall: 0.3331
- Rouge2 Fmeasure: 0.3684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.7989 | 1.0 | 663 | 1.3385 | 0.4097 | 0.3086 | 0.3444 |
| 1.5072 | 2.0 | 1326 | 1.2582 | 0.4218 | 0.3213 | 0.3569 |
| 1.4023 | 3.0 | 1989 | 1.2236 | 0.4207 | 0.3211 | 0.3562 |
| 1.2205 | 4.0 | 2652 | 1.2025 | 0.4359 | 0.3331 | 0.3696 |
| 1.1584 | 5.0 | 3315 | 1.1910 | 0.4304 | 0.3307 | 0.3658 |
| 1.1239 | 6.0 | 3978 | 1.1830 | 0.4247 | 0.3279 | 0.3618 |
| 1.0384 | 7.0 | 4641 | 1.1761 | 0.4308 | 0.3325 | 0.367 |
| 1.0168 | 8.0 | 5304 | 1.1762 | 0.4314 | 0.3336 | 0.368 |
| 0.9966 | 9.0 | 5967 | 1.1773 | 0.4335 | 0.3341 | 0.369 |
| 0.961 | 10.0 | 6630 | 1.1797 | 0.4333 | 0.3331 | 0.3684 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-tiny-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed-medterm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed-medterm
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Rouge2 Precision: 0.985
- Rouge2 Recall: 0.7208
- Rouge2 Fmeasure: 0.8088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:------:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.0018 | 1.0 | 13833 | 0.0003 | 0.985 | 0.7208 | 0.8088 |
| 0.0014 | 2.0 | 27666 | 0.0006 | 0.9848 | 0.7207 | 0.8086 |
| 0.0009 | 3.0 | 41499 | 0.0002 | 0.9848 | 0.7207 | 0.8086 |
| 0.0007 | 4.0 | 55332 | 0.0002 | 0.985 | 0.7208 | 0.8088 |
| 0.0006 | 5.0 | 69165 | 0.0001 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 6.0 | 82998 | 0.0002 | 0.9846 | 0.7206 | 0.8086 |
| 0.0009 | 7.0 | 96831 | 0.0001 | 0.9848 | 0.7208 | 0.8087 |
| 0.0 | 8.0 | 110664 | 0.0000 | 0.9848 | 0.7207 | 0.8087 |
| 0.0001 | 9.0 | 124497 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
| 0.0 | 10.0 | 138330 | 0.0000 | 0.985 | 0.7208 | 0.8088 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dccuchile/albert-tiny-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7223
- Rouge2 Precision: 0.6572
- Rouge2 Recall: 0.5164
- Rouge2 Fmeasure: 0.5662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.0322 | 1.0 | 663 | 0.7891 | 0.639 | 0.4989 | 0.5491 |
| 0.8545 | 2.0 | 1326 | 0.7433 | 0.6461 | 0.5057 | 0.5556 |
| 0.758 | 3.0 | 1989 | 0.7299 | 0.647 | 0.5033 | 0.5547 |
| 0.6431 | 4.0 | 2652 | 0.7185 | 0.6556 | 0.5101 | 0.5616 |
| 0.6058 | 5.0 | 3315 | 0.7126 | 0.6537 | 0.5144 | 0.5638 |
| 0.5726 | 6.0 | 3978 | 0.7117 | 0.6567 | 0.5169 | 0.5666 |
| 0.5168 | 7.0 | 4641 | 0.7150 | 0.6585 | 0.5154 | 0.566 |
| 0.5011 | 8.0 | 5304 | 0.7220 | 0.6568 | 0.5164 | 0.5664 |
| 0.4803 | 9.0 | 5967 | 0.7208 | 0.6573 | 0.5161 | 0.5662 |
| 0.4577 | 10.0 | 6630 | 0.7223 | 0.6572 | 0.5164 | 0.5662 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-tiny-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-pubmed-1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-pubmed-1.1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4236
- Rouge2 Precision: 0.8482
- Rouge2 Recall: 0.673
- Rouge2 Fmeasure: 0.7347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6534 | 1.0 | 663 | 0.4641 | 0.8448 | 0.6691 | 0.7313 |
| 0.5078 | 2.0 | 1326 | 0.4398 | 0.8457 | 0.6719 | 0.7333 |
| 0.4367 | 3.0 | 1989 | 0.4274 | 0.847 | 0.6717 | 0.7335 |
| 0.3575 | 4.0 | 2652 | 0.4149 | 0.8481 | 0.6733 | 0.735 |
| 0.3319 | 5.0 | 3315 | 0.4170 | 0.8481 | 0.6724 | 0.7343 |
| 0.3179 | 6.0 | 3978 | 0.4264 | 0.8484 | 0.6733 | 0.735 |
| 0.2702 | 7.0 | 4641 | 0.4207 | 0.8489 | 0.6732 | 0.7353 |
| 0.2606 | 8.0 | 5304 | 0.4205 | 0.8487 | 0.6725 | 0.7347 |
| 0.2496 | 9.0 | 5967 | 0.4247 | 0.8466 | 0.6717 | 0.7334 |
| 0.2353 | 10.0 | 6630 | 0.4236 | 0.8482 | 0.673 | 0.7347 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-tiny-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-pubmed
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6340
- Rouge2 Precision: 0.83
- Rouge2 Recall: 0.6526
- Rouge2 Fmeasure: 0.7144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6613 | 1.0 | 663 | 0.4750 | 0.8321 | 0.6552 | 0.7167 |
| 0.4993 | 2.0 | 1326 | 0.4404 | 0.8366 | 0.6583 | 0.7203 |
| 0.443 | 3.0 | 1989 | 0.4261 | 0.8319 | 0.6562 | 0.7176 |
| 0.3482 | 4.0 | 2652 | 0.4198 | 0.8348 | 0.6571 | 0.7191 |
| 0.3206 | 5.0 | 3315 | 0.4233 | 0.8344 | 0.656 | 0.7183 |
| 0.294 | 6.0 | 3978 | 0.4334 | 0.835 | 0.657 | 0.719 |
| 0.2404 | 7.0 | 4641 | 0.4437 | 0.8334 | 0.6559 | 0.7178 |
| 0.2228 | 8.0 | 5304 | 0.4438 | 0.8348 | 0.6565 | 0.7187 |
| 0.211 | 9.0 | 5967 | 0.4516 | 0.8329 | 0.6549 | 0.717 |
| 0.1713 | 10.0 | 6630 | 0.4535 | 0.8332 | 0.6547 | 0.7169 |
| 0.1591 | 11.0 | 7293 | 0.4763 | 0.8349 | 0.6561 | 0.7184 |
| 0.1555 | 12.0 | 7956 | 0.4824 | 0.8311 | 0.6534 | 0.7153 |
| 0.1262 | 13.0 | 8619 | 0.4883 | 0.8322 | 0.655 | 0.7167 |
| 0.1164 | 14.0 | 9282 | 0.5025 | 0.8312 | 0.6539 | 0.7158 |
| 0.1108 | 15.0 | 9945 | 0.5149 | 0.8321 | 0.6535 | 0.7157 |
| 0.0926 | 16.0 | 10608 | 0.5340 | 0.8315 | 0.6544 | 0.7159 |
| 0.0856 | 17.0 | 11271 | 0.5322 | 0.8306 | 0.6518 | 0.7142 |
| 0.0785 | 18.0 | 11934 | 0.5346 | 0.8324 | 0.6549 | 0.7167 |
| 0.071 | 19.0 | 12597 | 0.5488 | 0.8311 | 0.652 | 0.714 |
| 0.0635 | 20.0 | 13260 | 0.5624 | 0.8287 | 0.6517 | 0.7132 |
| 0.0608 | 21.0 | 13923 | 0.5612 | 0.8299 | 0.6527 | 0.7141 |
| 0.0531 | 22.0 | 14586 | 0.5764 | 0.8283 | 0.6498 | 0.7119 |
| 0.0486 | 23.0 | 15249 | 0.5832 | 0.8298 | 0.6532 | 0.7148 |
| 0.0465 | 24.0 | 15912 | 0.5866 | 0.83 | 0.6522 | 0.7142 |
| 0.0418 | 25.0 | 16575 | 0.5825 | 0.83 | 0.6523 | 0.7141 |
| 0.0391 | 26.0 | 17238 | 0.5997 | 0.8306 | 0.6545 | 0.716 |
| 0.0376 | 27.0 | 17901 | 0.5894 | 0.8315 | 0.6546 | 0.7164 |
| 0.035 | 28.0 | 18564 | 0.6045 | 0.8306 | 0.6529 | 0.7149 |
| 0.0316 | 29.0 | 19227 | 0.6168 | 0.8311 | 0.6546 | 0.7162 |
| 0.0314 | 30.0 | 19890 | 0.6203 | 0.8311 | 0.6552 | 0.7164 |
| 0.0292 | 31.0 | 20553 | 0.6173 | 0.8315 | 0.6548 | 0.7163 |
| 0.0265 | 32.0 | 21216 | 0.6226 | 0.832 | 0.6548 | 0.7166 |
| 0.0274 | 33.0 | 21879 | 0.6264 | 0.8314 | 0.6538 | 0.7155 |
| 0.0247 | 34.0 | 22542 | 0.6254 | 0.8289 | 0.6515 | 0.7132 |
| 0.0238 | 35.0 | 23205 | 0.6254 | 0.8307 | 0.6519 | 0.7142 |
| 0.0232 | 36.0 | 23868 | 0.6295 | 0.8287 | 0.6515 | 0.7133 |
| 0.0215 | 37.0 | 24531 | 0.6326 | 0.8293 | 0.6523 | 0.7138 |
| 0.0212 | 38.0 | 25194 | 0.6332 | 0.8295 | 0.6522 | 0.714 |
| 0.0221 | 39.0 | 25857 | 0.6335 | 0.8305 | 0.6528 | 0.7147 |
| 0.0202 | 40.0 | 26520 | 0.6340 | 0.83 | 0.6526 | 0.7144 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-xlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-pubmed
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6131
- Rouge2 Precision: 0.3
- Rouge2 Recall: 0.2152
- Rouge2 Fmeasure: 0.2379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 2.1335 | 1.0 | 563 | 1.7632 | 0.2716 | 0.1936 | 0.2135 |
| 1.9373 | 2.0 | 1126 | 1.7037 | 0.2839 | 0.2068 | 0.2265 |
| 1.8827 | 3.0 | 1689 | 1.6723 | 0.2901 | 0.2118 | 0.2316 |
| 1.8257 | 4.0 | 2252 | 1.6503 | 0.2938 | 0.2115 | 0.2332 |
| 1.8152 | 5.0 | 2815 | 1.6386 | 0.2962 | 0.2139 | 0.2357 |
| 1.7939 | 6.0 | 3378 | 1.6284 | 0.2976 | 0.212 | 0.2354 |
| 1.7845 | 7.0 | 3941 | 1.6211 | 0.2991 | 0.2155 | 0.2383 |
| 1.7468 | 8.0 | 4504 | 1.6167 | 0.2994 | 0.217 | 0.239 |
| 1.7464 | 9.0 | 5067 | 1.6137 | 0.3007 | 0.2154 | 0.2382 |
| 1.744 | 10.0 | 5630 | 1.6131 | 0.3 | 0.2152 | 0.2379 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-xlarge-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-mlm-pubmed-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-mlm-pubmed-15
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5389
- Rouge2 Precision: 0.7165
- Rouge2 Recall: 0.5375
- Rouge2 Fmeasure: 0.5981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.1024 | 0.75 | 500 | 0.7890 | 0.6854 | 0.4813 | 0.5502 |
| 0.8788 | 1.51 | 1000 | 0.7176 | 0.6906 | 0.4989 | 0.5638 |
| 0.8086 | 2.26 | 1500 | 0.6830 | 0.6872 | 0.5052 | 0.5663 |
| 0.7818 | 3.02 | 2000 | 0.6650 | 0.6912 | 0.5104 | 0.5711 |
| 0.7466 | 3.77 | 2500 | 0.6458 | 0.6965 | 0.5167 | 0.5774 |
| 0.731 | 4.52 | 3000 | 0.6355 | 0.6955 | 0.5161 | 0.5763 |
| 0.7126 | 5.28 | 3500 | 0.6249 | 0.6924 | 0.517 | 0.576 |
| 0.6998 | 6.03 | 4000 | 0.6166 | 0.6995 | 0.5207 | 0.5809 |
| 0.6855 | 6.79 | 4500 | 0.6076 | 0.6981 | 0.5215 | 0.5813 |
| 0.676 | 7.54 | 5000 | 0.6015 | 0.7003 | 0.5242 | 0.5836 |
| 0.6688 | 8.3 | 5500 | 0.5962 | 0.7004 | 0.5235 | 0.583 |
| 0.6569 | 9.05 | 6000 | 0.5900 | 0.6997 | 0.5234 | 0.5827 |
| 0.6503 | 9.8 | 6500 | 0.5880 | 0.703 | 0.5257 | 0.5856 |
| 0.6455 | 10.56 | 7000 | 0.5818 | 0.7008 | 0.5259 | 0.5849 |
| 0.635 | 11.31 | 7500 | 0.5796 | 0.7017 | 0.5271 | 0.5861 |
| 0.6323 | 12.07 | 8000 | 0.5769 | 0.7053 | 0.5276 | 0.5877 |
| 0.6241 | 12.82 | 8500 | 0.5730 | 0.7011 | 0.5243 | 0.5838 |
| 0.6224 | 13.57 | 9000 | 0.5696 | 0.7046 | 0.5286 | 0.5879 |
| 0.6139 | 14.33 | 9500 | 0.5685 | 0.7047 | 0.5295 | 0.5886 |
| 0.6118 | 15.08 | 10000 | 0.5653 | 0.704 | 0.5297 | 0.5886 |
| 0.6089 | 15.84 | 10500 | 0.5633 | 0.703 | 0.5272 | 0.5865 |
| 0.598 | 16.59 | 11000 | 0.5613 | 0.7059 | 0.5293 | 0.5889 |
| 0.6003 | 17.35 | 11500 | 0.5602 | 0.7085 | 0.532 | 0.5918 |
| 0.5981 | 18.1 | 12000 | 0.5587 | 0.7106 | 0.5339 | 0.5938 |
| 0.5919 | 18.85 | 12500 | 0.5556 | 0.708 | 0.5319 | 0.5914 |
| 0.5897 | 19.61 | 13000 | 0.5556 | 0.7106 | 0.5327 | 0.5931 |
| 0.5899 | 20.36 | 13500 | 0.5526 | 0.7114 | 0.534 | 0.5939 |
| 0.5804 | 21.12 | 14000 | 0.5521 | 0.7105 | 0.5328 | 0.5928 |
| 0.5764 | 21.87 | 14500 | 0.5520 | 0.715 | 0.537 | 0.5976 |
| 0.5793 | 22.62 | 15000 | 0.5506 | 0.713 | 0.5346 | 0.5951 |
| 0.5796 | 23.38 | 15500 | 0.5492 | 0.7124 | 0.5352 | 0.5952 |
| 0.5672 | 24.13 | 16000 | 0.5482 | 0.7124 | 0.5346 | 0.5948 |
| 0.5737 | 24.89 | 16500 | 0.5470 | 0.7134 | 0.5352 | 0.5956 |
| 0.5685 | 25.64 | 17000 | 0.5463 | 0.7117 | 0.5346 | 0.5946 |
| 0.5658 | 26.4 | 17500 | 0.5457 | 0.7145 | 0.5359 | 0.5965 |
| 0.5657 | 27.15 | 18000 | 0.5447 | 0.7145 | 0.5367 | 0.597 |
| 0.5645 | 27.9 | 18500 | 0.5441 | 0.7141 | 0.5362 | 0.5964 |
| 0.565 | 28.66 | 19000 | 0.5436 | 0.7151 | 0.5367 | 0.5972 |
| 0.5579 | 29.41 | 19500 | 0.5426 | 0.7162 | 0.5378 | 0.5982 |
| 0.563 | 30.17 | 20000 | 0.5424 | 0.7155 | 0.5373 | 0.5977 |
| 0.556 | 30.92 | 20500 | 0.5418 | 0.7148 | 0.536 | 0.5966 |
| 0.5576 | 31.67 | 21000 | 0.5411 | 0.7141 | 0.5356 | 0.5961 |
| 0.5546 | 32.43 | 21500 | 0.5409 | 0.7149 | 0.5364 | 0.5967 |
| 0.556 | 33.18 | 22000 | 0.5405 | 0.7143 | 0.5356 | 0.596 |
| 0.5536 | 33.94 | 22500 | 0.5401 | 0.7165 | 0.5377 | 0.5982 |
| 0.5527 | 34.69 | 23000 | 0.5397 | 0.7188 | 0.5389 | 0.5999 |
| 0.5531 | 35.44 | 23500 | 0.5395 | 0.7172 | 0.538 | 0.5989 |
| 0.5508 | 36.2 | 24000 | 0.5392 | 0.7166 | 0.538 | 0.5985 |
| 0.5495 | 36.95 | 24500 | 0.5391 | 0.7176 | 0.5387 | 0.5993 |
| 0.5539 | 37.71 | 25000 | 0.5391 | 0.7169 | 0.5372 | 0.598 |
| 0.5452 | 38.46 | 25500 | 0.5390 | 0.7179 | 0.5384 | 0.5991 |
| 0.5513 | 39.22 | 26000 | 0.5390 | 0.717 | 0.5377 | 0.5984 |
| 0.5506 | 39.97 | 26500 | 0.5389 | 0.7165 | 0.5375 | 0.5981 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-xlarge-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-mlm-pubmed-35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-mlm-pubmed-35
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1101
- Rouge2 Precision: 0.4758
- Rouge2 Recall: 0.3498
- Rouge2 Fmeasure: 0.3927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.8404 | 0.75 | 500 | 1.5005 | 0.4265 | 0.2786 | 0.3273 |
| 1.6858 | 1.51 | 1000 | 1.4216 | 0.4318 | 0.2946 | 0.3404 |
| 1.6071 | 2.26 | 1500 | 1.3777 | 0.4472 | 0.3148 | 0.3598 |
| 1.5551 | 3.02 | 2000 | 1.3360 | 0.4406 | 0.3168 | 0.3586 |
| 1.5116 | 3.77 | 2500 | 1.3128 | 0.4523 | 0.3234 | 0.3671 |
| 1.4837 | 4.52 | 3000 | 1.2937 | 0.4477 | 0.3215 | 0.3645 |
| 1.4513 | 5.28 | 3500 | 1.2766 | 0.4511 | 0.3262 | 0.3689 |
| 1.4336 | 6.03 | 4000 | 1.2626 | 0.4548 | 0.3283 | 0.3718 |
| 1.4149 | 6.79 | 4500 | 1.2449 | 0.4495 | 0.3274 | 0.3687 |
| 1.3977 | 7.54 | 5000 | 1.2349 | 0.4507 | 0.3305 | 0.3712 |
| 1.3763 | 8.3 | 5500 | 1.2239 | 0.4519 | 0.3266 | 0.3688 |
| 1.371 | 9.05 | 6000 | 1.2171 | 0.4546 | 0.3305 | 0.3727 |
| 1.3501 | 9.8 | 6500 | 1.2080 | 0.4575 | 0.3329 | 0.3755 |
| 1.3443 | 10.56 | 7000 | 1.2017 | 0.4576 | 0.3314 | 0.3742 |
| 1.326 | 11.31 | 7500 | 1.1926 | 0.4578 | 0.333 | 0.3757 |
| 1.3231 | 12.07 | 8000 | 1.1866 | 0.4606 | 0.3357 | 0.3782 |
| 1.3089 | 12.82 | 8500 | 1.1816 | 0.4591 | 0.3338 | 0.3765 |
| 1.3007 | 13.57 | 9000 | 1.1764 | 0.4589 | 0.3361 | 0.3777 |
| 1.2943 | 14.33 | 9500 | 1.1717 | 0.4641 | 0.3382 | 0.3811 |
| 1.2854 | 15.08 | 10000 | 1.1655 | 0.4617 | 0.3378 | 0.38 |
| 1.2777 | 15.84 | 10500 | 1.1612 | 0.464 | 0.3401 | 0.3823 |
| 1.2684 | 16.59 | 11000 | 1.1581 | 0.4608 | 0.3367 | 0.3789 |
| 1.2612 | 17.35 | 11500 | 1.1554 | 0.4623 | 0.3402 | 0.3818 |
| 1.2625 | 18.1 | 12000 | 1.1497 | 0.4613 | 0.3381 | 0.3802 |
| 1.2529 | 18.85 | 12500 | 1.1465 | 0.4671 | 0.3419 | 0.3848 |
| 1.2461 | 19.61 | 13000 | 1.1431 | 0.4646 | 0.3399 | 0.3824 |
| 1.2415 | 20.36 | 13500 | 1.1419 | 0.4659 | 0.341 | 0.3835 |
| 1.2375 | 21.12 | 14000 | 1.1377 | 0.4693 | 0.3447 | 0.3873 |
| 1.2315 | 21.87 | 14500 | 1.1353 | 0.4672 | 0.3433 | 0.3855 |
| 1.2263 | 22.62 | 15000 | 1.1333 | 0.467 | 0.3433 | 0.3854 |
| 1.2214 | 23.38 | 15500 | 1.1305 | 0.4682 | 0.3446 | 0.3869 |
| 1.2202 | 24.13 | 16000 | 1.1291 | 0.4703 | 0.3465 | 0.3888 |
| 1.2155 | 24.89 | 16500 | 1.1270 | 0.472 | 0.348 | 0.3903 |
| 1.2064 | 25.64 | 17000 | 1.1261 | 0.4724 | 0.3479 | 0.3905 |
| 1.2173 | 26.4 | 17500 | 1.1236 | 0.4734 | 0.3485 | 0.3912 |
| 1.1994 | 27.15 | 18000 | 1.1220 | 0.4739 | 0.3486 | 0.3915 |
| 1.2018 | 27.9 | 18500 | 1.1217 | 0.4747 | 0.3489 | 0.3921 |
| 1.2045 | 28.66 | 19000 | 1.1194 | 0.4735 | 0.3488 | 0.3916 |
| 1.1949 | 29.41 | 19500 | 1.1182 | 0.4732 | 0.3484 | 0.3911 |
| 1.19 | 30.17 | 20000 | 1.1166 | 0.4724 | 0.3479 | 0.3904 |
| 1.1932 | 30.92 | 20500 | 1.1164 | 0.4753 | 0.3494 | 0.3924 |
| 1.1952 | 31.67 | 21000 | 1.1147 | 0.4733 | 0.3485 | 0.3911 |
| 1.1922 | 32.43 | 21500 | 1.1146 | 0.475 | 0.3494 | 0.3923 |
| 1.1889 | 33.18 | 22000 | 1.1132 | 0.4765 | 0.3499 | 0.3933 |
| 1.1836 | 33.94 | 22500 | 1.1131 | 0.4768 | 0.351 | 0.3939 |
| 1.191 | 34.69 | 23000 | 1.1127 | 0.4755 | 0.3495 | 0.3926 |
| 1.1811 | 35.44 | 23500 | 1.1113 | 0.4748 | 0.349 | 0.3919 |
| 1.1864 | 36.2 | 24000 | 1.1107 | 0.4751 | 0.3494 | 0.3921 |
| 1.1789 | 36.95 | 24500 | 1.1103 | 0.4756 | 0.3499 | 0.3927 |
| 1.1819 | 37.71 | 25000 | 1.1101 | 0.4758 | 0.35 | 0.3932 |
| 1.1862 | 38.46 | 25500 | 1.1099 | 0.4755 | 0.3497 | 0.3926 |
| 1.1764 | 39.22 | 26000 | 1.1101 | 0.4759 | 0.3498 | 0.3928 |
| 1.1819 | 39.97 | 26500 | 1.1101 | 0.4758 | 0.3498 | 0.3927 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-mlm-pubmed-45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-mlm-pubmed-45
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6395
- Rouge2 Precision: 0.3383
- Rouge2 Recall: 0.2424
- Rouge2 Fmeasure: 0.2753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 2.519 | 0.75 | 500 | 1.9659 | 0.3178 | 0.1888 | 0.2299 |
| 2.169 | 1.51 | 1000 | 1.8450 | 0.3256 | 0.2138 | 0.25 |
| 2.0796 | 2.26 | 1500 | 1.7900 | 0.3368 | 0.2265 | 0.2636 |
| 1.9978 | 3.02 | 2000 | 1.7553 | 0.3427 | 0.234 | 0.2709 |
| 1.9686 | 3.77 | 2500 | 1.7172 | 0.3356 | 0.2347 | 0.2692 |
| 1.9142 | 4.52 | 3000 | 1.6986 | 0.3358 | 0.238 | 0.2715 |
| 1.921 | 5.28 | 3500 | 1.6770 | 0.3349 | 0.2379 | 0.2709 |
| 1.8848 | 6.03 | 4000 | 1.6683 | 0.3346 | 0.2379 | 0.2708 |
| 1.8674 | 6.79 | 4500 | 1.6606 | 0.3388 | 0.2419 | 0.2752 |
| 1.8606 | 7.54 | 5000 | 1.6514 | 0.3379 | 0.2409 | 0.274 |
| 1.8515 | 8.3 | 5500 | 1.6438 | 0.3356 | 0.2407 | 0.2731 |
| 1.8403 | 9.05 | 6000 | 1.6401 | 0.3367 | 0.2421 | 0.2744 |
| 1.8411 | 9.8 | 6500 | 1.6395 | 0.3383 | 0.2424 | 0.2753 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-xlarge-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-mlm-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-mlm-pubmed
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8008
- Rouge2 Precision: 0.6071
- Rouge2 Recall: 0.4566
- Rouge2 Fmeasure: 0.5079
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.914 | 0.75 | 500 | 0.8691 | 0.5901 | 0.4357 | 0.4879 |
| 0.9093 | 1.51 | 1000 | 0.8646 | 0.5867 | 0.4372 | 0.488 |
| 0.895 | 2.26 | 1500 | 0.8618 | 0.5891 | 0.4387 | 0.49 |
| 0.8842 | 3.02 | 2000 | 0.8571 | 0.5899 | 0.4374 | 0.4891 |
| 0.8796 | 3.77 | 2500 | 0.8544 | 0.5903 | 0.4406 | 0.4916 |
| 0.8759 | 4.52 | 3000 | 0.8513 | 0.5921 | 0.4395 | 0.4912 |
| 0.8621 | 5.28 | 3500 | 0.8485 | 0.5934 | 0.4413 | 0.493 |
| 0.8613 | 6.03 | 4000 | 0.8442 | 0.5944 | 0.4428 | 0.4944 |
| 0.8537 | 6.79 | 4500 | 0.8406 | 0.594 | 0.4414 | 0.4932 |
| 0.8518 | 7.54 | 5000 | 0.8399 | 0.5956 | 0.4424 | 0.4945 |
| 0.8438 | 8.3 | 5500 | 0.8365 | 0.5953 | 0.4452 | 0.4964 |
| 0.8339 | 9.05 | 6000 | 0.8353 | 0.5983 | 0.4468 | 0.4983 |
| 0.8307 | 9.8 | 6500 | 0.8331 | 0.5979 | 0.4461 | 0.4976 |
| 0.8328 | 10.56 | 7000 | 0.8304 | 0.5975 | 0.4465 | 0.4979 |
| 0.8263 | 11.31 | 7500 | 0.8283 | 0.5977 | 0.4467 | 0.4981 |
| 0.8168 | 12.07 | 8000 | 0.8267 | 0.5971 | 0.4463 | 0.4976 |
| 0.8165 | 12.82 | 8500 | 0.8248 | 0.5969 | 0.4462 | 0.4976 |
| 0.8084 | 13.57 | 9000 | 0.8245 | 0.6018 | 0.4527 | 0.5035 |
| 0.8136 | 14.33 | 9500 | 0.8219 | 0.6023 | 0.4509 | 0.5023 |
| 0.8073 | 15.08 | 10000 | 0.8206 | 0.6002 | 0.4486 | 0.5001 |
| 0.808 | 15.84 | 10500 | 0.8185 | 0.6009 | 0.4506 | 0.5019 |
| 0.8027 | 16.59 | 11000 | 0.8173 | 0.5978 | 0.4478 | 0.4989 |
| 0.8061 | 17.35 | 11500 | 0.8169 | 0.6022 | 0.4513 | 0.5026 |
| 0.7922 | 18.1 | 12000 | 0.8152 | 0.6016 | 0.4501 | 0.5016 |
| 0.7928 | 18.85 | 12500 | 0.8141 | 0.6009 | 0.45 | 0.5012 |
| 0.7909 | 19.61 | 13000 | 0.8143 | 0.6019 | 0.4521 | 0.5028 |
| 0.7909 | 20.36 | 13500 | 0.8115 | 0.5997 | 0.4505 | 0.5011 |
| 0.7949 | 21.12 | 14000 | 0.8115 | 0.6043 | 0.4536 | 0.5048 |
| 0.7853 | 21.87 | 14500 | 0.8095 | 0.6033 | 0.4527 | 0.5038 |
| 0.7819 | 22.62 | 15000 | 0.8095 | 0.6054 | 0.4541 | 0.5056 |
| 0.7828 | 23.38 | 15500 | 0.8075 | 0.6036 | 0.453 | 0.5042 |
| 0.787 | 24.13 | 16000 | 0.8068 | 0.6031 | 0.4528 | 0.504 |
| 0.7739 | 24.89 | 16500 | 0.8072 | 0.6043 | 0.4529 | 0.5045 |
| 0.7782 | 25.64 | 17000 | 0.8073 | 0.606 | 0.4551 | 0.5063 |
| 0.7772 | 26.4 | 17500 | 0.8063 | 0.6055 | 0.4549 | 0.5062 |
| 0.7718 | 27.15 | 18000 | 0.8057 | 0.606 | 0.4546 | 0.5059 |
| 0.7747 | 27.9 | 18500 | 0.8045 | 0.6046 | 0.4543 | 0.5054 |
| 0.7738 | 28.66 | 19000 | 0.8035 | 0.6059 | 0.4549 | 0.506 |
| 0.7642 | 29.41 | 19500 | 0.8041 | 0.6053 | 0.4545 | 0.5058 |
| 0.7666 | 30.17 | 20000 | 0.8039 | 0.6066 | 0.457 | 0.508 |
| 0.7686 | 30.92 | 20500 | 0.8027 | 0.6075 | 0.4571 | 0.5081 |
| 0.7664 | 31.67 | 21000 | 0.8026 | 0.6062 | 0.4566 | 0.5076 |
| 0.77 | 32.43 | 21500 | 0.8022 | 0.6068 | 0.4571 | 0.5081 |
| 0.7618 | 33.18 | 22000 | 0.8015 | 0.6065 | 0.4563 | 0.5072 |
| 0.7615 | 33.94 | 22500 | 0.8013 | 0.6064 | 0.4565 | 0.5074 |
| 0.7611 | 34.69 | 23000 | 0.8017 | 0.607 | 0.4567 | 0.5078 |
| 0.7611 | 35.44 | 23500 | 0.8013 | 0.608 | 0.4565 | 0.5082 |
| 0.7604 | 36.2 | 24000 | 0.8012 | 0.6069 | 0.4561 | 0.5072 |
| 0.7599 | 36.95 | 24500 | 0.8013 | 0.6078 | 0.4571 | 0.5085 |
| 0.7542 | 37.71 | 25000 | 0.8016 | 0.6083 | 0.4579 | 0.5091 |
| 0.7637 | 38.46 | 25500 | 0.8009 | 0.6072 | 0.4569 | 0.5081 |
| 0.7596 | 39.22 | 26000 | 0.8008 | 0.6069 | 0.4566 | 0.5078 |
| 0.7604 | 39.97 | 26500 | 0.8008 | 0.6071 | 0.4566 | 0.5079 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-xxlarge-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-paraphrase-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-paraphrase-pubmed
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4032
- Rouge2 Precision: 0.8281
- Rouge2 Recall: 0.6346
- Rouge2 Fmeasure: 0.6996
## Model description
More information needed
## Intended uses & limitations
More information needed
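Pending a fuller description, a minimal generation sketch is shown below. Whether a task prefix (e.g. `paraphrase:`) was used during fine-tuning is not documented here, so none is added; `<namespace>` is a placeholder for the publishing account.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "<namespace>" is a placeholder (hypothetical) -- substitute the real account name.
model_id = "<namespace>/t5-small-paraphrase-pubmed"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

sentence = "Aspirin reduces the risk of myocardial infarction in high-risk patients."
inputs = tokenizer(sentence, return_tensors="pt")

# Beam search keeps the generated paraphrase close to the source sentence.
outputs = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```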
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.5253 | 1.0 | 663 | 0.4895 | 0.8217 | 0.6309 | 0.695 |
| 0.5385 | 2.0 | 1326 | 0.4719 | 0.822 | 0.6307 | 0.6953 |
| 0.5255 | 3.0 | 1989 | 0.4579 | 0.8225 | 0.631 | 0.6954 |
| 0.4927 | 4.0 | 2652 | 0.4510 | 0.824 | 0.6315 | 0.6965 |
| 0.484 | 5.0 | 3315 | 0.4426 | 0.8254 | 0.6323 | 0.6974 |
| 0.4691 | 6.0 | 3978 | 0.4383 | 0.8241 | 0.6311 | 0.6962 |
| 0.4546 | 7.0 | 4641 | 0.4319 | 0.8248 | 0.6322 | 0.6969 |
| 0.4431 | 8.0 | 5304 | 0.4270 | 0.8254 | 0.633 | 0.6977 |
| 0.4548 | 9.0 | 5967 | 0.4257 | 0.8257 | 0.6322 | 0.6976 |
| 0.4335 | 10.0 | 6630 | 0.4241 | 0.8271 | 0.6333 | 0.6986 |
| 0.4234 | 11.0 | 7293 | 0.4203 | 0.827 | 0.6341 | 0.6992 |
| 0.433 | 12.0 | 7956 | 0.4185 | 0.8279 | 0.6347 | 0.6998 |
| 0.4108 | 13.0 | 8619 | 0.4161 | 0.8285 | 0.6352 | 0.7004 |
| 0.4101 | 14.0 | 9282 | 0.4133 | 0.8289 | 0.6356 | 0.7008 |
| 0.4155 | 15.0 | 9945 | 0.4149 | 0.8279 | 0.635 | 0.6998 |
| 0.3991 | 16.0 | 10608 | 0.4124 | 0.8289 | 0.6353 | 0.7005 |
| 0.3962 | 17.0 | 11271 | 0.4113 | 0.829 | 0.6353 | 0.7006 |
| 0.3968 | 18.0 | 11934 | 0.4114 | 0.8285 | 0.6352 | 0.7002 |
| 0.3962 | 19.0 | 12597 | 0.4100 | 0.8282 | 0.6346 | 0.6998 |
| 0.3771 | 20.0 | 13260 | 0.4078 | 0.829 | 0.6352 | 0.7005 |
| 0.3902 | 21.0 | 13923 | 0.4083 | 0.8295 | 0.6351 | 0.7006 |
| 0.3811 | 22.0 | 14586 | 0.4077 | 0.8276 | 0.6346 | 0.6995 |
| 0.38 | 23.0 | 15249 | 0.4076 | 0.8281 | 0.6346 | 0.6997 |
| 0.3695 | 24.0 | 15912 | 0.4059 | 0.8277 | 0.6344 | 0.6993 |
| 0.3665 | 25.0 | 16575 | 0.4043 | 0.8278 | 0.6343 | 0.6992 |
| 0.3728 | 26.0 | 17238 | 0.4059 | 0.8279 | 0.6346 | 0.6994 |
| 0.3669 | 27.0 | 17901 | 0.4048 | 0.8271 | 0.6342 | 0.6991 |
| 0.3702 | 28.0 | 18564 | 0.4058 | 0.8265 | 0.6338 | 0.6985 |
| 0.3674 | 29.0 | 19227 | 0.4049 | 0.8277 | 0.6345 | 0.6993 |
| 0.364 | 30.0 | 19890 | 0.4048 | 0.8273 | 0.6341 | 0.699 |
| 0.3618 | 31.0 | 20553 | 0.4041 | 0.828 | 0.6349 | 0.6997 |
| 0.3609 | 32.0 | 21216 | 0.4040 | 0.8275 | 0.6346 | 0.6994 |
| 0.357 | 33.0 | 21879 | 0.4037 | 0.8278 | 0.6348 | 0.6996 |
| 0.3638 | 34.0 | 22542 | 0.4038 | 0.8275 | 0.634 | 0.6989 |
| 0.3551 | 35.0 | 23205 | 0.4035 | 0.8275 | 0.6344 | 0.6992 |
| 0.358 | 36.0 | 23868 | 0.4035 | 0.8279 | 0.6347 | 0.6995 |
| 0.3519 | 37.0 | 24531 | 0.4034 | 0.8277 | 0.6343 | 0.6992 |
| 0.359 | 38.0 | 25194 | 0.4035 | 0.8281 | 0.6346 | 0.6996 |
| 0.3542 | 39.0 | 25857 | 0.4033 | 0.8281 | 0.6346 | 0.6996 |
| 0.3592 | 40.0 | 26520 | 0.4032 | 0.8281 | 0.6346 | 0.6996 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
dccuchile/albert-xxlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9233262687967644
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.923
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
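Pending a fuller description, the model can be used through the standard `text-classification` pipeline; a minimal sketch follows, with `<namespace>` standing in for the account the checkpoint is published under.
```python
from transformers import pipeline

# "<namespace>" is a placeholder (hypothetical) -- substitute the real account name.
classifier = pipeline(
    "text-classification",
    model="<namespace>/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you tomorrow!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- the emotion dataset has six classes:
# sadness, joy, love, anger, fear, surprise.
```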
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
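Expressed as Transformers `TrainingArguments`, the configuration above looks roughly as follows. This is a sketch only: dataset loading, tokenization, the metric function, and the `Trainer` call are omitted, and the per-epoch evaluation strategy is inferred from the results table below rather than stated in the card.
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; the Trainer's default AdamW optimizer
# already uses betas=(0.9, 0.999) and epsilon=1e-08.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # inferred from the per-epoch results below
)
```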
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8217 | 1.0 | 250 | 0.3137 | 0.903 | 0.8999 |
| 0.2484 | 2.0 | 500 | 0.2180 | 0.923 | 0.9233 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
dccuchile/albert-xxlarge-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- gborn/autonlp-data-news-summarization
co2_eq_emissions: 210.6348731063569
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 483413089
- CO2 Emissions (in grams): 210.6348731063569
## Validation Metrics
- Loss: 1.8478657007217407
- Rouge1: 50.5981
- Rouge2: 26.2167
- RougeL: 46.0513
- RougeLsum: 46.061
- Gen Len: 13.5987
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/gborn/autonlp-news-summarization-483413089
``` |
dccuchile/albert-xxlarge-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-cased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5956649094312695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6747
- Matthews Correlation: 0.5957
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
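Pending a fuller description, a minimal inference sketch for CoLA-style acceptability judgments is shown below; `<namespace>` is a placeholder for the account the checkpoint is published under.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "<namespace>" is a placeholder (hypothetical) -- substitute the real account name.
model_id = "<namespace>/bert-base-cased-finetuned-cola"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "The book was written by the author in three weeks."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# CoLA convention: index 0 = unacceptable, index 1 = acceptable
# (check model.config.id2label for this checkpoint's actual label names).
print(logits.softmax(dim=-1))
```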
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name cola \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-cola \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4921 | 1.0 | 535 | 0.5283 | 0.5068 |
| 0.2837 | 2.0 | 1070 | 0.5133 | 0.5521 |
| 0.1775 | 3.0 | 1605 | 0.6747 | 0.5957 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/albert-xxlarge-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8410292921074044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-mnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5721
- Accuracy: 0.8410
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
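Pending a fuller description, a minimal sketch for scoring a premise/hypothesis pair is shown below; `<namespace>` is a placeholder for the account the checkpoint is published under.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "<namespace>" is a placeholder (hypothetical) -- substitute the real account name.
model_id = "<namespace>/bert-base-cased-finetuned-mnli"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Sentence pairs are encoded together and separated by [SEP].
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# MNLI has three classes (entailment / neutral / contradiction); consult
# model.config.id2label for the index order used by this checkpoint.
print(probs)
```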
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-mnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5323 | 1.0 | 24544 | 0.4431 | 0.8302 |
| 0.3447 | 2.0 | 49088 | 0.4725 | 0.8353 |
| 0.2267 | 3.0 | 73632 | 0.5887 | 0.8368 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/albert-xxlarge-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-cased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8602941176470589
- name: F1
type: f1
value: 0.9025641025641027
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7132
- Accuracy: 0.8603
- F1: 0.9026
- Combined Score: 0.8814
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir bert-base-cased-finetuned-mrpc \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5981 | 1.0 | 230 | 0.4580 | 0.7892 | 0.8562 | 0.8227 |
| 0.3739 | 2.0 | 460 | 0.3806 | 0.8480 | 0.8942 | 0.8711 |
| 0.1991 | 3.0 | 690 | 0.4879 | 0.8529 | 0.8958 | 0.8744 |
| 0.1286 | 4.0 | 920 | 0.6342 | 0.8529 | 0.8986 | 0.8758 |
| 0.0812 | 5.0 | 1150 | 0.7132 | 0.8603 | 0.9026 | 0.8814 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/albert-base-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 586 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9099395936298736
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-qnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3986
- Accuracy: 0.9099
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-qnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.337 | 1.0 | 6547 | 0.9013 | 0.2448 |
| 0.1971 | 2.0 | 13094 | 0.9143 | 0.2839 |
| 0.1175 | 3.0 | 19641 | 0.9099 | 0.3986 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/albert-large-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 75 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-base-cased-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.9083848627256987
- name: F1
type: f1
value: 0.8767633750332712
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-qqp
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3752
- Accuracy: 0.9084
- F1: 0.8768
- Combined Score: 0.8926
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
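Pending a fuller description, a minimal sketch for duplicate-question detection is shown below. In recent versions of Transformers the `text-classification` pipeline accepts a `text`/`text_pair` dictionary for sentence-pair inputs; `<namespace>` is a placeholder for the publishing account.
```python
from transformers import pipeline

# "<namespace>" is a placeholder (hypothetical) -- substitute the real account name.
classifier = pipeline(
    "text-classification",
    model="<namespace>/bert-base-cased-finetuned-qqp",
)

# QQP is a sentence-pair task: pass the two questions as text / text_pair.
result = classifier({
    "text": "How do I learn Python quickly?",
    "text_pair": "What is the fastest way to learn Python?",
})
print(result)  # label convention: 0 = not duplicate, 1 = duplicate (check config.id2label)
```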
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qqp \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-qqp \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.308 | 1.0 | 22741 | 0.2548 | 0.8925 | 0.8556 | 0.8740 |
| 0.201 | 2.0 | 45482 | 0.2881 | 0.9032 | 0.8698 | 0.8865 |
| 0.1416 | 3.0 | 68223 | 0.3752 | 0.9084 | 0.8768 | 0.8926 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/albert-tiny-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 393 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6714801444043321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-rte
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7260
- Accuracy: 0.6715
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name rte \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-rte \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6915 | 1.0 | 156 | 0.6491 | 0.6606 |
| 0.55 | 2.0 | 312 | 0.6737 | 0.6570 |
| 0.3955 | 3.0 | 468 | 0.7260 | 0.6715 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/albert-xlarge-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 91 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9231651376146789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-sst2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3649
- Accuracy: 0.9232
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name sst2 \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-sst2 \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.233 | 1.0 | 4210 | 0.9174 | 0.2841 |
| 0.1261 | 2.0 | 8420 | 0.9278 | 0.3310 |
| 0.0768 | 3.0 | 12630 | 0.9232 | 0.3649 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/albert-xxlarge-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: bert-base-cased-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8897907271421561
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-stsb
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4861
- Pearson: 0.8926
- Spearmanr: 0.8898
- Combined Score: 0.8912
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
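Pending a fuller description, a minimal sketch for scoring sentence similarity is shown below. STS-B is a regression task, so the model has a single output logit on the dataset's 0–5 similarity scale; `<namespace>` is a placeholder for the publishing account.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "<namespace>" is a placeholder (hypothetical) -- substitute the real account name.
model_id = "<namespace>/bert-base-cased-finetuned-stsb"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The two sentences are encoded as a single pair, separated by [SEP].
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person plays a stringed instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print(f"similarity ~ {score:.2f} (0 = unrelated, 5 = equivalent)")
```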
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name stsb \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-stsb \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Combined Score | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:--------------:|:---------------:|:-------:|:---------:|
| 1.1174 | 1.0 | 360 | 0.8816 | 0.5000 | 0.8832 | 0.8800 |
| 0.3835 | 2.0 | 720 | 0.8901 | 0.4672 | 0.8915 | 0.8888 |
| 0.2388 | 3.0 | 1080 | 0.8912 | 0.4861 | 0.8926 | 0.8898 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-mldoc | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.4647887323943662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-wnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6996
- Accuracy: 0.4648
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name wnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir bert-base-cased-finetuned-wnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7299 | 1.0 | 40 | 0.6923 | 0.5634 |
| 0.6982 | 2.0 | 80 | 0.7027 | 0.3803 |
| 0.6972 | 3.0 | 120 | 0.7005 | 0.4507 |
| 0.6992 | 4.0 | 160 | 0.6977 | 0.5352 |
| 0.699 | 5.0 | 200 | 0.6996 | 0.4648 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-ner | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 81 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-large-cased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5957317644481708
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-cola
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8385
- Matthews Correlation: 0.5957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5533 | 1.0 | 2138 | 0.7943 | 0.4439 |
| 0.5004 | 2.0 | 4276 | 0.7272 | 0.5678 |
| 0.2865 | 3.0 | 6414 | 0.8385 | 0.5957 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-large-cased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-mrpc
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6274
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6441 | 1.0 | 917 | 0.6370 | 0.6838 | 0.8122 | 0.7480 |
| 0.6451 | 2.0 | 1834 | 0.6553 | 0.6838 | 0.8122 | 0.7480 |
| 0.6428 | 3.0 | 2751 | 0.6332 | 0.6838 | 0.8122 | 0.7480 |
| 0.6476 | 4.0 | 3668 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.6499 | 5.0 | 4585 | 0.6274 | 0.6838 | 0.8122 | 0.7480 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-large-cased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6642599277978339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-rte
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5187
- Accuracy: 0.6643
## Model description
More information needed
## Intended uses & limitations
More information needed
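A hedged usage sketch for textual entailment follows. The model id is a placeholder for the checkpoint's full Hub path, and the label interpretation (GLUE RTE: 0 = entailment, 1 = not_entailment) is an assumption to verify against `id2label`.
```python
# Sketch only: model id is a placeholder; the label convention is read from the config
# rather than hard-coded.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bert-large-cased-finetuned-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```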
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6969 | 1.0 | 623 | 0.7039 | 0.5343 |
| 0.5903 | 2.0 | 1246 | 0.6461 | 0.7184 |
| 0.4557 | 3.0 | 1869 | 1.5187 | 0.6643 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-qa-mlqa | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | "2021-09-23T04:24:07Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-large-cased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.352112676056338
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-finetuned-wnli
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7087
- Accuracy: 0.3521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7114        | 1.0   | 159  | 0.6923          | 0.5634   |
| 0.7141        | 2.0   | 318  | 0.6895          | 0.5634   |
| 0.7063        | 3.0   | 477  | 0.6930          | 0.5634   |
| 0.712         | 4.0   | 636  | 0.7077          | 0.4507   |
| 0.7037        | 5.0   | 795  | 0.7087          | 0.3521   |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.35940659235571387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-cola
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5929
- Matthews Correlation: 0.3594
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
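As a non-authoritative sketch, the model can be queried through the text-classification pipeline for acceptability judgments (the model id is a placeholder for the checkpoint's full Hub path; label names depend on the checkpoint's `id2label`):
```python
# Sketch only: model id is a placeholder for this checkpoint's full Hub path.
from transformers import pipeline

cola = pipeline("text-classification", model="fnet-base-finetuned-cola")
print(cola("The book was written by the author."))   # a grammatical sentence
print(cola("The book was written the author by."))   # a scrambled, ungrammatical variant
```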
## Training and evaluation data
More information needed
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name cola \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-cola \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5895 | 1.0 | 535 | 0.6146 | 0.1699 |
| 0.4656 | 2.0 | 1070 | 0.5667 | 0.3047 |
| 0.3329 | 3.0 | 1605 | 0.5929 | 0.3594 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 39 | "2021-09-17T07:11:04Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.7674938974776241
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-mnli
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6443
- Accuracy: 0.7675
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
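A minimal sketch of three-way NLI inference is shown here; the model id is a placeholder, and the entailment/neutral/contradiction naming is read from the checkpoint's `id2label` rather than hard-coded.
```python
# Sketch only: model id is a placeholder for this checkpoint's full Hub path.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "fnet-base-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

inputs = tokenizer(
    "The cat sat on the mat.",          # premise
    "An animal is resting on a rug.",   # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # one of the three MNLI classes
```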
## Training and evaluation data
More information needed
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-mnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7143 | 1.0 | 24544 | 0.6169 | 0.7504 |
| 0.5407 | 2.0 | 49088 | 0.6218 | 0.7627 |
| 0.4178 | 3.0 | 73632 | 0.6564 | 0.7658 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-ner | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | "2021-09-16T17:30:22Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-base-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7720588235294118
- name: F1
type: f1
value: 0.8502415458937198
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-mrpc
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9653
- Accuracy: 0.7721
- F1: 0.8502
- Combined Score: 0.8112
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name mrpc \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir fnet-base-finetuned-mrpc \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.544 | 1.0 | 230 | 0.5272 | 0.7328 | 0.8300 | 0.7814 |
| 0.4034 | 2.0 | 460 | 0.6211 | 0.7255 | 0.8298 | 0.7776 |
| 0.2602 | 3.0 | 690 | 0.9110 | 0.7230 | 0.8306 | 0.7768 |
| 0.1688 | 4.0 | 920 | 0.8640 | 0.7696 | 0.8489 | 0.8092 |
| 0.0913 | 5.0 | 1150 | 0.9653 | 0.7721 | 0.8502 | 0.8112 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pawsx | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | "2021-09-17T18:09:22Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8438586857038257
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-qnli
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4746
- Accuracy: 0.8439
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
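QNLI pairs a question with a context sentence and asks whether the sentence answers the question. A hedged inference sketch (placeholder model id, labels read from `id2label`):
```python
# Sketch only: model id is a placeholder for this checkpoint's full Hub path.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "fnet-base-finetuned-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```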
## Training and evaluation data
More information needed
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name qnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-qnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4597 | 1.0 | 6547 | 0.3713 | 0.8411 |
| 0.3252 | 2.0 | 13094 | 0.3781 | 0.8420 |
| 0.2243 | 3.0 | 19641 | 0.4746 | 0.8439 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | "2021-09-18T18:23:31Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-base-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8847390551570616
- name: F1
type: f1
value: 0.8466197090382463
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-qqp
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3686
- Accuracy: 0.8847
- F1: 0.8466
- Combined Score: 0.8657
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
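A hedged sketch for duplicate-question detection: the model id is a placeholder, and the GLUE QQP convention that label 1 means "duplicate" is an assumption to verify against the checkpoint config.
```python
# Sketch only: model id is a placeholder; label 1 = duplicate is the GLUE QQP convention
# and should be verified against model.config.id2label.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "fnet-base-finetuned-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```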
## Training and evaluation data
More information needed
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name qqp \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-qqp \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3484 | 1.0 | 22741 | 0.3014 | 0.8676 | 0.8297 | 0.8487 |
| 0.2387 | 2.0 | 45482 | 0.3011 | 0.8801 | 0.8429 | 0.8615 |
| 0.1739 | 3.0 | 68223 | 0.3686 | 0.8847 | 0.8466 | 0.8657 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-qa-mlqa | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | "2021-09-19T05:47:14Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.628158844765343
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-rte
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6978
- Accuracy: 0.6282
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
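The reported metrics come from the `--do_eval` pass of `run_glue.py`, which scores the GLUE RTE validation split. As a rough, non-authoritative check, that split can be re-scored with the `datasets` library; the model id below is a placeholder and preprocessing may differ slightly from `run_glue.py`.
```python
# Approximate evaluation sketch; results may differ slightly from run_glue.py, which
# handles padding/truncation in its own way.
import torch
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "fnet-base-finetuned-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

dataset = load_dataset("glue", "rte", split="validation")
metric = load_metric("glue", "rte")

predictions, references = [], []
for example in dataset:
    enc = tokenizer(example["sentence1"], example["sentence2"],
                    truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        predictions.append(int(model(**enc).logits.argmax(dim=-1)))
    references.append(example["label"])
print(metric.compute(predictions=predictions, references=references))  # {'accuracy': ...}
```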
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name rte \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-rte \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6829 | 1.0 | 156 | 0.6657 | 0.5704 |
| 0.6174 | 2.0 | 312 | 0.6784 | 0.6101 |
| 0.5141 | 3.0 | 468 | 0.6978 | 0.6282 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-xnli | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | "2021-09-19T08:32:11Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8944954128440367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-sst2
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4674
- Accuracy: 0.8945
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
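A minimal sentiment-classification sketch via the pipeline API (the model id is a placeholder for the checkpoint's full Hub path; positive/negative label names come from the checkpoint's `id2label`):
```python
# Sketch only: model id is a placeholder for this checkpoint's full Hub path.
from transformers import pipeline

sentiment = pipeline("text-classification", model="fnet-base-finetuned-sst2")
print(sentiment("A charming and often surprisingly funny film."))
print(sentiment("A tedious, overlong mess with no payoff."))
```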
## Training and evaluation data
More information needed
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name sst2 \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-sst2 \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2956        | 1.0   | 4210  | 0.3128          | 0.8819   |
| 0.1746        | 2.0   | 8420  | 0.3850          | 0.8979   |
| 0.1204        | 3.0   | 12630 | 0.4674          | 0.8945   |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/distilbert-base-spanish-uncased-finetuned-mldoc | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: fnet-base-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8219397497728022
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-stsb
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7894
- Pearson: 0.8256
- Spearmanr: 0.8219
- Combined Score: 0.8238
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
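STS-B is a regression task, so the classification head has a single output whose value approximates the 0 to 5 similarity score. A hedged sketch (placeholder model id):
```python
# Sketch only: model id is a placeholder; the single logit is the predicted similarity
# score on the 0-5 STS-B scale (values slightly outside that range are possible).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "fnet-base-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone is strumming a guitar.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted similarity: {score:.2f}")
```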
## Training and evaluation data
More information needed
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name stsb \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir fnet-base-finetuned-stsb \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.5473        | 1.0   | 360  | 0.7751          | 0.8115  | 0.8125    | 0.8120         |
| 0.6954        | 2.0   | 720  | 0.8717          | 0.8160  | 0.8130    | 0.8145         |
| 0.4828        | 3.0   | 1080 | 0.7894          | 0.8256  | 0.8219    | 0.8238         |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/distilbert-base-spanish-uncased-finetuned-ner | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5492957746478874
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-wnli
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6887
- Accuracy: 0.5493
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path google/fnet-base \
  --task_name wnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 5 \
  --output_dir fnet-base-finetuned-wnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7052 | 1.0 | 40 | 0.6902 | 0.5634 |
| 0.6957 | 2.0 | 80 | 0.7013 | 0.4366 |
| 0.6898 | 3.0 | 120 | 0.6898 | 0.5352 |
| 0.6958 | 4.0 | 160 | 0.6874 | 0.5634 |
| 0.6982 | 5.0 | 200 | 0.6887 | 0.5493 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/distilbert-base-spanish-uncased-finetuned-pawsx | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | "2021-10-09T18:55:55Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6243
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 |
| 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 |
| 0.616 | 3.0 | 6414 | 0.6243 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/distilbert-base-spanish-uncased-finetuned-pos | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | "2021-10-10T05:51:58Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy2
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6173
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6192 | 1.0 | 2138 | 0.6443 | 0.0 |
| 0.6177 | 2.0 | 4276 | 0.6296 | 0.0 |
| 0.6128 | 3.0 | 6414 | 0.6173 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy3
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6554
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6408 | 1.0 | 2138 | 0.7329 | 0.0 |
| 0.6589 | 2.0 | 4276 | 0.6311 | 0.0 |
| 0.6467 | 3.0 | 6414 | 0.6554 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
dccuchile/distilbert-base-spanish-uncased-finetuned-xnli | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola-copy4
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola-copy4
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6500
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6345 | 1.0 | 2138 | 0.6611 | 0.0 |
| 0.6359 | 2.0 | 4276 | 0.6840 | 0.0 |
| 0.6331 | 3.0 | 6414 | 0.6500 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-1 | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | "2021-09-23T07:49:09Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: fnet-large-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-cola
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6243
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 |
| 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 |
| 0.616 | 3.0 | 6414 | 0.6243 | 0.0 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-large-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8259803921568627
- name: F1
type: f1
value: 0.8798646362098139
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-mrpc
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0872
- Accuracy: 0.8260
- F1: 0.8799
- Combined Score: 0.8529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5656 | 1.0 | 917 | 0.6999 | 0.7843 | 0.8581 | 0.8212 |
| 0.3874 | 2.0 | 1834 | 0.7280 | 0.8088 | 0.8691 | 0.8390 |
| 0.1627 | 3.0 | 2751 | 1.1274 | 0.8162 | 0.8780 | 0.8471 |
| 0.0751 | 4.0 | 3668 | 1.0289 | 0.8333 | 0.8870 | 0.8602 |
| 0.0339 | 5.0 | 4585 | 1.0872 | 0.8260 | 0.8799 | 0.8529 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Certified-Zoomer/DialoGPT-small-rick | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2021-10-09T08:47:27Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: fnet-large-finetuned-qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8943111550828593
- name: F1
type: f1
value: 0.8556565212985171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-qqp
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5515
- Accuracy: 0.8943
- F1: 0.8557
- Combined Score: 0.8750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.4574 | 1.0 | 90962 | 0.4946 | 0.8694 | 0.8297 | 0.8496 |
| 0.3387 | 2.0 | 181924 | 0.4745 | 0.8874 | 0.8437 | 0.8655 |
| 0.2029 | 3.0 | 272886 | 0.5515 | 0.8943 | 0.8557 | 0.8750 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Chaddmckay/Cdm | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6425992779783394
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-rte
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7528
- Accuracy: 0.6426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7105 | 1.0 | 623 | 0.6887 | 0.5740 |
| 0.6714 | 2.0 | 1246 | 0.6742 | 0.6209 |
| 0.509 | 3.0 | 1869 | 0.7528 | 0.6426 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Chae/botman | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9048165137614679
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-sst2
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5240
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.394 | 1.0 | 16838 | 0.3896 | 0.8968 |
| 0.2076 | 2.0 | 33676 | 0.5100 | 0.8956 |
| 0.1148 | 3.0 | 50514 | 0.5240 | 0.9048 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Chaewon/mmnt_decoder_en | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | "2021-10-07T16:55:55Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: fnet-large-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8532669137129205
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-stsb
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6250
- Pearson: 0.8554
- Spearmanr: 0.8533
- Combined Score: 0.8543
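For reference, the Pearson and Spearman values are correlations between the model's predicted similarity scores and the gold STS-B labels, and the combined score is (up to rounding) their average. A small sketch with toy numbers, not this model's actual predictions:
```python
from scipy.stats import pearsonr, spearmanr

# Toy values for illustration only; these are not the model's real outputs.
predictions = [4.8, 2.1, 0.3, 3.9]
references = [5.0, 2.5, 0.0, 3.5]

pearson = pearsonr(predictions, references)[0]
spearman = spearmanr(predictions, references)[0]
combined = (pearson + spearman) / 2  # the "Combined Score" above is formed this way
print(pearson, spearman, combined)
```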
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.0727 | 1.0 | 1438 | 0.7718 | 0.8187 | 0.8240 | 0.8214 |
| 0.4619 | 2.0 | 2876 | 0.7704 | 0.8472 | 0.8500 | 0.8486 |
| 0.2401 | 3.0 | 4314 | 0.6250 | 0.8554 | 0.8533 | 0.8543 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
Chaewon/mnmt_decoder_en | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | "2021-09-23T05:28:41Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-large-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.38028169014084506
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-large-finetuned-wnli
This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6953
- Accuracy: 0.3803
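The accuracy above is the standard GLUE WNLI metric; a small sketch of computing it with the `datasets` metric loader, using toy predictions rather than this model's outputs:
```python
from datasets import load_metric

# Toy predictions/labels for illustration only.
metric = load_metric("glue", "wnli")
print(metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0]))
# -> {'accuracy': 0.75}
```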
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7217 | 1.0 | 159 | 0.6864 | 0.5634 |
| 0.7056 | 2.0 | 318 | 0.6869 | 0.5634 |
| 0.706 | 3.0 | 477 | 0.6875 | 0.5634 |
| 0.7032 | 4.0 | 636 | 0.6931 | 0.5634 |
| 0.7025 | 5.0 | 795 | 0.6953 | 0.3803 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
chainyo/speaker-recognition-meetup | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | "2021-03-26T16:44:09Z" | ---
language: cnh
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Hakha Chin by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice cnh
type: common_voice
args: cnh
metrics:
- name: Test WER
type: wer
value: 31.38
---
# Wav2Vec2-Large-XLSR-53-Hakha-Chin
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hakha Chin using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "cnh", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-cnh")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-cnh")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Hakha Chin test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "cnh", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-cnh")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-cnh")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\/]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 31.38 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/1pejk9gv9vMcUOjyVQ_vsV2ngW4NiWLWy?usp=sharing). |
ChaitanyaU/FineTuneLM | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2021-03-27T16:56:10Z" | ---
language: eo
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Esperanto by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice eo
type: common_voice
args: eo
metrics:
- name: Test WER
type: wer
value: 10.13
---
# Wav2Vec2-Large-XLSR-53-Esperanto
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "eo", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Esperanto test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
test_dataset = load_dataset("common_voice", "eo", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-eo')
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace('—',' ').replace('–',' ')
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"],chunk_size=5000)))
```
**Test Result**: 10.13 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://github.com/gchhablani/wav2vec2-week/blob/main/fine-tune-xlsr-wav2vec2-on-esperanto-asr-with-transformers-final.ipynb). |
Chakita/Friends | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language: gu
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Gujarati by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR gu
type: openslr
metrics:
- name: Test WER
type: wer
value: 23.55
---
# Wav2Vec2-Large-XLSR-53-Gujarati
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Gujarati using the [OpenSLR SLR78](http://openslr.org/78/) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Gujarati `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET.
# For sample see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # The original data was with 48,000 sampling rate. You can change it according to your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
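The `test_dataset` placeholder above is left as a TODO (the linked Colab in the Training section shows the full loading code). One possible way to build a dataset with the expected `path` and `sentence` columns from a local OpenSLR-style download is sketched below; the `line_index.tsv` layout and the `audio/` directory are assumptions about the local file arrangement, not details taken from this card.
```python
import pandas as pd
from datasets import Dataset

# Assumed layout: a tab-separated index of "<utterance_id>\t<sentence>" rows and WAV files under ./audio/
df = pd.read_csv("line_index.tsv", sep="\t", names=["utt_id", "sentence"])
df["path"] = "audio/" + df["utt_id"] + ".wav"
test_dataset = Dataset.from_pandas(df[["path", "sentence"]].reset_index(drop=True))
```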
## Evaluation
The model can be evaluated as follows on 10% of the Gujarati data on OpenSLR.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-gu")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…\'\_\’]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 23.55 %
## Training
90% of the OpenSLR Gujarati Male+Female dataset was used for training, after removing few examples that contained Roman characters.
The colab notebook used for training can be found [here](https://colab.research.google.com/drive/1fRQlgl4EPR4qKGScgza3MpWgbL5BeWtn?usp=sharing).
|
Chakita/KNUBert | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | "2021-03-24T19:47:37Z" | ---
language: hu
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Hungarian by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hu
type: common_voice
args: hu
metrics:
- name: Test WER
type: wer
value: 46.75
---
# Wav2Vec2-Large-XLSR-53-Hungarian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hu", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-hu")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-hu")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Hungarian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-hu")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-hu")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 46.75 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://github.com/gchhablani/wav2vec2-week/blob/main/fine-tune-xlsr-wav2vec2-on-hungarian-asr.ipynb). The notebook containing the code used for evaluation can be found [here](https://colab.research.google.com/drive/1esYvWS6IkTQFfRqi_b6lAJEycuecInHE?usp=sharing). |
Chakita/KROBERT | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: ia
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Interlingua by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ia
type: common_voice
args: ia
metrics:
- name: Test WER
type: wer
value: 25.09
---
# Wav2Vec2-Large-XLSR-53-Interlingua
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Interlingua using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ia", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-ia")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-ia")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Interlingua test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ia", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-ia")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-ia")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.09 %
## Training
The Common Voice `train` and `validation` datasets were used for training for 4000 steps due to GPU timeout. The results are based on the 4000 steps checkpoint. There is a good chance that full training will lead to better results.
The colab notebook used can be found [here](https://colab.research.google.com/drive/1nbqvVwS8DTNrCzzh3vgrN55qxgoqbita?usp=sharing) and the evaluation can be found [here](https://colab.research.google.com/drive/18pCWBwNNUMUYV1FiqT_0EsTbCfwwe7ms?usp=sharing). |
Chakita/Kalbert | [
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: it
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Italian by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice it
type: common_voice
args: it
metrics:
- name: Test WER
type: wer
value: 11.49
---
# Wav2Vec2-Large-XLSR-53-Italian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "it", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-it')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-it')
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Italian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import unicodedata
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
allowed_characters = list(" 'abcdefghijklmnopqrstuvwxyzàáèéìíòóùú")  # space, apostrophe, a-z, and accented vowels
def remove_accents(input_str):
if input_str in allowed_characters:
return input_str
if input_str == 'ø':
return 'o'
elif input_str=='ß' or input_str =='ß':
return 'b'
elif input_str=='ё':
return 'e'
elif input_str=='đ':
return 'd'
nfkd_form = unicodedata.normalize('NFKD', input_str)
only_ascii = nfkd_form.encode('ASCII', 'ignore').decode()
if only_ascii is None or only_ascii=='':
return input_str
else:
return only_ascii
def fix_accents(sentence):
new_sentence=''
for char in sentence:
new_sentence+=remove_accents(char)
return new_sentence
test_dataset = load_dataset("common_voice", "it", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('gchhablani/wav2vec2-large-xlsr-it')
model = Wav2Vec2ForCTC.from_pretrained('gchhablani/wav2vec2-large-xlsr-it')
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_remove= [",", "?", ".", "!", "-", ";", ":", '""', "%", '"', "�",'ʿ','“','”','(','=','`','_','+','«','<','>','~','…','«','»','–','\[','\]','°','̇','´','ʾ','„','̇','̇','̇','¡'] # All extra characters
chars_to_remove_regex = f'[{"".join(chars_to_remove)}]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_remove_regex, '', batch["sentence"]).lower().replace('‘',"'").replace('ʻ',"'").replace('ʼ',"'").replace('’',"'").replace('ʹ',"''").replace('̇','')
batch["sentence"] = fix_accents(batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"],chunk_size=5000)))
```
**Test Result**: 11.49 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://github.com/gchhablani/wav2vec2-week/blob/main/fine-tune-xlsr-wav2vec2-on-italian-asr-with-transformers_final.ipynb). |
Chakita/KannadaBERT | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | "2021-03-25T20:55:44Z" | ---
language: mr
datasets:
- interspeech_2021_asr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Marathi 2 by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: InterSpeech 2021 ASR mr
type: interspeech_2021_asr
metrics:
- name: Test WER
type: wer
value: 14.53
---
# Wav2Vec2-Large-XLSR-53-Marathi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marathi using a part of the [InterSpeech 2021 Marathi](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Marathi `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-2")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-2")
resampler = torchaudio.transforms.Resample(8_000, 16_000) # The original data was with 8,000 sampling rate. You can change it according to your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the test set of the Marathi data on InterSpeech-2021.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-2")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-2")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(8_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 19.98 % (555 examples from test set were used for evaluation)
**Test Result on 10% of OpenSLR74 data**: 64.64 %
## Training
5000 examples of the InterSpeech Marathi dataset were used for training.
The colab notebook used for training can be found [here](https://colab.research.google.com/drive/1sIwGOLJPQqhKm_wVZDkzRuoJqAEgArFr?usp=sharing).
|
Chakita/gpt2_mwp | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | "2021-03-26T00:56:33Z" | ---
language: mr
datasets:
- openslr
- interspeech_2021_asr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Marathi by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR mr, InterSpeech 2021 ASR mr
type: openslr, interspeech_2021_asr
metrics:
- name: Test WER
type: wer
value: 19.05
---
# Wav2Vec2-Large-XLSR-53-Marathi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marathi using the [OpenSLR SLR64](http://openslr.org/64/) dataset and [InterSpeech 2021](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html) Marathi datasets. Note that the OpenSLR data contains only female voices. Please keep this in mind before using the model for your task. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Marathi `text` and `audio_path` fields:
```python
import torch
import torchaudio
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_data = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = librosa.resample(speech_array[0].numpy(), sampling_rate, 16_000) # sampling_rate can vary
return batch
test_data = test_data.map(speech_file_to_array_fn)
inputs = processor(test_data["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_data["text"][:2])
```
## Evaluation
The model can be evaluated as follows on 10% of the Marathi data on OpenSLR.
```python
import torch
import torchaudio
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_data = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr-3")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["text"] = re.sub(chars_to_ignore_regex, '', batch["text"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech"] = librosa.resample(speech_array[0].numpy(), sampling_rate, 16_000)
return batch
test_data = test_data.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_data.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["text"])))
```
**Test Result**: 19.05 % (157+157 examples)
**Test Result on OpenSLR test**: 14.15 % (157 examples)
**Test Results on InterSpeech test**: 27.14 % (157 examples)
## Training
1412 examples of the OpenSLR Marathi dataset and 1412 examples of InterSpeech 2021 Marathi ASR dataset were used for training. For testing, 157 examples from each were used.
The colab notebook used for training and evaluation can be found [here](https://colab.research.google.com/drive/15fUhb4bUFFGJyNLr-_alvPxVX4w0YXRu?usp=sharing).
|
Chalponkey/DialoGPT-small-Barry | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
language: mr
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Marathi by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR mr
type: openslr
metrics:
- name: Test WER
type: wer
value: 14.53
---
# Wav2Vec2-Large-XLSR-53-Marathi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marathi using the [OpenSLR SLR64](http://openslr.org/64/) dataset. Note that this data contains only female voices. Please keep this in mind before using the model for your task, although it works very well for male voices too. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Marathi `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # The original data was with 48,000 sampling rate. You can change it according to your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on 10% of the Marathi data on OpenSLR.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-mr")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"),
attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 14.53 %
## Training
90% of the OpenSLR Marathi dataset was used for training.
The colab notebook used for training can be found [here](https://colab.research.google.com/drive/1_BbLyLqDUsXG3RpSULfLRjC6UY3RjwME?usp=sharing).
|
Champion/test_upload_vox2_wavlm_epoch8 | [
"sidekit",
"audio"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: or
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Odia by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice or
type: common_voice
args: or
metrics:
- name: Test WER
type: wer
value: 52.64
---
# Wav2Vec2-Large-XLSR-53-Odia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-or")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-or")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…\'\_\’\।\|]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set in batches
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.64 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The colab notebook used can be found [here](https://colab.research.google.com/drive/1s8DrwgB5y4Z7xXIrPXo1rQA5_1OZ8WD5?usp=sharing). |
Chan/distilgpt2-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2021-03-23T13:56:09Z" | ---
language: pt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Portuguese by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice pt
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 17.22
---
# Wav2Vec2-Large-XLSR-53-Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-pt")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-pt")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Portuguese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-pt")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-pt")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 17.22 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://github.com/jqueguiner/wav2vec2-sprint/blob/main/run_common_voice.py).
The parameters passed were:
```bash
#!/usr/bin/env bash
python run_common_voice.py \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="pt" \
--output_dir=/workspace/output_models/pt/wav2vec2-large-xlsr-pt \
--cache_dir=/workspace/data \
--overwrite_output_dir \
--num_train_epochs="30" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--evaluation_strategy="steps" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--fp16 \
--freeze_feature_extractor \
--save_steps="500" \
--eval_steps="500" \
--save_total_limit="1" \
--logging_steps="500" \
--group_by_length \
--feat_proj_dropout="0.0" \
--layerdrop="0.1" \
--gradient_checkpointing \
--do_train --do_eval \
```
Notebook containing the evaluation can be found [here](https://colab.research.google.com/drive/14e-zNK_5pm8EMY9EbeZerpHx7WsGycqG?usp=sharing). |
Chan/distilroberta-base-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2021-03-27T14:23:51Z" | ---
language: rm-sursilv
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Wav2Vec2 Large 53 Romansh Sursilvan by Gunjan Chhablani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice rm-sursilv
type: common_voice
args: rm-sursilv
metrics:
- name: Test WER
type: wer
value: 25.16
---
# Wav2Vec2-Large-XLSR-53-Romansh-Sursilvan
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Romansh Sursilvan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "rm-sursilv", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Romansh Sursilvan test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "rm-sursilv", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model = Wav2Vec2ForCTC.from_pretrained("gchhablani/wav2vec2-large-xlsr-rm-sursilv")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�\\…\\«\\»\\–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.16 %
## Training
The Common Voice `train` and `validation` datasets were used for training. The code can be found [here](https://colab.research.google.com/drive/1dpZr_GzRowCciUbzM3GnW04TNKnB7vrP?usp=sharing). |
Cheapestmedsshop/Buymodafinilus | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | "2021-07-28T12:51:00Z" | ---
language: el
---
# GreekSocialBERT
## Model description
A Greek language model based on [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1)
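The card ships without a usage snippet; the following is only a minimal fill-mask sketch with the `transformers` pipeline. The repository id is a placeholder for wherever this model is published on the Hub, and the `[MASK]` token is assumed from the GreekBERT base model.
```python
# Minimal fill-mask sketch. NOTE: "<greeksocialbert-repo-id>" is a placeholder;
# replace it with the actual Hub repository id of GreekSocialBERT.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="<greeksocialbert-repo-id>")

# GreekBERT-based models use the [MASK] token (assumption inherited from the base model).
print(fill_mask("Σήμερα είναι μια [MASK] μέρα."))
```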
## Training data
The training data is a corpus of 458,293 documents collected from Greek social media accounts.
The training corpus has been collected and provided by [Palo LTD](http://www.paloservices.com/)
## Eval results
### BibTeX entry and citation info
```bibtex
@Article{info12080331,
AUTHOR = {Alexandridis, Georgios and Varlamis, Iraklis and Korovesis, Konstantinos and Caridakis, George and Tsantilas, Panagiotis},
TITLE = {A Survey on Sentiment Analysis and Opinion Mining in Greek Social Media},
JOURNAL = {Information},
VOLUME = {12},
YEAR = {2021},
NUMBER = {8},
ARTICLE-NUMBER = {331},
URL = {https://www.mdpi.com/2078-2489/12/8/331},
ISSN = {2078-2489},
DOI = {10.3390/info12080331}
}
```
|
Cheatham/xlm-roberta-base-finetuned | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
language: el
---
# PaloBERT
## Model description
A Greek language model based on [RoBERTa](https://arxiv.org/abs/1907.11692)
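No usage snippet is provided in the card; the sketch below only illustrates loading the model for masked-language modelling. The repository id is a placeholder, and the `<mask>` token is assumed from the RoBERTa architecture.
```python
# Minimal masked-LM sketch. NOTE: "<palobert-repo-id>" is a placeholder;
# replace it with the actual Hub repository id of PaloBERT.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("<palobert-repo-id>")
model = AutoModelForMaskedLM.from_pretrained("<palobert-repo-id>")

# RoBERTa-style models use <mask> as the mask token (assumed here).
inputs = tokenizer("Σήμερα είναι μια <mask> μέρα.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (batch_size, sequence_length, vocab_size)
```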
## Training data
The training data is a corpus of 458,293 documents collected from Greek social media accounts. The repository also contains a GPT-2 tokenizer trained from scratch on the same corpus.
The training corpus has been collected and provided by [Palo LTD](http://www.paloservices.com/)
## Eval results
### BibTeX entry and citation info
```bibtex
@Article{info12080331,
AUTHOR = {Alexandridis, Georgios and Varlamis, Iraklis and Korovesis, Konstantinos and Caridakis, George and Tsantilas, Panagiotis},
TITLE = {A Survey on Sentiment Analysis and Opinion Mining in Greek Social Media},
JOURNAL = {Information},
VOLUME = {12},
YEAR = {2021},
NUMBER = {8},
ARTICLE-NUMBER = {331},
URL = {https://www.mdpi.com/2078-2489/12/8/331},
ISSN = {2078-2489},
DOI = {10.3390/info12080331}
}
```
|
Check/vaw2tmp | [
"tensorboard"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | https://dl.fbaipublicfiles.com/avhubert/model/lrs3_vox/vsr/base_vox_433h.pt |
D3xter1922/electra-base-discriminator-finetuned-mnli | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | A fake news detector using RoBERTa.
Dataset: https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset
Training involved a hyperparameter search with 10 trials.
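The card does not show how the 10-trial search was run; the sketch below is one plausible setup using `Trainer.hyperparameter_search` with the Optuna backend (requires `optuna`). The tiny inline dataset, the label convention, and all unspecified settings are illustrative assumptions, not the author's actual configuration.
```python
# Hedged sketch of a 10-trial hyperparameter search with the Hugging Face Trainer.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Placeholder data; in practice this would be the Kaggle fake/real news corpus.
raw = Dataset.from_dict({
    "text": ["Breaking: moon made of cheese", "Parliament passes budget bill"],
    "label": [1, 0],  # 1 = fake, 0 = real (assumed label convention)
})
encoded = raw.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=128))

def model_init():
    # hyperparameter_search re-instantiates the model for every trial
    return AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="fake-news-roberta", evaluation_strategy="epoch", num_train_epochs=1),
    train_dataset=encoded,
    eval_dataset=encoded,
)

best_run = trainer.hyperparameter_search(direction="minimize", backend="optuna", n_trials=10)
print(best_run.hyperparameters)
```
 |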
DJSammy/bert-base-danish-uncased_BotXO-ai | [
"pytorch",
"jax",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"bert",
"masked-lm",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | This repository belongs to TransportersBERT from ActTrans publication.
Taju, Semmy Wellem, Syed Muazzam Ali Shah, and Yu-Yen Ou. “ActTRANS: Functional Classification in Active Transport Proteins Based on Transfer Learning and Contextual Representations.” Computational Biology and Chemistry 93 (August 1, 2021): 107537. https://doi.org/10.1016/j.compbiolchem.2021.107537
|
DTAI-KULeuven/robbertje-1-gb-shuffled | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: bn
tags:
- text-generation
widget:
- text: তোমাকে দেখেছি আমার হৃদয় মাঝে
---
# Robi Kobi
### Created by [Ritobrata Ghosh](https://ghosh-r.github.io)
A model that writes Bengali poems in the style of Nobel Laureate poet Rabindranath Tagore.
This model is fine-tuned on 1,400+ poems written by Rabindranath Tagore. It leverages the [Bangla GPT-2](https://huggingface.co/ghosh-r/bangla-gpt2) pretrained model, which was trained on the mc4-Bengali dataset.
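A minimal generation sketch is shown below; the repository id is a placeholder for wherever Robi Kobi is published on the Hub, and the sampling settings are illustrative assumptions.
```python
# Minimal text-generation sketch. NOTE: "<robi-kobi-repo-id>" is a placeholder;
# replace it with the actual Hub repository id of this model.
from transformers import pipeline

generator = pipeline("text-generation", model="<robi-kobi-repo-id>")

# Prompt taken from the widget example above; sampling settings are illustrative.
poem = generator("তোমাকে দেখেছি আমার হৃদয় মাঝে", max_length=60, do_sample=True, top_p=0.95)
print(poem[0]["generated_text"])
```
 |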
Daiki/scibert_scivocab_uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- ro
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
- gigant/romanian_speech_synthesis_0_8_1
model-index:
- name: wav2vec2-ro-300m_01
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event
type: speech-recognition-community-v2/dev_data
args: ro
metrics:
- name: Dev WER (without LM)
type: wer
value: 46.99
- name: Dev CER (without LM)
type: cer
value: 16.04
- name: Dev WER (with LM)
type: wer
value: 38.63
- name: Dev CER (with LM)
type: cer
value: 14.52
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: mozilla-foundation/common_voice_8_0
args: ro
metrics:
- name: Test WER (without LM)
type: wer
value: 11.73
- name: Test CER (without LM)
type: cer
value: 2.93
- name: Test WER (with LM)
type: wer
value: 7.31
- name: Test CER (with LM)
type: cer
value: 2.17
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ro
metrics:
- name: Test WER
type: wer
value: 43.23
---
You can test this model online with the [**Space for Romanian Speech Recognition**](https://huggingface.co/spaces/gigant/romanian-speech-recognition)
The model ranked **TOP-1** on Romanian Speech Recognition during HuggingFace's Robust Speech Challenge :
* [**The 🤗 Speech Bench**](https://huggingface.co/spaces/huggingface/hf-speech-bench)
* [**Speech Challenge Leaderboard**](https://huggingface.co/spaces/speech-recognition-community-v2/FinalLeaderboard)
# Romanian Wav2Vec2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset, with extra training data from [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) dataset.
Without the 5-gram Language Model optimization, it achieves the following results on the evaluation set (Common Voice 8.0, Romanian subset, test split):
- Loss: 0.1553
- Wer: 0.1174
- Cer: 0.0294
## Model description
The architecture is based on [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) with a speech recognition CTC head and an added 5-gram language model (using [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and [kenlm](https://github.com/kpu/kenlm)) trained on the [Romanian Corpora Parliament](https://huggingface.co/datasets/gigant/ro_corpora_parliament_processed) dataset. Those libraries are needed in order for the language model-boosted decoder to work.
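A rough sketch of how such a decoder is wired up with `pyctcdecode` is shown below; the `.arpa` path is a placeholder and the label post-processing is simplified. The published checkpoint already ships with this decoder, so this is only illustrative.
```
# Hedged sketch: attaching a kenlm 5-gram model to the CTC tokenizer with pyctcdecode.
# The .arpa path is a placeholder; the published model already bundles this decoder.
from transformers import AutoProcessor, Wav2Vec2ProcessorWithLM
from pyctcdecode import build_ctcdecoder

processor = AutoProcessor.from_pretrained("gigant/romanian-wav2vec2")

# CTC labels in vocabulary-id order (special-token handling omitted for brevity)
vocab = processor.tokenizer.get_vocab()
labels = [token for token, _ in sorted(vocab.items(), key=lambda item: item[1])]

decoder = build_ctcdecoder(labels, kenlm_model_path="5gram_parliament.arpa")  # placeholder path
processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)
```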
## Intended uses & limitations
The model is made for speech recognition in Romanian from audio clips sampled at **16kHz**. The predicted text is lowercased and does not contain any punctuation.
## How to use
Make sure you have installed the correct dependencies for the language model-boosted version to work. You can just run this command to install the `kenlm` and `pyctcdecode` libraries :
```pip install https://github.com/kpu/kenlm/archive/master.zip pyctcdecode```
With the framework `transformers` you can load the model with the following code :
```
from transformers import AutoProcessor, AutoModelForCTC
processor = AutoProcessor.from_pretrained("gigant/romanian-wav2vec2")
model = AutoModelForCTC.from_pretrained("gigant/romanian-wav2vec2")
```
Or, if you want to test the model, you can load the automatic speech recognition pipeline from `transformers` with :
```
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="gigant/romanian-wav2vec2")
```
## Example use with the `datasets` library
First, you need to load your data.
We will use the [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) dataset in this example.
```
from datasets import load_dataset
dataset = load_dataset("gigant/romanian_speech_synthesis_0_8_1")
```
You can listen to the samples with the `IPython.display` library :
```
from IPython.display import Audio
i = 0
sample = dataset["train"][i]
Audio(sample["audio"]["array"], rate = sample["audio"]["sampling_rate"])
```
The model is trained to work with audio sampled at 16kHz, so if the sampling rate of the audio in the dataset is different, we will have to resample it.
In the example, the audio is sampled at 48kHz. We can see this by checking `dataset["train"][0]["audio"]["sampling_rate"]`
The following code resamples the audio using the `torchaudio` library :
```
import torchaudio
import torch
i = 0
audio = sample["audio"]["array"]
rate = sample["audio"]["sampling_rate"]
resampler = torchaudio.transforms.Resample(rate, 16_000)
audio_16 = resampler(torch.Tensor(audio)).numpy()
```
To listen to the resampled sample :
```
Audio(audio_16, rate=16000)
```
Now you can get the model prediction by running
```
predicted_text = asr(audio_16)
ground_truth = dataset["train"][i]["sentence"]
print(f"Predicted text : {predicted_text}")
print(f"Ground truth : {ground_truth}")
```
## Training and evaluation data
Training data :
- [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) : train + validation + other splits
- [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) : train + test splits
Evaluation data :
- [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) : test split
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
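Expressed with the `transformers` API, these settings correspond roughly to the following `TrainingArguments` (a sketch only; the output directory and every option not listed above are assumptions):
```
# Sketch of TrainingArguments matching the hyperparameters above.
# Output directory is a placeholder; unlisted options keep their defaults
# (the default AdamW optimizer already uses betas=(0.9, 0.999) and eps=1e-08).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-ro-300m_01",      # placeholder
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,         # 16 x 3 = 48 effective train batch size
    learning_rate=3e-3,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=50,
    seed=42,
    fp16=True,                             # native AMP mixed precision
)
```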
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 2.9272 | 0.78 | 500 | 0.7603 | 0.7734 | 0.2355 |
| 0.6157 | 1.55 | 1000 | 0.4003 | 0.4866 | 0.1247 |
| 0.4452 | 2.33 | 1500 | 0.2960 | 0.3689 | 0.0910 |
| 0.3631 | 3.11 | 2000 | 0.2580 | 0.3205 | 0.0796 |
| 0.3153 | 3.88 | 2500 | 0.2465 | 0.2977 | 0.0747 |
| 0.2795 | 4.66 | 3000 | 0.2274 | 0.2789 | 0.0694 |
| 0.2615 | 5.43 | 3500 | 0.2277 | 0.2685 | 0.0675 |
| 0.2389 | 6.21 | 4000 | 0.2135 | 0.2518 | 0.0627 |
| 0.2229 | 6.99 | 4500 | 0.2054 | 0.2449 | 0.0614 |
| 0.2067 | 7.76 | 5000 | 0.2096 | 0.2378 | 0.0597 |
| 0.1977 | 8.54 | 5500 | 0.2042 | 0.2387 | 0.0600 |
| 0.1896 | 9.32 | 6000 | 0.2110 | 0.2383 | 0.0595 |
| 0.1801 | 10.09 | 6500 | 0.1909 | 0.2165 | 0.0548 |
| 0.174 | 10.87 | 7000 | 0.1883 | 0.2206 | 0.0559 |
| 0.1685 | 11.65 | 7500 | 0.1848 | 0.2097 | 0.0528 |
| 0.1591 | 12.42 | 8000 | 0.1851 | 0.2039 | 0.0514 |
| 0.1537 | 13.2 | 8500 | 0.1881 | 0.2065 | 0.0518 |
| 0.1504 | 13.97 | 9000 | 0.1840 | 0.1972 | 0.0499 |
| 0.145 | 14.75 | 9500 | 0.1845 | 0.2029 | 0.0517 |
| 0.1417 | 15.53 | 10000 | 0.1884 | 0.2003 | 0.0507 |
| 0.1364 | 16.3 | 10500 | 0.2010 | 0.2037 | 0.0517 |
| 0.1331 | 17.08 | 11000 | 0.1838 | 0.1923 | 0.0483 |
| 0.129 | 17.86 | 11500 | 0.1818 | 0.1922 | 0.0489 |
| 0.1198 | 18.63 | 12000 | 0.1760 | 0.1861 | 0.0465 |
| 0.1203 | 19.41 | 12500 | 0.1686 | 0.1839 | 0.0465 |
| 0.1225 | 20.19 | 13000 | 0.1828 | 0.1920 | 0.0479 |
| 0.1145 | 20.96 | 13500 | 0.1673 | 0.1784 | 0.0446 |
| 0.1053 | 21.74 | 14000 | 0.1802 | 0.1810 | 0.0456 |
| 0.1071 | 22.51 | 14500 | 0.1769 | 0.1775 | 0.0444 |
| 0.1053 | 23.29 | 15000 | 0.1920 | 0.1783 | 0.0457 |
| 0.1024 | 24.07 | 15500 | 0.1904 | 0.1775 | 0.0446 |
| 0.0987 | 24.84 | 16000 | 0.1793 | 0.1762 | 0.0446 |
| 0.0949 | 25.62 | 16500 | 0.1801 | 0.1766 | 0.0443 |
| 0.0942 | 26.4 | 17000 | 0.1731 | 0.1659 | 0.0423 |
| 0.0906 | 27.17 | 17500 | 0.1776 | 0.1698 | 0.0424 |
| 0.0861 | 27.95 | 18000 | 0.1716 | 0.1600 | 0.0406 |
| 0.0851 | 28.73 | 18500 | 0.1662 | 0.1630 | 0.0410 |
| 0.0844 | 29.5 | 19000 | 0.1671 | 0.1572 | 0.0393 |
| 0.0792 | 30.28 | 19500 | 0.1768 | 0.1599 | 0.0407 |
| 0.0798 | 31.06 | 20000 | 0.1732 | 0.1558 | 0.0394 |
| 0.0779 | 31.83 | 20500 | 0.1694 | 0.1544 | 0.0388 |
| 0.0718 | 32.61 | 21000 | 0.1709 | 0.1578 | 0.0399 |
| 0.0732 | 33.38 | 21500 | 0.1697 | 0.1523 | 0.0391 |
| 0.0708 | 34.16 | 22000 | 0.1616 | 0.1474 | 0.0375 |
| 0.0678 | 34.94 | 22500 | 0.1698 | 0.1474 | 0.0375 |
| 0.0642 | 35.71 | 23000 | 0.1681 | 0.1459 | 0.0369 |
| 0.0661 | 36.49 | 23500 | 0.1612 | 0.1411 | 0.0357 |
| 0.0629 | 37.27 | 24000 | 0.1662 | 0.1414 | 0.0355 |
| 0.0587 | 38.04 | 24500 | 0.1659 | 0.1408 | 0.0351 |
| 0.0581 | 38.82 | 25000 | 0.1612 | 0.1382 | 0.0352 |
| 0.0556 | 39.6 | 25500 | 0.1647 | 0.1376 | 0.0345 |
| 0.0543 | 40.37 | 26000 | 0.1658 | 0.1335 | 0.0337 |
| 0.052 | 41.15 | 26500 | 0.1716 | 0.1369 | 0.0343 |
| 0.0513 | 41.92 | 27000 | 0.1600 | 0.1317 | 0.0330 |
| 0.0491 | 42.7 | 27500 | 0.1671 | 0.1311 | 0.0328 |
| 0.0463 | 43.48 | 28000 | 0.1613 | 0.1289 | 0.0324 |
| 0.0468 | 44.25 | 28500 | 0.1599 | 0.1260 | 0.0315 |
| 0.0435 | 45.03 | 29000 | 0.1556 | 0.1232 | 0.0308 |
| 0.043 | 45.81 | 29500 | 0.1588 | 0.1240 | 0.0309 |
| 0.0421 | 46.58 | 30000 | 0.1567 | 0.1217 | 0.0308 |
| 0.04 | 47.36 | 30500 | 0.1533 | 0.1198 | 0.0302 |
| 0.0389 | 48.14 | 31000 | 0.1582 | 0.1185 | 0.0297 |
| 0.0387 | 48.91 | 31500 | 0.1576 | 0.1187 | 0.0297 |
| 0.0376 | 49.69 | 32000 | 0.1560 | 0.1182 | 0.0295 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
- pyctcdecode 0.3.0
- kenlm
|