modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars)
---|---|---|---|---|---|---
dccuchile/albert-xlarge-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_3_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_3_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0585
- F1: 0.7952
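The card does not include a usage example; the snippet below is only a rough sketch of loading the checkpoint with the `transformers` text-classification pipeline. The repository namespace and the meaning of the three labels are not documented here, so both are placeholders/assumptions.
```python
from transformers import pipeline

# Hypothetical sketch: replace <namespace> with the actual owner of this repository.
# The id2label mapping for the three classes is not documented in the card, so inspect
# classifier.model.config.id2label before interpreting predictions.
model_id = "<namespace>/Bio_ClinicalBERT_fold_3_ternary_v1"
classifier = pipeline("text-classification", model=model_id)

print(classifier("The patient was discharged home in stable condition."))
```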
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5738 | 0.7709 |
| 0.5512 | 2.0 | 578 | 0.5828 | 0.7733 |
| 0.5512 | 3.0 | 867 | 0.7217 | 0.7830 |
| 0.2304 | 4.0 | 1156 | 1.0389 | 0.7867 |
| 0.2304 | 5.0 | 1445 | 1.0992 | 0.7915 |
| 0.0951 | 6.0 | 1734 | 1.3528 | 0.7806 |
| 0.0388 | 7.0 | 2023 | 1.4223 | 0.7879 |
| 0.0388 | 8.0 | 2312 | 1.5588 | 0.7830 |
| 0.0172 | 9.0 | 2601 | 1.5913 | 0.7976 |
| 0.0172 | 10.0 | 2890 | 1.7464 | 0.7842 |
| 0.0143 | 11.0 | 3179 | 1.7395 | 0.7927 |
| 0.0143 | 12.0 | 3468 | 1.7523 | 0.7939 |
| 0.0108 | 13.0 | 3757 | 1.8059 | 0.7952 |
| 0.0099 | 14.0 | 4046 | 1.9056 | 0.7855 |
| 0.0099 | 15.0 | 4335 | 1.8550 | 0.7903 |
| 0.0076 | 16.0 | 4624 | 1.8718 | 0.7988 |
| 0.0076 | 17.0 | 4913 | 1.9325 | 0.7976 |
| 0.0033 | 18.0 | 5202 | 1.9504 | 0.7952 |
| 0.0033 | 19.0 | 5491 | 1.9841 | 0.7879 |
| 0.003 | 20.0 | 5780 | 1.9843 | 0.7952 |
| 0.0001 | 21.0 | 6069 | 2.0110 | 0.7927 |
| 0.0001 | 22.0 | 6358 | 2.0049 | 0.7939 |
| 0.0028 | 23.0 | 6647 | 2.0638 | 0.7915 |
| 0.0028 | 24.0 | 6936 | 2.0612 | 0.7903 |
| 0.0011 | 25.0 | 7225 | 2.0585 | 0.7952 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dccuchile/albert-xlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
datasets:
- wnut2017
metrics:
- f1
- precision
- recall
model-index:
- name: tner/roberta-large-wnut2017
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut2017
type: wnut2017
args: wnut2017
metrics:
- name: F1
type: f1
value: 0.5375139977603584
- name: Precision
type: precision
value: 0.6789250353606789
- name: Recall
type: recall
value: 0.4448563484708063
- name: F1 (macro)
type: f1_macro
value: 0.4734480458244917
- name: Precision (macro)
type: precision_macro
value: 0.59471614080646
- name: Recall (macro)
type: recall_macro
value: 0.4020936892146829
- name: F1 (entity span)
type: f1_entity_span
value: 0.6304591265397536
- name: Precision (entity span)
type: precision_entity_span
value: 0.7963224893917963
- name: Recall (entity span)
type: recall_entity_span
value: 0.5217794253938832
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/roberta-large-wnut2017
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
[tner/wnut2017](https://huggingface.co/datasets/tner/wnut2017) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.5375139977603584
- Precision (micro): 0.6789250353606789
- Recall (micro): 0.4448563484708063
- F1 (macro): 0.4734480458244917
- Precision (macro): 0.59471614080646
- Recall (macro): 0.4020936892146829
The per-entity breakdown of the F1 score on the test set is below:
- corporation: 0.4065040650406504
- group: 0.33913043478260874
- location: 0.6715867158671587
- person: 0.6657342657342658
- product: 0.27999999999999997
- work_of_art: 0.4777327935222672
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.5084441265818846, 0.5659035599952082]
- 95%: [0.5009032784561068, 0.5708361009044657]
- F1 (macro):
- 90%: [0.5084441265818846, 0.5659035599952082]
- 95%: [0.5009032784561068, 0.5708361009044657]
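As an illustration only (this is not the exact evaluation code used by T-NER), a bootstrap confidence interval of this kind can be estimated by resampling test sentences with replacement and recomputing the entity-level F1, e.g. with `seqeval`:
```python
import numpy as np
from seqeval.metrics import f1_score  # assumes seqeval is installed

def bootstrap_f1_ci(true_tags, pred_tags, n_boot=1000, level=0.95, seed=42):
    """Rough sketch: micro-F1 confidence interval via sentence-level bootstrap.

    true_tags / pred_tags are lists of per-sentence IOB tag sequences,
    e.g. [["B-person", "I-person", "O"], ...].
    """
    rng = np.random.default_rng(seed)
    n = len(true_tags)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample sentences with replacement
        scores.append(f1_score([true_tags[i] for i in idx], [pred_tags[i] for i in idx]))
    lo, hi = np.quantile(scores, [(1 - level) / 2, 1 - (1 - level) / 2])
    return float(lo), float(hi)
```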
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-wnut2017/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/roberta-large-wnut2017/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below:
```python
from tner import TransformersNER
model = TransformersNER("tner/roberta-large-wnut2017")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/wnut2017']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: roberta-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 64
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-wnut2017/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
dccuchile/albert-xlarge-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
datasets:
- tner/wnut2017
metrics:
- f1
- precision
- recall
model-index:
- name: tner/deberta-v3-large-wnut2017
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/wnut2017
type: tner/wnut2017
args: tner/wnut2017
metrics:
- name: F1
type: f1
value: 0.5047353760445682
- name: Precision
type: precision
value: 0.63268156424581
- name: Recall
type: recall
value: 0.4198331788693234
- name: F1 (macro)
type: f1_macro
value: 0.4165125500830091
- name: Precision (macro)
type: precision_macro
value: 0.5356144444686111
- name: Recall (macro)
type: recall_macro
value: 0.3573954549633822
- name: F1 (entity span)
type: f1_entity_span
value: 0.6249999999999999
- name: Precision (entity span)
type: precision_entity_span
value: 0.7962697274031564
- name: Recall (entity span)
type: recall_entity_span
value: 0.5143651529193698
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/deberta-v3-large-wnut2017
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the
[tner/wnut2017](https://huggingface.co/datasets/tner/wnut2017) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.5047353760445682
- Precision (micro): 0.63268156424581
- Recall (micro): 0.4198331788693234
- F1 (macro): 0.4165125500830091
- Precision (macro): 0.5356144444686111
- Recall (macro): 0.3573954549633822
The per-entity breakdown of the F1 score on the test set is below:
- corporation: 0.25477707006369427
- group: 0.34309623430962344
- location: 0.6187050359712232
- person: 0.6721763085399448
- product: 0.18579234972677597
- work_of_art: 0.42452830188679247
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.4752384997212858, 0.5329114690850492]
- 95%: [0.46929053844001617, 0.537282841423422]
- F1 (macro):
- 90%: [0.4752384997212858, 0.5329114690850492]
- 95%: [0.46929053844001617, 0.537282841423422]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-wnut2017/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-wnut2017/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below:
```python
from tner import TransformersNER
model = TransformersNER("tner/deberta-v3-large-wnut2017")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/wnut2017']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: microsoft/deberta-v3-large
- crf: False
- max_length: 128
- epoch: 15
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 4
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-wnut2017/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
dccuchile/albert-xlarge-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
datasets:
- tner/conll2003
metrics:
- f1
- precision
- recall
model-index:
- name: tner/roberta-large-conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/conll2003
type: tner/conll2003
args: tner/conll2003
metrics:
- name: F1
type: f1
value: 0.924769027716674
- name: Precision
type: precision
value: 0.9191883855168795
- name: Recall
type: recall
value: 0.9304178470254958
- name: F1 (macro)
type: f1_macro
value: 0.9110950780089749
- name: Precision (macro)
type: precision_macro
value: 0.9030546238754271
- name: Recall (macro)
type: recall_macro
value: 0.9197126371122274
- name: F1 (entity span)
type: f1_entity_span
value: 0.9619852164730729
- name: Precision (entity span)
type: precision_entity_span
value: 0.9562631210636809
- name: Recall (entity span)
type: recall_entity_span
value: 0.9677762039660056
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/roberta-large-conll2003
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
[tner/conll2003](https://huggingface.co/datasets/tner/conll2003) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.924769027716674
- Precision (micro): 0.9191883855168795
- Recall (micro): 0.9304178470254958
- F1 (macro): 0.9110950780089749
- Precision (macro): 0.9030546238754271
- Recall (macro): 0.9197126371122274
The per-entity breakdown of the F1 score on the test set is below:
- location: 0.9390573401380967
- organization: 0.9107142857142857
- other: 0.8247422680412372
- person: 0.9698664181422801
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.9185189408755685, 0.9309806929048586]
- 95%: [0.9174010190551032, 0.9318590917100465]
- F1 (macro):
- 90%: [0.9185189408755685, 0.9309806929048586]
- 95%: [0.9174010190551032, 0.9318590917100465]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-conll2003/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/roberta-large-conll2003/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below:
```python
from tner import TransformersNER
model = TransformersNER("tner/roberta-large-conll2003")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/conll2003']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: roberta-large
- crf: True
- max_length: 128
- epoch: 17
- batch_size: 64
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-conll2003/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
dccuchile/albert-xlarge-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_4_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_4_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7349
- F1: 0.8052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.5378 | 0.7818 |
| 0.5561 | 2.0 | 578 | 0.4835 | 0.8002 |
| 0.5561 | 3.0 | 867 | 0.6401 | 0.7978 |
| 0.2473 | 4.0 | 1156 | 0.8665 | 0.7842 |
| 0.2473 | 5.0 | 1445 | 0.9942 | 0.7965 |
| 0.1002 | 6.0 | 1734 | 1.1535 | 0.8015 |
| 0.0428 | 7.0 | 2023 | 1.2619 | 0.8027 |
| 0.0428 | 8.0 | 2312 | 1.4386 | 0.7990 |
| 0.017 | 9.0 | 2601 | 1.4864 | 0.8039 |
| 0.017 | 10.0 | 2890 | 1.4817 | 0.8015 |
| 0.0145 | 11.0 | 3179 | 1.5205 | 0.8052 |
| 0.0145 | 12.0 | 3468 | 1.6825 | 0.7842 |
| 0.0115 | 13.0 | 3757 | 1.6670 | 0.7990 |
| 0.0083 | 14.0 | 4046 | 1.7283 | 0.7904 |
| 0.0083 | 15.0 | 4335 | 1.6552 | 0.8039 |
| 0.0071 | 16.0 | 4624 | 1.6760 | 0.8076 |
| 0.0071 | 17.0 | 4913 | 1.6973 | 0.7891 |
| 0.0109 | 18.0 | 5202 | 1.6050 | 0.8027 |
| 0.0109 | 19.0 | 5491 | 1.6379 | 0.8126 |
| 0.0037 | 20.0 | 5780 | 1.6936 | 0.8039 |
| 0.0013 | 21.0 | 6069 | 1.7187 | 0.8027 |
| 0.0013 | 22.0 | 6358 | 1.7839 | 0.7965 |
| 0.0015 | 23.0 | 6647 | 1.7551 | 0.8015 |
| 0.0015 | 24.0 | 6936 | 1.7312 | 0.8064 |
| 0.001 | 25.0 | 7225 | 1.7349 | 0.8052 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dccuchile/albert-xxlarge-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
datasets:
- tner/wnut2017
metrics:
- f1
- precision
- recall
model-index:
- name: tner/bertweet-large-wnut2017
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/wnut2017
type: tner/wnut2017
args: tner/wnut2017
metrics:
- name: F1
type: f1
value: 0.5302273987798114
- name: Precision
type: precision
value: 0.6602209944751382
- name: Recall
type: recall
value: 0.44300278035217794
- name: F1 (macro)
type: f1_macro
value: 0.4643459997680019
- name: Precision (macro)
type: precision_macro
value: 0.5792841925426832
- name: Recall (macro)
type: recall_macro
value: 0.3973128655628379
- name: F1 (entity span)
type: f1_entity_span
value: 0.6142697881828317
- name: Precision (entity span)
type: precision_entity_span
value: 0.7706293706293706
- name: Recall (entity span)
type: recall_entity_span
value: 0.5106580166821131
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/bertweet-large-wnut2017
This model is a fine-tuned version of [vinai/bertweet-large](https://huggingface.co/vinai/bertweet-large) on the
[tner/wnut2017](https://huggingface.co/datasets/tner/wnut2017) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.5302273987798114
- Precision (micro): 0.6602209944751382
- Recall (micro): 0.44300278035217794
- F1 (macro): 0.4643459997680019
- Precision (macro): 0.5792841925426832
- Recall (macro): 0.3973128655628379
The per-entity breakdown of the F1 score on the test set is below:
- corporation: 0.3902439024390244
- group: 0.37130801687763715
- location: 0.6595744680851063
- person: 0.65474552957359
- product: 0.2857142857142857
- work_of_art: 0.4244897959183674
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.5002577319587629, 0.5587481638299118]
- 95%: [0.4947163587619384, 0.5629013150503995]
- F1 (macro):
- 90%: [0.5002577319587629, 0.5587481638299118]
- 95%: [0.4947163587619384, 0.5629013150503995]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bertweet-large-wnut2017/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/bertweet-large-wnut2017/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below:
```python
from tner import TransformersNER
model = TransformersNER("tner/bertweet-large-wnut2017")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/wnut2017']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: vinai/bertweet-large
- crf: False
- max_length: 128
- epoch: 15
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 4
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bertweet-large-wnut2017/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
dccuchile/albert-xxlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
datasets:
- tner/wnut2017
metrics:
- f1
- precision
- recall
model-index:
- name: tner/deberta-large-wnut2017
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/wnut2017
type: tner/wnut2017
args: tner/wnut2017
metrics:
- name: F1
type: f1
value: 0.5105386416861827
- name: Precision
type: precision
value: 0.6931637519872814
- name: Recall
type: recall
value: 0.4040778498609824
- name: F1 (macro)
type: f1_macro
value: 0.4263428845085451
- name: Precision (macro)
type: precision_macro
value: 0.6003185137596864
- name: Recall (macro)
type: recall_macro
value: 0.35195768262641947
- name: F1 (entity span)
type: f1_entity_span
value: 0.5936768149882904
- name: Precision (entity span)
type: precision_entity_span
value: 0.8060413354531002
- name: Recall (entity span)
type: recall_entity_span
value: 0.46987951807228917
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/deberta-large-wnut2017
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the
[tner/wnut2017](https://huggingface.co/datasets/tner/wnut2017) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.5105386416861827
- Precision (micro): 0.6931637519872814
- Recall (micro): 0.4040778498609824
- F1 (macro): 0.4263428845085451
- Precision (macro): 0.6003185137596864
- Recall (macro): 0.35195768262641947
The per-entity breakdown of the F1 score on the test set is below:
- corporation: 0.3503649635036496
- group: 0.3148148148148148
- location: 0.6029411764705882
- person: 0.6628895184135977
- product: 0.1951219512195122
- work_of_art: 0.431924882629108
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.47970650356554456, 0.5385161869734422]
- 95%: [0.47475901512925966, 0.5430870496346687]
- F1 (macro):
- 90%: [0.47970650356554456, 0.5385161869734422]
- 95%: [0.47475901512925966, 0.5430870496346687]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below:
```python
from tner import TransformersNER
model = TransformersNER("tner/deberta-large-wnut2017")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/wnut2017']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: microsoft/deberta-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 4
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-large-wnut2017/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
dccuchile/albert-xxlarge-spanish-finetuned-pos | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
datasets:
- tner/conll2003
metrics:
- f1
- precision
- recall
model-index:
- name: tner/deberta-v3-large-conll2003
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/conll2003
type: tner/conll2003
args: tner/conll2003
metrics:
- name: F1
type: f1
value: 0.9222388190844389
- name: Precision
type: precision
value: 0.9154020582592011
- name: Recall
type: recall
value: 0.9291784702549575
- name: F1 (macro)
type: f1_macro
value: 0.9043961692086329
- name: Precision (macro)
type: precision_macro
value: 0.8959854326377331
- name: Recall (macro)
type: recall_macro
value: 0.9135442454672595
- name: F1 (entity span)
type: f1_entity_span
value: 0.960570322126386
- name: Precision (entity span)
type: precision_entity_span
value: 0.9550227511375569
- name: Recall (entity span)
type: recall_entity_span
value: 0.9661827195467422
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/deberta-v3-large-conll2003
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the
[tner/conll2003](https://huggingface.co/datasets/tner/conll2003) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.9222388190844389
- Precision (micro): 0.9154020582592011
- Recall (micro): 0.9291784702549575
- F1 (macro): 0.9043961692086329
- Precision (macro): 0.8959854326377331
- Recall (macro): 0.9135442454672595
The per-entity breakdown of the F1 score on the test set is below:
- location: 0.9407496977025392
- organization: 0.9115486335586247
- other: 0.7920110192837466
- person: 0.9732753262896209
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.9157944386463721, 0.9286928993636353]
- 95%: [0.9146558483630953, 0.9297919809412201]
- F1 (macro):
- 90%: [0.9157944386463721, 0.9286928993636353]
- 95%: [0.9146558483630953, 0.9297919809412201]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-conll2003/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-conll2003/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below:
```python
from tner import TransformersNER
model = TransformersNER("tner/deberta-v3-large-conll2003")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/conll2003']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: microsoft/deberta-v3-large
- crf: False
- max_length: 128
- epoch: 15
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 4
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-conll2003/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
dccuchile/albert-xxlarge-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
datasets:
- tner/bc5cdr
metrics:
- f1
- precision
- recall
model-index:
- name: tner/deberta-v3-large-bc5cdr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/bc5cdr
type: tner/bc5cdr
args: tner/bc5cdr
metrics:
- name: F1
type: f1
value: 0.8902493653874869
- name: Precision
type: precision
value: 0.8697724178175452
- name: Recall
type: recall
value: 0.9117137322866755
- name: F1 (macro)
type: f1_macro
value: 0.8863403908610603
- name: Precision (macro)
type: precision_macro
value: 0.8657302393432342
- name: Recall (macro)
type: recall_macro
value: 0.9080747413030301
- name: F1 (entity span)
type: f1_entity_span
value: 0.8929371360310587
- name: Precision (entity span)
type: precision_entity_span
value: 0.8723983660766388
- name: Recall (entity span)
type: recall_entity_span
value: 0.9144663064532572
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/deberta-v3-large-bc5cdr
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the
[tner/bc5cdr](https://huggingface.co/datasets/tner/bc5cdr) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.8902493653874869
- Precision (micro): 0.8697724178175452
- Recall (micro): 0.9117137322866755
- F1 (macro): 0.8863403908610603
- Precision (macro): 0.8657302393432342
- Recall (macro): 0.9080747413030301
The per-entity breakdown of the F1 score on the test set is below:
- chemical: 0.9298502009499452
- disease: 0.8428305807721753
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.885162383660078, 0.8951239957151518]
- 95%: [0.8838793313408008, 0.8959517574197015]
- F1 (macro):
- 90%: [0.885162383660078, 0.8951239957151518]
- 95%: [0.8838793313408008, 0.8959517574197015]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-bc5cdr/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-bc5cdr/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate the model as below:
```python
from tner import TransformersNER
model = TransformersNER("tner/deberta-v3-large-bc5cdr")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/bc5cdr']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: microsoft/deberta-v3-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 4
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.1
- max_grad_norm: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-bc5cdr/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-xnli | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Bio_ClinicalBERT_fold_10_ternary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT_fold_10_ternary_v1
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0706
- F1: 0.7748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.6097 | 0.7290 |
| 0.555 | 2.0 | 580 | 0.6106 | 0.7649 |
| 0.555 | 3.0 | 870 | 0.6608 | 0.7847 |
| 0.2449 | 4.0 | 1160 | 0.8894 | 0.7809 |
| 0.2449 | 5.0 | 1450 | 1.1049 | 0.7760 |
| 0.1055 | 6.0 | 1740 | 1.2951 | 0.7884 |
| 0.0338 | 7.0 | 2030 | 1.4809 | 0.7760 |
| 0.0338 | 8.0 | 2320 | 1.4751 | 0.7698 |
| 0.0225 | 9.0 | 2610 | 1.6648 | 0.7809 |
| 0.0225 | 10.0 | 2900 | 1.7174 | 0.7772 |
| 0.006 | 11.0 | 3190 | 1.7872 | 0.7735 |
| 0.006 | 12.0 | 3480 | 1.7803 | 0.7748 |
| 0.0161 | 13.0 | 3770 | 1.9302 | 0.7735 |
| 0.0005 | 14.0 | 4060 | 1.9853 | 0.7748 |
| 0.0005 | 15.0 | 4350 | 2.0043 | 0.7735 |
| 0.0062 | 16.0 | 4640 | 1.9969 | 0.7760 |
| 0.0062 | 17.0 | 4930 | 2.0173 | 0.7760 |
| 0.0068 | 18.0 | 5220 | 1.9891 | 0.7785 |
| 0.0034 | 19.0 | 5510 | 1.9951 | 0.7797 |
| 0.0034 | 20.0 | 5800 | 2.0283 | 0.7748 |
| 0.0049 | 21.0 | 6090 | 1.9985 | 0.7834 |
| 0.0049 | 22.0 | 6380 | 2.0131 | 0.7760 |
| 0.0011 | 23.0 | 6670 | 2.0526 | 0.7748 |
| 0.0011 | 24.0 | 6960 | 2.0662 | 0.7748 |
| 0.001 | 25.0 | 7250 | 2.0706 | 0.7748 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dccuchile/distilbert-base-spanish-uncased-finetuned-ner | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | Note: this model is deprecated, please use https://huggingface.co/songlab/gpn-brassicales |
dccuchile/distilbert-base-spanish-uncased-finetuned-pawsx | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
tags:
- conversational
---
# Harry Potter DialoGPT model |
dccuchile/distilbert-base-spanish-uncased-finetuned-pos | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-010099_1
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.3454
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099_1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8040
- Bleu: 7.3454
- Gen Len: 44.8149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="rebolforces/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
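For a self-contained variant that does not rely on the course notebook's `load_from_hub`/`evaluate_agent` helpers, a greedy-rollout sketch could look like the one below. It assumes the uploaded file is a plain pickle of the dict shown above and uses the classic Gym API (`reset()` returning only the state, `step()` returning a 4-tuple); adjust if your Gym version differs.
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the model dict pushed to this repo (assumed plain pickle).
path = hf_hub_download(repo_id="rebolforces/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
qtable = model["qtable"]

# Greedy rollout: always pick the action with the highest Q-value for the current state.
state = env.reset()
total_reward = 0
for _ in range(model["max_steps"]):
    action = int(np.argmax(qtable[state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print("Episode return:", total_reward)
```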
|
dccuchile/distilbert-base-spanish-uncased-finetuned-xnli | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-1b0000
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 1.1101
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-1b0000
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7760
- Bleu: 1.1101
- Gen Len: 99.5898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
dccuchile/distilbert-base-spanish-uncased | [
"pytorch",
"distilbert",
"fill-mask",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 670 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-010099_8
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 6.231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099_8
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9641
- Bleu: 6.231
- Gen Len: 50.1911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chae/botman | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chaewon/mmnt_decoder_en | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-08-10T05:05:11Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-test
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.5082
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8241
- Bleu: 7.5082
- Gen Len: 44.0405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CharlieChen/feedback-bigbird | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T06:34:09Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
CheonggyeMountain-Sherpa/kogpt-trinity-poem | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
language: ga
datasets:
- common_voice
- living-audio-Irish
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- ga-IE
- speech
- Irish
- Gaelic
model-index:
- name: Wav2vec 2.0 large 300m XLS-R
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 10.0
type: common_voice
args: ga-IE
metrics:
- name: Test WER
type: wer
value: 25.94
---
# Irish-Gaelic Automatic Speech Recognition
This is a model for Irish ASR. It was trained on the Common Voice dataset (whose code for the Irish language is ga-IE) and a living Irish audio dataset. All validated Common Voice clips and all living-audio clips were pooled; after a random train-test split, 90% of the data (5156 utterances) was used for training and the remaining 10% (579 utterances) for testing.
The model was fine-tuned from wav2vec2-large-xls-r-300m and achieves a WER of 25.94% on the test set.
### How to use
Example of transcribing a Common Voice audio clip from the invalidated set, using a GPU if available. The model expects 16 kHz audio.
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import librosa
import torch

# Use a GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"

model = Wav2Vec2ForCTC.from_pretrained("Aditya3107/wav2vec2-large-xls-r-1b-ga-ie").to(device)
processor = Wav2Vec2Processor.from_pretrained("Aditya3107/wav2vec2-large-xls-r-1b-ga-ie")

# Read the audio clip, resampled to the 16 kHz rate the model expects
audio, rate = librosa.load("common-voice-irish/common_voice/cv-corpus-10.0-2022-07-04/ga-IE/clips/common_voice_ga-IE_1818627.mp3", sr = 16000)

# Convert the waveform into model input features
input_values = processor(audio, sampling_rate=16_000, return_tensors = "pt", padding="longest").input_values.to(device)

# Compute the logits (non-normalized predictions over the character vocabulary)
with torch.no_grad():
    logits = model(input_values).logits

# Greedy decoding: take the most likely token id at each frame
prediction = torch.argmax(logits, dim = -1)

# Decode the predicted ids into the final transcription
transcription = processor.batch_decode(prediction)[0]
print(transcription)
```
### Results
Examples of transcribed audio clips scored with SCLITE.
```
Speaker sentences 0: #utts: 1
id: (common_voice_ga-IE_17401296.mp3)
Scores: (#C #S #D #I) 4 1 0 0
Attributes: Case_sensitve
REF: an bhfuil cóta bán óir
HYP: an bhfuil cóta bán air
Eval: S
id: (common_voice_ga-IE_17410244.mp3)
Scores: (#C #S #D #I) 3 1 0 2
Attributes: Case_sensitve
REF: *** ** an bud é sin
HYP: cad é an rud é sin
Eval: I I S
id: (common_voice_ga-IE_17410257.mp3)
Scores: (#C #S #D #I) 9 2 1 2
Attributes: Case_sensitve
REF: i gabhaim buíochas libh a chairde ******* ** támindéagtstruth le tuilleadh uaibh ar baá
HYP: * gabhaim buíochas libh a chairde táimid ag tsnúth le tuilleadh uaibh ar ball
Eval: D I I S S
id: (common_voice_ga-IE_17410401.mp3)
Scores: (#C #S #D #I) 6 1 0 0
Attributes: Case_sensitve
REF: níl ach tá peann ina phóca uige
HYP: níl ach tá peann ina phóca aige
Eval: S
id: (common_voice_ga-IE_17410403.mp3)
Scores: (#C #S #D #I) 5 1 0 1
Attributes: Case_sensitve
REF: agus *** cadé an dath atá air
HYP: agus cad é an dath atá air
Eval: I S
id: (common_voice_ga-IE_17410412.mp3)
Scores: (#C #S #D #I) 6 2 0 0
Attributes: Case_sensitve
REF: is lá é seo chun ceiliúradh a dhéan
HYP: is lá é seo chun céiliúradh a dhéanamh
Eval: S S
id: (common_voice_ga-IE_17444712.mp3)
Scores: (#C #S #D #I) 4 6 0 0
Attributes: Case_sensitve
REF: don chathaoileach mirín de brom don stiúrdhóirat liam ón maoladha
HYP: don chathaoirleach máirín de brún don stiúrthóir liam ó maolaodha
Eval: S S S S S S
id: (common_voice_ga-IE_17449454.mp3)
Scores: (#C #S #D #I) 4 0 0 0
Attributes: Case_sensitve
REF: ceacht a trí déag
HYP: ceacht a trí déag
Eval:
```
### Future Tasks
A KenLM language model will be added if a good resource of Irish text is found.
### Citation
If you want to cite this model you can use this:
```
@MISC {,
author = "Aditya Parikh",
title = "Finetuned XLS-R model for Irish (Ga-IE) language for Automatic Speech Recognition",
howpublished = "{\url{https://huggingface.co/Aditya3107/wav2vec2-large-xls-r-1b-ga-ie}}",
month = "aug",
year = "2022"
}
``` |
CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper | [
"ko",
"gpt2",
"license:cc-by-nc-sa-4.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9195
- name: F1
type: f1
value: 0.9194694114253713
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2308
- Accuracy: 0.9195
- F1: 0.9195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8402 | 1.0 | 250 | 0.3406 | 0.8945 | 0.8907 |
| 0.258 | 2.0 | 500 | 0.2308 | 0.9195 | 0.9195 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chertilasus/main | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T08:28:09Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: XLM-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# XLM-roberta-finetuned
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Tokenizers 0.12.1
|
Chester/traffic-rec | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T08:29:27Z | ---
tags:
- conversational
---
# Rick and Morty DialoGPT Model |
Chikita1/www_stash_stock | [
"license:bsd-3-clause-clear"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- unconditional-image-generation
license: apache-2.0
---
# Model info
Project [fbanimegan](https://github.com/SkyTNT/fbanimegan)
### fbanime.pkl
StyleGan2 model trained with the official [StyleGan3](https://github.com/NVlabs/stylegan3) code base.
I modified the code (networks_stylegan2.py and dataset.py) to support non-square resolutions.
FID: 1.4
### fbanime_fp32.pkl
fp32 version of fbanime.pkl
Note: the fp16 version (fbanime.pkl) only works on GPU, while the fp32 version works on both GPU and CPU.
### g_mapping.onnx
onnx format mapping network of fbanime_fp32.pkl
### g_synthesis.onnx
onnx format synthesis network of fbanime_fp32.pkl
### encoder.onnx
e4e model trained with [encoder4editing-stylegan3](https://github.com/yj7082126/encoder4editing-stylegan3).
I added support for the official StyleGan2 model and changed the backbone to ResNet-34 in [restyle-encoder](https://github.com/yuval-alaluf/restyle-encoder).
### waifu_dect.onnx
YOLOv5 model trained with official [YOLOv5](https://github.com/ultralytics/yolov5)
# Usage
see [demo](https://huggingface.co/spaces/skytnt/full-body-anime-gan/blob/main/app.py)
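For a quick local check outside the Space, a minimal `onnxruntime` sketch such as the one below loads the exported generator networks, prints their input signatures, and runs them on random inputs. The exact input names, shapes, and the z → w → image wiring are assumptions; the demo linked above is the authoritative pipeline.
```python
import numpy as np
import onnxruntime as ort

# Load the exported generator networks (download g_mapping.onnx / g_synthesis.onnx from this repo)
g_mapping = ort.InferenceSession("g_mapping.onnx")
g_synthesis = ort.InferenceSession("g_synthesis.onnx")

def random_feed(sess):
    # Fill every declared input with random float32 values, treating dynamic dims as 1
    feed = {}
    for inp in sess.get_inputs():
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        feed[inp.name] = np.random.randn(*shape).astype(np.float32)
    return feed

for name, sess in [("g_mapping", g_mapping), ("g_synthesis", g_synthesis)]:
    print(name, "inputs:", [(i.name, i.shape) for i in sess.get_inputs()])
    outputs = sess.run(None, random_feed(sess))
    print(name, "output shapes:", [o.shape for o in outputs])

# In the real pipeline the mapping network's w output is fed into g_synthesis to
# produce an NCHW image tensor; see the demo app linked above for the full wiring.
```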
# Dataset
[fbanimehq](https://huggingface.co/datasets/skytnt/fbanimehq) v2.0
|
Ching/negation_detector | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-08-10T08:46:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Precision: 0.9288
- Recall: 0.9388
- F1: 0.9338
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2456 | 1.0 | 878 | 0.0683 | 0.9151 | 0.9223 | 0.9187 | 0.9814 |
| 0.0542 | 2.0 | 1756 | 0.0609 | 0.9227 | 0.9335 | 0.9281 | 0.9829 |
| 0.0293 | 3.0 | 2634 | 0.0614 | 0.9288 | 0.9388 | 0.9338 | 0.9840 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
Chiuchiyin/DialoGPT-small-Donald | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- source_data_nlp
metrics:
- precision
- recall
- f1
model-index:
- name: sd-geneprod-roles-v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data_nlp
type: source_data_nlp
args: GENEPROD_ROLES
metrics:
- name: Precision
type: precision
value: 0.9227577212638568
- name: Recall
type: recall
value: 0.9288143683990692
- name: F1
type: f1
value: 0.9257761389318425
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sd-geneprod-roles-v2
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0136
- Accuracy Score: 0.9950
- Precision: 0.9228
- Recall: 0.9288
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.014 | 1.0 | 1569 | 0.0136 | 0.9950 | 0.9228 | 0.9288 | 0.9258 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.17.0
- Tokenizers 0.12.1
|
ChoboAvenger/DialoGPT-small-DocBot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T09:09:39Z | Used for a regression test addressing [this issue](https://github.com/huggingface/huggingface_hub/issues/981). |
ChrisP/xlm-roberta-base-finetuned-marc-en | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T09:26:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9371173258315406
- name: Recall
type: recall
value: 0.9530461124200605
- name: F1
type: f1
value: 0.945014601585315
- name: Accuracy
type: accuracy
value: 0.9865338199799847
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9371
- Recall: 0.9530
- F1: 0.9450
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0883 | 1.0 | 1756 | 0.0690 | 0.9181 | 0.9320 | 0.9250 | 0.9821 |
| 0.0334 | 2.0 | 3512 | 0.0623 | 0.9279 | 0.9504 | 0.9390 | 0.9858 |
| 0.0189 | 3.0 | 5268 | 0.0599 | 0.9371 | 0.9530 | 0.9450 | 0.9865 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ChrisVCB/DialoGPT-medium-cmjs | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-10T09:35:23Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
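Until that snippet is filled in, the following is a minimal sketch of how one might sample from this pipeline with a recent 🤗 Diffusers release. The repository id is inferred from the TensorBoard link below and the sampling settings are assumptions, not the authors' reference code.
```python
from diffusers import DDPMPipeline

# Load the trained unconditional pipeline from the Hub (repo id assumed from the
# TensorBoard link in this card).
pipeline = DDPMPipeline.from_pretrained("mvicentel/ddpm-butterflies-128")

# Run full DDPM sampling and take the first generated 128x128 image (a PIL image).
image = pipeline(batch_size=1).images[0]
image.save("butterfly_sample.png")
```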
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
### Training results
📈 [TensorBoard logs](https://huggingface.co/mvicentel/ddpm-butterflies-128/tensorboard?#scalars)
|
Chuah/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-08-10T10:03:35Z | ---
datasets:
- tner/tweebank_ner
metrics:
- f1
- precision
- recall
model-index:
- name: tner/roberta-large-tweebank-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/tweebank_ner
type: tner/tweebank_ner
args: tner/tweebank_ner
metrics:
- name: F1
type: f1
value: 0.7439490445859872
- name: Precision
type: precision
value: 0.7121951219512195
- name: Recall
type: recall
value: 0.7786666666666666
- name: F1 (macro)
type: f1_macro
value: 0.7354319457314183
- name: Precision (macro)
type: precision_macro
value: 0.712928566565599
- name: Recall (macro)
type: recall_macro
value: 0.7620465365030582
- name: F1 (entity span)
type: f1_entity_span
value: 0.8178343949044585
- name: Precision (entity span)
type: precision_entity_span
value: 0.7829268292682927
- name: Recall (entity span)
type: recall_entity_span
value: 0.856
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/roberta-large-tweebank-ner
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
[tner/tweebank_ner](https://huggingface.co/datasets/tner/tweebank_ner) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.7439490445859872
- Precision (micro): 0.7121951219512195
- Recall (micro): 0.7786666666666666
- F1 (macro): 0.7354319457314183
- Precision (macro): 0.712928566565599
- Recall (macro): 0.7620465365030582
The per-entity breakdown of the F1 score on the test set are below:
- location: 0.7782805429864253
- organization: 0.7377049180327869
- other: 0.5520581113801453
- person: 0.8736842105263157
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.7156413818791614, 0.771698046498159]
- 95%: [0.7063867669973017, 0.7763088810979543]
- F1 (macro):
- 90%: [0.7156413818791614, 0.771698046498159]
- 95%: [0.7063867669973017, 0.7763088810979543]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-tweebank-ner/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/roberta-large-tweebank-ner/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate model as below.
```python
from tner import TransformersNER
model = TransformersNER("tner/roberta-large-tweebank-ner")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
It can also be used via the transformers library, but this is not recommended as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweebank_ner']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: roberta-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 64
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-tweebank-ner/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
ChukSamuels/DialoGPT-small-Dr.FauciBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
datasets:
- tner/tweebank_ner
metrics:
- f1
- precision
- recall
model-index:
- name: tner/deberta-v3-large-tweebank-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/tweebank_ner
type: tner/tweebank_ner
args: tner/tweebank_ner
metrics:
- name: F1
type: f1
value: 0.7253474520185308
- name: Precision
type: precision
value: 0.7201051248357424
- name: Recall
type: recall
value: 0.7306666666666667
- name: F1 (macro)
type: f1_macro
value: 0.701874697798745
- name: Precision (macro)
type: precision_macro
value: 0.7043005470796733
- name: Recall (macro)
type: recall_macro
value: 0.706915721861374
- name: F1 (entity span)
type: f1_entity_span
value: 0.8178343949044585
- name: Precision (entity span)
type: precision_entity_span
value: 0.7829268292682927
- name: Recall (entity span)
type: recall_entity_span
value: 0.856
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/deberta-v3-large-tweebank-ner
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the
[tner/tweebank_ner](https://huggingface.co/datasets/tner/tweebank_ner) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.7253474520185308
- Precision (micro): 0.7201051248357424
- Recall (micro): 0.7306666666666667
- F1 (macro): 0.701874697798745
- Precision (macro): 0.7043005470796733
- Recall (macro): 0.706915721861374
The per-entity breakdown of the F1 score on the test set are below:
- location: 0.7289719626168224
- organization: 0.7040816326530612
- other: 0.5182926829268293
- person: 0.856152512998267
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.6978100031831928, 0.7529703029130037]
- 95%: [0.691700704571692, 0.7582901338971108]
- F1 (macro):
- 90%: [0.6978100031831928, 0.7529703029130037]
- 95%: [0.691700704571692, 0.7582901338971108]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-tweebank-ner/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-tweebank-ner/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate model as below.
```python
from tner import TransformersNER
model = TransformersNER("tner/deberta-v3-large-tweebank-ner")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
It can also be used via the transformers library, but this is not recommended as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/tweebank_ner']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: microsoft/deberta-v3-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 4
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-tweebank-ner/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
Chun/DialoGPT-large-dailydialog | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_1_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_1_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7812
- F1: 0.8161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.3938 | 0.8019 |
| 0.4444 | 2.0 | 576 | 0.3945 | 0.8086 |
| 0.4444 | 3.0 | 864 | 0.4738 | 0.8245 |
| 0.2504 | 4.0 | 1152 | 0.6641 | 0.8123 |
| 0.2504 | 5.0 | 1440 | 0.8714 | 0.7863 |
| 0.159 | 6.0 | 1728 | 0.9177 | 0.8179 |
| 0.0832 | 7.0 | 2016 | 1.1719 | 0.8129 |
| 0.0832 | 8.0 | 2304 | 1.2858 | 0.8146 |
| 0.046 | 9.0 | 2592 | 1.2557 | 0.8181 |
| 0.046 | 10.0 | 2880 | 1.3332 | 0.8033 |
| 0.0313 | 11.0 | 3168 | 1.2840 | 0.8112 |
| 0.0313 | 12.0 | 3456 | 1.4164 | 0.8175 |
| 0.0246 | 13.0 | 3744 | 1.3709 | 0.8143 |
| 0.0173 | 14.0 | 4032 | 1.4319 | 0.8179 |
| 0.0173 | 15.0 | 4320 | 1.5706 | 0.8195 |
| 0.0138 | 16.0 | 4608 | 1.6072 | 0.8230 |
| 0.0138 | 17.0 | 4896 | 1.7454 | 0.8192 |
| 0.0016 | 18.0 | 5184 | 1.7281 | 0.8099 |
| 0.0016 | 19.0 | 5472 | 1.7692 | 0.8151 |
| 0.0088 | 20.0 | 5760 | 1.7376 | 0.8132 |
| 0.0081 | 21.0 | 6048 | 1.7715 | 0.8086 |
| 0.0081 | 22.0 | 6336 | 1.7400 | 0.8152 |
| 0.0053 | 23.0 | 6624 | 1.7845 | 0.8099 |
| 0.0053 | 24.0 | 6912 | 1.8096 | 0.8150 |
| 0.0062 | 25.0 | 7200 | 1.7812 | 0.8161 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chun/w-en2zh-hsk | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-08-10T10:27:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- source_data_nlp
metrics:
- precision
- recall
- f1
model-index:
- name: sd-panelization-v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data_nlp
type: source_data_nlp
args: PANELIZATION
metrics:
- name: Precision
type: precision
value: 0.9134245120169964
- name: Recall
type: recall
value: 0.9494824016563147
- name: F1
type: f1
value: 0.9311044937736871
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sd-panelization-v2
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0050
- Accuracy Score: 0.9982
- Precision: 0.9134
- Recall: 0.9495
- F1: 0.9311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0048 | 1.0 | 431 | 0.0050 | 0.9982 | 0.9134 | 0.9495 | 0.9311 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.17.0
- Tokenizers 0.12.1
|
Chun/w-en2zh-otm | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-10T10:39:14Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_2_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_2_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8748
- F1: 0.8066
## Model description
More information needed
## Intended uses & limitations
More information needed
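As a rough sketch only (the card does not document the task or the label names), the checkpoint could be queried like any binary sequence classifier; the repository id below is a placeholder.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repository id -- substitute the actual location of this checkpoint.
model_id = "<namespace>/xlnet-base-cased_fold_2_binary_v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label meanings are not specified in this card
```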
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4803 | 0.7433 |
| 0.434 | 2.0 | 580 | 0.4385 | 0.8099 |
| 0.434 | 3.0 | 870 | 0.5382 | 0.8078 |
| 0.254 | 4.0 | 1160 | 0.6944 | 0.7982 |
| 0.254 | 5.0 | 1450 | 0.9908 | 0.8058 |
| 0.1479 | 6.0 | 1740 | 1.1090 | 0.8062 |
| 0.0874 | 7.0 | 2030 | 1.2405 | 0.8042 |
| 0.0874 | 8.0 | 2320 | 1.3174 | 0.8012 |
| 0.0505 | 9.0 | 2610 | 1.5211 | 0.7909 |
| 0.0505 | 10.0 | 2900 | 1.4014 | 0.8126 |
| 0.0301 | 11.0 | 3190 | 1.4798 | 0.8047 |
| 0.0301 | 12.0 | 3480 | 1.4668 | 0.8091 |
| 0.0279 | 13.0 | 3770 | 1.5286 | 0.8075 |
| 0.0233 | 14.0 | 4060 | 1.6752 | 0.8006 |
| 0.0233 | 15.0 | 4350 | 1.5265 | 0.8132 |
| 0.019 | 16.0 | 4640 | 1.6440 | 0.7949 |
| 0.019 | 17.0 | 4930 | 1.7471 | 0.8097 |
| 0.0096 | 18.0 | 5220 | 1.7329 | 0.8121 |
| 0.0075 | 19.0 | 5510 | 1.7472 | 0.8191 |
| 0.0075 | 20.0 | 5800 | 1.8043 | 0.8161 |
| 0.0052 | 21.0 | 6090 | 1.8102 | 0.8141 |
| 0.0052 | 22.0 | 6380 | 1.7944 | 0.8116 |
| 0.0044 | 23.0 | 6670 | 1.8211 | 0.8141 |
| 0.0044 | 24.0 | 6960 | 1.8741 | 0.8066 |
| 0.0046 | 25.0 | 7250 | 1.8748 | 0.8066 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chun/w-zh2en-mto | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-10T11:04:18Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 340.50 +/- 183.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sofiaoliveira -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sofiaoliveira
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.05),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 1000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Chungu424/DATA | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T11:09:02Z | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- cifar10
---
# ConvNext-tiny-finetuned-cifar10 (tiny-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
ConvNeXT-tiny fine-tuned on the CIFAR-10 dataset, which has ten classes.
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
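A minimal inference sketch (not part of the original card): it assumes the checkpoint keeps the standard ConvNeXT image-classification head with the ten CIFAR-10 labels in `id2label`; the repository id and image path are placeholders.
```python
from PIL import Image
import torch
from transformers import AutoFeatureExtractor, ConvNextForImageClassification

# Placeholder repository id -- substitute the actual location of this checkpoint.
model_id = "<namespace>/convnext-tiny-finetuned-cifar10"

feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = ConvNextForImageClassification.from_pretrained(model_id)

image = Image.open("example.png")  # placeholder image path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```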
|
Chungu424/qazwsx | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_3_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_3_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8649
- F1: 0.8044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.4483 | 0.8000 |
| 0.4228 | 2.0 | 578 | 0.4264 | 0.8040 |
| 0.4228 | 3.0 | 867 | 0.5341 | 0.8056 |
| 0.2409 | 4.0 | 1156 | 0.9077 | 0.8103 |
| 0.2409 | 5.0 | 1445 | 1.1069 | 0.7889 |
| 0.1386 | 6.0 | 1734 | 1.0288 | 0.8093 |
| 0.0817 | 7.0 | 2023 | 1.2477 | 0.8049 |
| 0.0817 | 8.0 | 2312 | 1.5915 | 0.7872 |
| 0.0465 | 9.0 | 2601 | 1.5323 | 0.8035 |
| 0.0465 | 10.0 | 2890 | 1.4351 | 0.7989 |
| 0.0376 | 11.0 | 3179 | 1.4639 | 0.7916 |
| 0.0376 | 12.0 | 3468 | 1.6027 | 0.7956 |
| 0.0234 | 13.0 | 3757 | 1.7860 | 0.7931 |
| 0.0109 | 14.0 | 4046 | 1.8567 | 0.7934 |
| 0.0109 | 15.0 | 4335 | 1.8294 | 0.8053 |
| 0.0115 | 16.0 | 4624 | 1.7799 | 0.7971 |
| 0.0115 | 17.0 | 4913 | 1.5935 | 0.8000 |
| 0.0142 | 18.0 | 5202 | 1.8136 | 0.8066 |
| 0.0142 | 19.0 | 5491 | 1.7718 | 0.8063 |
| 0.0124 | 20.0 | 5780 | 1.8581 | 0.8053 |
| 0.0083 | 21.0 | 6069 | 1.8523 | 0.8056 |
| 0.0083 | 22.0 | 6358 | 1.8408 | 0.8035 |
| 0.0045 | 23.0 | 6647 | 1.8347 | 0.8040 |
| 0.0045 | 24.0 | 6936 | 1.8683 | 0.8067 |
| 0.0005 | 25.0 | 7225 | 1.8649 | 0.8044 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chungu424/repodata | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ft1500_norm300
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft1500_norm300
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0940
- Mse: 4.3760
- Mae: 1.4084
- R2: 0.4625
- Accuracy: 0.3517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.7424 | 1.0 | 3122 | 1.1071 | 4.4286 | 1.4098 | 0.4561 | 0.3338 |
| 0.5038 | 2.0 | 6244 | 1.1794 | 4.7177 | 1.4140 | 0.4205 | 0.3677 |
| 0.356 | 3.0 | 9366 | 1.0717 | 4.2866 | 1.3852 | 0.4735 | 0.3581 |
| 0.2293 | 4.0 | 12488 | 1.0940 | 4.3760 | 1.4084 | 0.4625 | 0.3517 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Ci/Pai | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T11:27:49Z | ---
license: afl-3.0
---
About:
This model can be used for text summarization.
The dataset on which it was fine-tuned consisted of 10,323 articles.
The data fields:
- "Headline": the title of the article
- "articleBody": the main article content
- "source": the link to the read-more page
The data splits were:
- Train: 8,258
- Validation: 2,065
### How to use with a pipeline
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the fine-tuned T5 summarization checkpoint
tokenizer = AutoTokenizer.from_pretrained("AkashKhamkar/InSumT510k")
model = AutoModelForSeq2SeqLM.from_pretrained("AkashKhamkar/InSumT510k")

# Build a summarization pipeline and summarize an article
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)
summarizer("Text for summarization...", min_length=5, max_length=50)
```
language:
- English
library_name: Pytorch
tags:
- Summarization
- T5-base
- Conditional Modelling
|
Cilan/dalle-knockoff | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_4_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_4_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5724
- F1: 0.8315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.4043 | 0.8009 |
| 0.4373 | 2.0 | 578 | 0.4093 | 0.8260 |
| 0.4373 | 3.0 | 867 | 0.5084 | 0.8206 |
| 0.2707 | 4.0 | 1156 | 0.5945 | 0.8087 |
| 0.2707 | 5.0 | 1445 | 0.6389 | 0.8251 |
| 0.1691 | 6.0 | 1734 | 0.8131 | 0.8156 |
| 0.1012 | 7.0 | 2023 | 0.9865 | 0.8190 |
| 0.1012 | 8.0 | 2312 | 1.1356 | 0.8342 |
| 0.0506 | 9.0 | 2601 | 1.0624 | 0.8369 |
| 0.0506 | 10.0 | 2890 | 1.2604 | 0.8255 |
| 0.0384 | 11.0 | 3179 | 1.2648 | 0.8183 |
| 0.0384 | 12.0 | 3468 | 1.3763 | 0.8158 |
| 0.0318 | 13.0 | 3757 | 1.4966 | 0.8217 |
| 0.0221 | 14.0 | 4046 | 1.3889 | 0.8250 |
| 0.0221 | 15.0 | 4335 | 1.4014 | 0.8284 |
| 0.0145 | 16.0 | 4624 | 1.5321 | 0.8289 |
| 0.0145 | 17.0 | 4913 | 1.4914 | 0.8233 |
| 0.0172 | 18.0 | 5202 | 1.3946 | 0.8314 |
| 0.0172 | 19.0 | 5491 | 1.5032 | 0.8269 |
| 0.0135 | 20.0 | 5780 | 1.5111 | 0.8328 |
| 0.0087 | 21.0 | 6069 | 1.4899 | 0.8318 |
| 0.0087 | 22.0 | 6358 | 1.5562 | 0.8311 |
| 0.0061 | 23.0 | 6647 | 1.5384 | 0.8327 |
| 0.0061 | 24.0 | 6936 | 1.5798 | 0.8304 |
| 0.0052 | 25.0 | 7225 | 1.5724 | 0.8315 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Cinnamon/electra-small-japanese-discriminator | [
"pytorch",
"electra",
"pretraining",
"ja",
"transformers",
"license:apache-2.0"
] | null | {
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 419 | 2022-08-10T11:56:42Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- metrics:
- type: mean_reward
value: 17.60 +/- 26.37
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Ciruzzo/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- vision
- zero-shot-image-classification
- endpoints-template
library_name: generic
---
# Fork of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) for a `zero-shot-image-classification` Inference Endpoint.
This repository implements a `custom` task for `zero-shot-image-classification` for 🤗 Inference Endpoints. The code for the customized pipeline is in the [pipeline.py](https://huggingface.co/philschmid/clip-zero-shot-image-classification/blob/main/pipeline.py).
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `pipeline.py` file is used. -> _double check that it is selected_
### Expected request payload
```json
{
"image": "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC....", // base64 image as bytes
"candiates":["sea","palace","car","ship"]
}
```
Below is an example of how to run a request using Python and `requests`.
## Run Request
1. Prepare an image.
```bash
!wget https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
```
2. Run the request
```python
import json
from typing import List
import requests as r
import base64
ENDPOINT_URL = ""
HF_TOKEN = ""
def predict(path_to_image: str = None, candiates: List[str] = None):
with open(path_to_image, "rb") as i:
b64 = base64.b64encode(i.read())
payload = {"inputs": {"image": b64.decode("utf-8"), "candiates": candiates}}
response = r.post(
ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
)
return response.json()
prediction = predict(
path_to_image="palace.jpg", candiates=["sea", "palace", "car", "ship"]
)
```
expected output
```python
[{'label': 'palace', 'score': 0.9996134638786316},
{'label': 'car', 'score': 0.0002602009626571089},
{'label': 'ship', 'score': 0.00011758189066313207},
{'label': 'sea', 'score': 8.666840585647151e-06}]
```
|
Ciruzzo/DialoGPT-small-hattypotter | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T12:09:25Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_5_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_5_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7395
- F1: 0.8206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4246 | 0.8154 |
| 0.4211 | 2.0 | 576 | 0.5181 | 0.8063 |
| 0.4211 | 3.0 | 864 | 0.4939 | 0.8149 |
| 0.2483 | 4.0 | 1152 | 0.6181 | 0.8227 |
| 0.2483 | 5.0 | 1440 | 0.9251 | 0.8006 |
| 0.1512 | 6.0 | 1728 | 0.9639 | 0.8082 |
| 0.0858 | 7.0 | 2016 | 1.1315 | 0.8074 |
| 0.0858 | 8.0 | 2304 | 1.1322 | 0.8303 |
| 0.053 | 9.0 | 2592 | 1.3171 | 0.8017 |
| 0.053 | 10.0 | 2880 | 1.3729 | 0.8100 |
| 0.0325 | 11.0 | 3168 | 1.2708 | 0.8252 |
| 0.0325 | 12.0 | 3456 | 1.5105 | 0.8242 |
| 0.0203 | 13.0 | 3744 | 1.4902 | 0.8233 |
| 0.0179 | 14.0 | 4032 | 1.5874 | 0.8194 |
| 0.0179 | 15.0 | 4320 | 1.5933 | 0.8135 |
| 0.0174 | 16.0 | 4608 | 1.5908 | 0.8088 |
| 0.0174 | 17.0 | 4896 | 1.5692 | 0.8249 |
| 0.0129 | 18.0 | 5184 | 1.6597 | 0.8167 |
| 0.0129 | 19.0 | 5472 | 1.6009 | 0.8218 |
| 0.0095 | 20.0 | 5760 | 1.6962 | 0.8225 |
| 0.0062 | 21.0 | 6048 | 1.7075 | 0.8182 |
| 0.0062 | 22.0 | 6336 | 1.7335 | 0.8181 |
| 0.0077 | 23.0 | 6624 | 1.7175 | 0.8204 |
| 0.0077 | 24.0 | 6912 | 1.7680 | 0.8187 |
| 0.0024 | 25.0 | 7200 | 1.7395 | 0.8206 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ClaudeCOULOMBE/RickBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Tn_update
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Tn_update
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-tn](https://huggingface.co/Helsinki-NLP/opus-mt-en-tn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.13002
- Bleu: 39.1470
## Model description
More information needed
## Intended uses & limitations
More information needed
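As a rough sketch (not from the original card), the fine-tuned MarianMT checkpoint could be used through the translation pipeline; the repository id is a placeholder, and `tn` is assumed to be Setswana, as in the base `Helsinki-NLP/opus-mt-en-tn` model.
```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual location of this checkpoint.
translator = pipeline("translation", model="<namespace>/En-Tn_update")

print(translator("Good morning, how are you?", max_length=64))
```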
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Epoch | Training Loss | Validation Loss | Bleu |
|:---:|:---------------:|:----------------:|:-------:|
| 1 | 1.929300 | 1.884056 | 29.762382|
| 2 | 1.637300 | 1.605588 | 32.846868|
| 3 | 1.500000 | 1.457442 | 34.307484|
| 4 | 1.402400 | 1.356578 | 35.423774|
| 5 | 1.324000 | 1.276492 | 36.553368|
| 6 | 1.251300 | 1.221768 | 37.464270|
| 7 | 1.224700 | 1.181320 | 38.157490|
| 8 | 1.193200 | 1.152997 | 38.800566|
| 9 | 1.166700 | 1.136147 | 38.985707|
| 10 | 1.142500 | 1.130020 | 39.209327|
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CleveGreen/FieldClassifier_v2 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 46 | 2022-08-10T12:23:04Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: mojtaba767/bert-base-parsbert-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mojtaba767/bert-base-parsbert-uncased-finetuned-imdb
This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.6698
- Validation Loss: 4.3501
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
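If this checkpoint is a masked-language-model fine-tune of ParsBERT (the Keras training setup and the loss values suggest so, but the card does not say), it could be queried roughly as follows; the Persian prompt is only an illustration.
```python
from transformers import pipeline

# Assumes a masked-LM head saved in TensorFlow format; this is not confirmed by the card.
fill_mask = pipeline(
    "fill-mask",
    model="mojtaba767/bert-base-parsbert-uncased-finetuned-imdb",
    framework="tf",
)

# "This movie was very [MASK]."
print(fill_mask("این فیلم بسیار [MASK] بود."))
```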
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.6698 | 4.3501 | 0 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Tokenizers 0.12.1
|
CleveGreen/FieldClassifier_v2_gpt | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | 2022-08-10T12:35:02Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: categorization-finetuned-20220721-164940-distilled-20220810-123313
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# categorization-finetuned-20220721-164940-distilled-20220810-123313
This model is a fine-tuned version of [carted-nlp/categorization-finetuned-20220721-164940](https://huggingface.co/carted-nlp/categorization-finetuned-20220721-164940) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0787
- Accuracy: 0.8416
- F1: 0.8396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 314
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.2976 | 0.56 | 2500 | 0.1441 | 0.7219 | 0.7071 |
| 0.1417 | 1.12 | 5000 | 0.1180 | 0.7719 | 0.7653 |
| 0.1236 | 1.69 | 7500 | 0.1076 | 0.7901 | 0.7854 |
| 0.1148 | 2.25 | 10000 | 0.1014 | 0.8015 | 0.7977 |
| 0.1092 | 2.81 | 12500 | 0.0972 | 0.8089 | 0.8052 |
| 0.1043 | 3.37 | 15000 | 0.0942 | 0.8135 | 0.8102 |
| 0.1013 | 3.94 | 17500 | 0.0916 | 0.8181 | 0.8147 |
| 0.0985 | 4.5 | 20000 | 0.0897 | 0.8219 | 0.8190 |
| 0.0962 | 5.06 | 22500 | 0.0881 | 0.8241 | 0.8215 |
| 0.0945 | 5.62 | 25000 | 0.0866 | 0.8270 | 0.8246 |
| 0.0928 | 6.19 | 27500 | 0.0857 | 0.8286 | 0.8262 |
| 0.0912 | 6.75 | 30000 | 0.0843 | 0.8310 | 0.8286 |
| 0.0901 | 7.31 | 32500 | 0.0836 | 0.8321 | 0.8299 |
| 0.0887 | 7.87 | 35000 | 0.0827 | 0.8339 | 0.8315 |
| 0.0879 | 8.43 | 37500 | 0.0821 | 0.8350 | 0.8329 |
| 0.0875 | 9.0 | 40000 | 0.0814 | 0.8362 | 0.8342 |
| 0.0865 | 9.56 | 42500 | 0.0811 | 0.8370 | 0.8348 |
| 0.0855 | 10.12 | 45000 | 0.0806 | 0.8375 | 0.8355 |
| 0.0853 | 10.68 | 47500 | 0.0798 | 0.8386 | 0.8367 |
| 0.0845 | 11.25 | 50000 | 0.0799 | 0.8392 | 0.8372 |
| 0.0844 | 11.81 | 52500 | 0.0793 | 0.8401 | 0.8383 |
| 0.0838 | 12.37 | 55000 | 0.0793 | 0.8402 | 0.8381 |
| 0.0834 | 12.93 | 57500 | 0.0790 | 0.8410 | 0.8390 |
| 0.0832 | 13.5 | 60000 | 0.0788 | 0.8414 | 0.8394 |
| 0.083 | 14.06 | 62500 | 0.0787 | 0.8415 | 0.8395 |
| 0.0828 | 14.62 | 65000 | 0.0787 | 0.8416 | 0.8396 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
CleveGreen/JobClassifier | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2022-08-10T12:35:45Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -152.01 +/- 37.87
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
To learn to code your own PPO agent and train it, check Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'workRL/testppo'
'batch_size': 512
'minibatch_size': 128}
```
|
CleveGreen/JobClassifier_v2 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_6_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_6_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6214
- F1: 0.8352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.4174 | 0.7980 |
| 0.4661 | 2.0 | 580 | 0.4118 | 0.8142 |
| 0.4661 | 3.0 | 870 | 0.5152 | 0.8331 |
| 0.2714 | 4.0 | 1160 | 0.6901 | 0.8242 |
| 0.2714 | 5.0 | 1450 | 0.6853 | 0.8451 |
| 0.1542 | 6.0 | 1740 | 0.8570 | 0.8399 |
| 0.0935 | 7.0 | 2030 | 1.1342 | 0.8401 |
| 0.0935 | 8.0 | 2320 | 1.1763 | 0.8397 |
| 0.037 | 9.0 | 2610 | 1.3530 | 0.8215 |
| 0.037 | 10.0 | 2900 | 1.3826 | 0.8402 |
| 0.0351 | 11.0 | 3190 | 1.4057 | 0.8374 |
| 0.0351 | 12.0 | 3480 | 1.4259 | 0.8455 |
| 0.0159 | 13.0 | 3770 | 1.4270 | 0.8431 |
| 0.0249 | 14.0 | 4060 | 1.4215 | 0.8442 |
| 0.0249 | 15.0 | 4350 | 1.4245 | 0.8408 |
| 0.0197 | 16.0 | 4640 | 1.4171 | 0.8353 |
| 0.0197 | 17.0 | 4930 | 1.4537 | 0.8383 |
| 0.0137 | 18.0 | 5220 | 1.4786 | 0.8430 |
| 0.0068 | 19.0 | 5510 | 1.5635 | 0.8443 |
| 0.0068 | 20.0 | 5800 | 1.5527 | 0.8378 |
| 0.0062 | 21.0 | 6090 | 1.5917 | 0.8460 |
| 0.0062 | 22.0 | 6380 | 1.6317 | 0.8318 |
| 0.005 | 23.0 | 6670 | 1.6226 | 0.8340 |
| 0.005 | 24.0 | 6960 | 1.6378 | 0.8310 |
| 0.007 | 25.0 | 7250 | 1.6214 | 0.8352 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CleveGreen/JobClassifier_v2_gpt | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | 2022-08-10T12:40:54Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
The **Stable-Diffusion-v-1-2** checkpoint was initialized with the weights of the [Stable-Diffusion-v-1-1](https://huggingface.co/CompVis/stable-diffusion-v-1-1-original)
checkpoint and subsequently fine-tuned for 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, an estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`).
For more information, please refer to [Training](#training).
#### Download the weights
- [sd-v1-2.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original/resolve/main/sd-v1-2.ckpt)
- [sd-v1-2-full-ema.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original/resolve/main/sd-v1-2-full-ema.ckpt)
These weights are intended to be used with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion). If you are looking for the model to use with the D🧨iffusers library, [come here](https://huggingface.co/CompVis/stable-diffusion-v1-2).
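For reference, a minimal sketch of the diffusers route mentioned above (this assumes the `diffusers` library and access to the `CompVis/stable-diffusion-v1-2` repository; it is not part of the original CompVis instructions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Loads the diffusers-format weights linked above, not the .ckpt files from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-2", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```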
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
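Written out, this objective is (roughly) the simplified denoising loss from the latent diffusion paper, with the text conditioning entering the UNet through cross-attention; notation follows the paper and constant weighting terms are omitted:
```latex
% Simplified conditional denoising objective (LDM-paper notation); constant weighting omitted.
L_{LDM} := \mathbb{E}_{\mathcal{E}(x),\, y,\, \epsilon \sim \mathcal{N}(0,1),\, t}
  \left[ \lVert \epsilon - \epsilon_\theta\!\left(z_t,\, t,\, \tau_\theta(y)\right) \rVert_2^2 \right]
```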
We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`,
which were trained as follows,
- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10,000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
Clint/clinton | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
library_name: "stable-diffusion"
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# Stable Diffusion v1 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
The **Stable-Diffusion-v-1-3** checkpoint was initialized with the weights of the [Stable-Diffusion-v-1-2](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original)
checkpoint and subsequently fine-tuned for 195,000 steps at resolution `512x512` on "laion-improved-aesthetics", with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
For more information, please refer to [Training](#training).
#### Download the weights
- [sd-v1-3.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original/resolve/main/sd-v1-3.ckpt)
- [sd-v1-3-full-ema.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original/resolve/main/sd-v1-3-full-ema.ckpt)
These weights are intended to be used with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion). If you are looking for the model to use with the D🧨iffusers library, [come here](https://huggingface.co/CompVis/stable-diffusion-v1-3).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
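To make the objective above concrete, the following is a minimal sketch of a single training step, assuming Diffusers-style `vae`, `text_encoder`, `unet` and noise `scheduler` modules; the names, the latent scaling factor and the exact calls are illustrative assumptions, not the actual training code.
```python
import torch
import torch.nn.functional as F

def training_step(images, input_ids, vae, text_encoder, unet, scheduler):
    # Encode images (B, 3, H, W) into latents (B, 4, H/8, W/8), f = 8 as described above
    latents = vae.encode(images).latent_dist.sample() * 0.18215  # assumed scaling factor

    # Sample Gaussian noise and a random timestep, then noise the latents
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=latents.device)
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)

    # Non-pooled ViT-L/14 text embeddings condition the UNet via cross-attention
    text_embeddings = text_encoder(input_ids)[0]

    # The UNet predicts the added noise; the loss is the reconstruction objective
    noise_pred = unet(noisy_latents, timesteps,
                      encoder_hidden_states=text_embeddings).sample
    return F.mse_loss(noise_pred, noise)
```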
We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`,
which were trained as follows,
- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
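As a rough back-of-the-envelope reconstruction of the figure above (a sketch only: the ~250 W average draw and ~0.3 kg CO2eq/kWh grid intensity are assumptions back-solved for illustration, not values stated in this card):
```python
hours = 150_000          # reported GPU-hours
avg_power_kw = 0.25      # assumed ~250 W average draw per A100 PCIe 40GB
grid_intensity = 0.3     # assumed kg CO2eq per kWh for the compute region

energy_kwh = hours * avg_power_kw            # 37,500 kWh
emissions_kg = energy_kwh * grid_intensity   # 11,250 kg CO2eq, matching the reported figure
print(emissions_kg)
```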
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
Cloudy/DialoGPT-CJ-large | [
"pytorch",
"conversational"
] | conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-08-10T12:42:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-multilingual-cased-misogyny-sexism-decay0.05-fr-outofdomain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-misogyny-sexism-decay0.05-fr-outofdomain
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9920
- Accuracy: 0.2851
- F1: 0.1967
- Precision: 0.1124
- Recall: 0.7870
- Mae: 0.7149
- Tn: 1727
- Fp: 6043
- Fn: 207
- Tp: 765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae | Tn | Fp | Fn | Tp |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|:----:|:----:|:---:|:---:|
| 0.3603 | 1.0 | 2233 | 0.8218 | 0.3251 | 0.2021 | 0.1163 | 0.7685 | 0.6749 | 2095 | 5675 | 225 | 747 |
| 0.298 | 2.0 | 4466 | 0.9031 | 0.3164 | 0.2047 | 0.1175 | 0.7912 | 0.6836 | 1997 | 5773 | 203 | 769 |
| 0.2438 | 3.0 | 6699 | 0.9920 | 0.2851 | 0.1967 | 0.1124 | 0.7870 | 0.7149 | 1727 | 6043 | 207 | 765 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ClydeWasTaken/DialoGPT-small-joshua | [
"conversational"
] | conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-010099-0.2
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.5783
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099-0.2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8513
- Bleu: 7.5783
- Gen Len: 45.037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CoShin/XLM-roberta-large_ko_en_nil_sts | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T13:02:46Z | ---
datasets:
- relbert/conceptnet_high_confidence
model-index:
- name: relbert/roberta-large-conceptnet-average-no-mask-prompt-d-nce
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8548412698412698
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5962566844919787
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5875370919881305
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7937743190661478
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6447368421052632
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6805555555555556
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9159258701220431
- name: F1 (macro)
type: f1_macro
value: 0.9120976005401666
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8685446009389672
- name: F1 (macro)
type: f1_macro
value: 0.7131242903396904
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6722643553629469
- name: F1 (macro)
type: f1_macro
value: 0.6696626067611262
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9610488975446895
- name: F1 (macro)
type: f1_macro
value: 0.8687323343385976
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.898464431212786
- name: F1 (macro)
type: f1_macro
value: 0.8946031569394925
---
# relbert/roberta-large-conceptnet-average-no-mask-prompt-d-nce
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/conceptnet_high_confidence](https://huggingface.co/datasets/relbert/conceptnet_high_confidence).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-no-mask-prompt-d-nce/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.5962566844919787
- Accuracy on SAT: 0.5875370919881305
- Accuracy on BATS: 0.7937743190661478
- Accuracy on U2: 0.6447368421052632
- Accuracy on U4: 0.6805555555555556
- Accuracy on Google: 0.926
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-no-mask-prompt-d-nce/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9159258701220431
- Micro F1 score on CogALexV: 0.8685446009389672
- Micro F1 score on EVALution: 0.6722643553629469
- Micro F1 score on K&H+N: 0.9610488975446895
- Micro F1 score on ROOT09: 0.898464431212786
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-no-mask-prompt-d-nce/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8548412698412698
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-conceptnet-average-no-mask-prompt-d-nce")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
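As a small follow-up sketch (an assumed typical use, not taken from this card), two relation embeddings can be compared with cosine similarity to check whether two word pairs stand in an analogous relation:
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-conceptnet-average-no-mask-prompt-d-nce")

# Embed two word pairs with the documented single-pair call
v1 = np.array(model.get_embedding(['Tokyo', 'Japan']))
v2 = np.array(model.get_embedding(['Paris', 'France']))

# Analogous pairs should score a high cosine similarity
cosine = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(cosine)
```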
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/conceptnet_high_confidence
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: nce_logout
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 145
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-conceptnet-average-no-mask-prompt-d-nce/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
CoachCarter/distilbert-base-uncased-finetuned-squad | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-010099-0.25
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 7.611
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099-0.25
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8387
- Bleu: 7.611
- Gen Len: 44.8304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CodeDanCode/CartmenBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
inference: false
---
# Stable Diffusion
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
This model card gives an overview of all available model checkpoints. For more in-detail model cards, please have a look at the model repositories listed under [Model Access](#model-access).
## Stable Diffusion Version 1
For the first version, 4 model checkpoints are released.
*Higher* versions have been trained for longer and are thus usually better in terms of image generation quality than *lower* versions. More specifically:
- **stable-diffusion-v1-1**: The checkpoint is randomly initialized and has been trained on 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- **stable-diffusion-v1-2**: The checkpoint resumed training from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- **stable-diffusion-v1-3**: The checkpoint resumed training from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598)
- **stable-diffusion-v1-4**: The checkpoint resumed training from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
### Model Access
Each checkpoint can be used both with Hugging Face's [ 🧨 Diffusers library](https://github.com/huggingface/diffusers) or the original [Stable Diffusion GitHub repository](https://github.com/CompVis/stable-diffusion). Note that you have to *"click-request"* them on each respective model repository.
| **[🤗's 🧨 Diffusers library](https://github.com/huggingface/diffusers)** | **[Stable Diffusion GitHub repository](https://github.com/CompVis/stable-diffusion)** |
| ----------- | ----------- |
| [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1) | [`stable-diffusion-v-1-1-original`](https://huggingface.co/CompVis/stable-diffusion-v-1-1-original) |
| [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2) | [`stable-diffusion-v-1-2-original`](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original) |
| [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3) | [`stable-diffusion-v-1-3-original`](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original) |
| [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) | [`stable-diffusion-v-1-4-original`](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) |
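A minimal sketch of the 🧨 Diffusers route, assuming a recent `diffusers` version with `StableDiffusionPipeline` and that access to the checkpoint has already been requested with a Hugging Face token configured (older versions expose the output slightly differently):
```python
from diffusers import StableDiffusionPipeline

# Load one of the checkpoints listed above (access must be requested on the model page first)
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")

# Generate an image from a text prompt
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```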
### Demo
To quickly try out the model, have a look at the [Stable Diffusion Space](https://huggingface.co/spaces/stabilityai/stable-diffusion).
### License
[The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
CodeDanCode/SP-KyleBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-08-10T13:10:01Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: mojtaba767/bert-base-parsbert-uncased-finetuned-imdb-m
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mojtaba767/bert-base-parsbert-uncased-finetuned-imdb-m
This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8764
- Validation Loss: 2.7682
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -968, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8764 | 2.7682 | 0 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CodeMonkey98/distilroberta-base-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_7_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_7_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7774
- F1: 0.8111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.4189 | 0.7903 |
| 0.432 | 2.0 | 576 | 0.3927 | 0.8045 |
| 0.432 | 3.0 | 864 | 0.4868 | 0.8108 |
| 0.2573 | 4.0 | 1152 | 0.6763 | 0.8019 |
| 0.2573 | 5.0 | 1440 | 0.8132 | 0.8105 |
| 0.1612 | 6.0 | 1728 | 0.8544 | 0.8086 |
| 0.0972 | 7.0 | 2016 | 1.1274 | 0.8109 |
| 0.0972 | 8.0 | 2304 | 1.2622 | 0.8056 |
| 0.0515 | 9.0 | 2592 | 1.3398 | 0.8013 |
| 0.0515 | 10.0 | 2880 | 1.5421 | 0.8082 |
| 0.0244 | 11.0 | 3168 | 1.4931 | 0.8042 |
| 0.0244 | 12.0 | 3456 | 1.5744 | 0.8045 |
| 0.0287 | 13.0 | 3744 | 1.4169 | 0.8091 |
| 0.0255 | 14.0 | 4032 | 1.5790 | 0.7999 |
| 0.0255 | 15.0 | 4320 | 1.6094 | 0.7994 |
| 0.0098 | 16.0 | 4608 | 1.5758 | 0.8006 |
| 0.0098 | 17.0 | 4896 | 1.5326 | 0.8140 |
| 0.0203 | 18.0 | 5184 | 1.6431 | 0.8114 |
| 0.0203 | 19.0 | 5472 | 1.7105 | 0.8072 |
| 0.0104 | 20.0 | 5760 | 1.6353 | 0.8139 |
| 0.0062 | 21.0 | 6048 | 1.6762 | 0.8108 |
| 0.0062 | 22.0 | 6336 | 1.7076 | 0.8106 |
| 0.0088 | 23.0 | 6624 | 1.7887 | 0.8035 |
| 0.0088 | 24.0 | 6912 | 1.7731 | 0.8099 |
| 0.0026 | 25.0 | 7200 | 1.7774 | 0.8111 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CodeNinja1126/test-model | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_8_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_8_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5333
- F1: 0.8407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 290 | 0.3866 | 0.8172 |
| 0.4299 | 2.0 | 580 | 0.4215 | 0.8246 |
| 0.4299 | 3.0 | 870 | 0.4765 | 0.8238 |
| 0.2564 | 4.0 | 1160 | 0.7283 | 0.8350 |
| 0.2564 | 5.0 | 1450 | 0.6825 | 0.8363 |
| 0.1553 | 6.0 | 1740 | 0.9637 | 0.8339 |
| 0.0893 | 7.0 | 2030 | 1.1392 | 0.8239 |
| 0.0893 | 8.0 | 2320 | 1.1868 | 0.8231 |
| 0.0538 | 9.0 | 2610 | 1.2180 | 0.8346 |
| 0.0538 | 10.0 | 2900 | 1.2353 | 0.8253 |
| 0.0386 | 11.0 | 3190 | 1.1883 | 0.8317 |
| 0.0386 | 12.0 | 3480 | 1.2786 | 0.8375 |
| 0.0289 | 13.0 | 3770 | 1.3725 | 0.8375 |
| 0.0146 | 14.0 | 4060 | 1.3171 | 0.8463 |
| 0.0146 | 15.0 | 4350 | 1.2323 | 0.8425 |
| 0.0182 | 16.0 | 4640 | 1.3169 | 0.8485 |
| 0.0182 | 17.0 | 4930 | 1.4424 | 0.8336 |
| 0.0125 | 18.0 | 5220 | 1.4336 | 0.8385 |
| 0.0102 | 19.0 | 5510 | 1.4888 | 0.8405 |
| 0.0102 | 20.0 | 5800 | 1.5227 | 0.8419 |
| 0.0035 | 21.0 | 6090 | 1.4994 | 0.8421 |
| 0.0035 | 22.0 | 6380 | 1.4845 | 0.8424 |
| 0.0047 | 23.0 | 6670 | 1.5006 | 0.8422 |
| 0.0047 | 24.0 | 6960 | 1.5468 | 0.8422 |
| 0.0042 | 25.0 | 7250 | 1.5333 | 0.8407 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CodeNinja1126/xlm-roberta-large-kor-mrc | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-08-10T13:46:48Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/cats
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-flowers-128-2
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/cats` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
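A minimal usage sketch, assuming the standard `DDPMPipeline` API from 🧨 Diffusers and the repository id shown in the TensorBoard link below; this is an illustrative assumption, not code provided with the model.
```python
from diffusers import DDPMPipeline

# Load the unconditional pipeline (repository id assumed from the training logs below)
pipeline = DDPMPipeline.from_pretrained("rdruce/ddpm-flowers-128-2")

# Sample a single image and save it
image = pipeline().images[0]
image.save("sample.png")
```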
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/rdruce/ddpm-flowers-128-2/tensorboard?#scalars)
|
CoderBoy432/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2022-08-10T13:55:41Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# COS_TAPT_n_RoBERTa_STS
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Kyleiwaniec/COS_TAPT_n_RoBERTa_STS')
embeddings = model.encode(sentences)
print(embeddings)
```
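As a small follow-up (an assumed typical use, not part of the original card), the resulting embeddings can be compared with cosine similarity, e.g. for clustering or semantic search:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Kyleiwaniec/COS_TAPT_n_RoBERTa_STS')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```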
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Kyleiwaniec/COS_TAPT_n_RoBERTa_STS')
model = AutoModel.from_pretrained('Kyleiwaniec/COS_TAPT_n_RoBERTa_STS')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 792 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 317,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
CoderEFE/DialoGPT-marxbot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"has_space"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2022-08-10T14:11:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4313
- Wer: 0.3336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0055 | 3.67 | 400 | 0.7015 | 0.6789 |
| 0.4384 | 7.34 | 800 | 0.4827 | 0.4875 |
| 0.2143 | 11.01 | 1200 | 0.4672 | 0.4554 |
| 0.1431 | 14.68 | 1600 | 0.4331 | 0.4014 |
| 0.1053 | 18.35 | 2000 | 0.4471 | 0.3822 |
| 0.0857 | 22.02 | 2400 | 0.4324 | 0.3637 |
| 0.0683 | 25.69 | 2800 | 0.4305 | 0.3423 |
| 0.0526 | 29.36 | 3200 | 0.4313 | 0.3336 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Venkatakrishnan-Ramesh/Text_gen | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T14:12:23Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_9_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_9_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7204
- F1: 0.8203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 291 | 0.4045 | 0.8001 |
| 0.4262 | 2.0 | 582 | 0.3914 | 0.8297 |
| 0.4262 | 3.0 | 873 | 0.5050 | 0.8029 |
| 0.2488 | 4.0 | 1164 | 0.7681 | 0.8007 |
| 0.2488 | 5.0 | 1455 | 0.8349 | 0.8262 |
| 0.1483 | 6.0 | 1746 | 0.9045 | 0.8220 |
| 0.0894 | 7.0 | 2037 | 1.1584 | 0.8165 |
| 0.0894 | 8.0 | 2328 | 1.1818 | 0.8300 |
| 0.0389 | 9.0 | 2619 | 1.3332 | 0.8147 |
| 0.0389 | 10.0 | 2910 | 1.2373 | 0.8285 |
| 0.038 | 11.0 | 3201 | 1.3156 | 0.8234 |
| 0.038 | 12.0 | 3492 | 1.3251 | 0.8341 |
| 0.0211 | 13.0 | 3783 | 1.3144 | 0.8255 |
| 0.0158 | 14.0 | 4074 | 1.5686 | 0.8168 |
| 0.0158 | 15.0 | 4365 | 1.5382 | 0.8185 |
| 0.0165 | 16.0 | 4656 | 1.5203 | 0.8282 |
| 0.0165 | 17.0 | 4947 | 1.5352 | 0.8136 |
| 0.0142 | 18.0 | 5238 | 1.4799 | 0.8243 |
| 0.0062 | 19.0 | 5529 | 1.5030 | 0.8294 |
| 0.0062 | 20.0 | 5820 | 1.6264 | 0.8094 |
| 0.0078 | 21.0 | 6111 | 1.6949 | 0.8122 |
| 0.0078 | 22.0 | 6402 | 1.7106 | 0.8139 |
| 0.0043 | 23.0 | 6693 | 1.7234 | 0.8218 |
| 0.0043 | 24.0 | 6984 | 1.7344 | 0.8208 |
| 0.0028 | 25.0 | 7275 | 1.7204 | 0.8203 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CoffeeAddict93/gpt1-call-of-the-wild | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 3
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CoffeeAddict93/gpt1-modest-proposal | [
"pytorch",
"openai-gpt",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"OpenAIGPTLMHeadModel"
],
"model_type": "openai-gpt",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2022-08-10T14:24:18Z | ---
license: mit
language: en
---
# T5(v1.1)-SLED (SLiding-Encoder and Decoder, large-sized model)
SLED models use pretrained, short-range encoder-decoder models and apply them over
long-text inputs by splitting the input into multiple overlapping chunks, encoding each chunk independently, and performing fusion-in-decoder.
## Model description
This SLED model is based on the T5 (v1.1) model, which is described in its [model card](https://huggingface.co/google/t5-v1_1-large).
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the T5 model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
T5 v1.1 includes several improvements on top of the original checkpoint. See its card for details.
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset.
### How to use
To use the model, you first need to install `py-sled` in your environment (or clone the code from the [official repository](https://github.com/Mivg/SLED/blob/main/README.md))
```
pip install py-sled
```
For more installation instructions, see [here](https://github.com/Mivg/SLED#Installation).
Once installed, SLED is fully compatible with HuggingFace's AutoClasses (AutoTokenizer, AutoConfig, AutoModel
and AutoModelForCausalLM) and can be loaded using the from_pretrained methods
```python
from transformers import AutoModel
import sled  # *** required so that SledModels will be registered for the AutoClasses ***

model = AutoModel.from_pretrained('tau/t5-v1_1-large-sled')
```
Here is how to use this model in PyTorch:
```python
from sled import SledTokenizer, SledModel
tokenizer = SledTokenizer.from_pretrained('tau/t5-v1_1-large-sled')
model = SledModel.from_pretrained('tau/t5-v1_1-large-sled')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
You can also replace SledModel by SledModelForConditionalGeneration for Seq2Seq generation
```python
model = SledModelForConditionalGeneration.from_pretrained('tau/t5-v1_1-large-sled')
```
In case you wish to apply SLED on a task containing a prefix (e.g. question) which should be given as a context to
every chunk, you can pass the `prefix_length` tensor input as well (A LongTensor in the length of the batch size).
```python
import torch
from transformers import AutoTokenizer, AutoModel
import sled  # *** required so that SledModels will be registered for the AutoClasses ***

tokenizer = AutoTokenizer.from_pretrained('tau/t5-v1_1-large-sled')
model = AutoModel.from_pretrained('tau/t5-v1_1-large-sled')
document_input_ids = tokenizer("Dogs are great for you.", return_tensors="pt").input_ids
prefix_input_ids = tokenizer("Are dogs good for you?", return_tensors="pt").input_ids
input_ids = torch.cat((prefix_input_ids, document_input_ids), dim=-1)
attention_mask = torch.ones_like(input_ids)
prefix_length = torch.LongTensor([[prefix_input_ids.size(1)]])
outputs = model(input_ids=input_ids, attention_mask=attention_mask, prefix_length=prefix_length)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
Please cite both the SLED [paper](https://arxiv.org/abs/2208.00748.pdf) and the T5 [paper](https://arxiv.org/pdf/1910.10683.pdf) by Raffel et al
```bibtex
@inproceedings{Ivgi2022EfficientLU,
title={Efficient Long-Text Understanding with Short-Text Models},
author={Maor Ivgi and Uri Shaham and Jonathan Berant},
year={2022}
}
```
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
``` |
CoffeeAddict93/gpt2-medium-modest-proposal | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: ja
datasets:
- csj
tags:
- audio
- automatic-speech-recognition
license: cc-by-nc-4.0
---
### Usage
```python
#!pip install transformers==4.17.0
#!pip install https://github.com/kpu/kenlm/archive/master.zip
#!pip install pyctcdecode==0.4.0
from transformers.file_utils import cached_path, hf_bucket_url
from importlib.machinery import SourceFileLoader
from transformers import Wav2Vec2ProcessorWithLM
from IPython.lib.display import Audio
import torchaudio
import torch
# Load model & processor
model_name = "nguyenvulebinh/wav2vec2-base-ja"
model = SourceFileLoader("model", cached_path(hf_bucket_url(model_name,filename="model_handling.py"))).load_module().Wav2Vec2ForCTC.from_pretrained(model_name)
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name)
# Load an example audio (16k)
audio, sample_rate = torchaudio.load(cached_path(hf_bucket_url(model_name, filename="sample.wav")))
input_data = processor.feature_extractor(audio[0], sampling_rate=16000, return_tensors='pt')
# Infer
output = model(**input_data)
# Output transcript without LM
print(processor.tokenizer.decode(output.logits.argmax(dim=-1)[0].detach().cpu().numpy()))
# Output transcript with LM
print(processor.decode(output.logits.cpu().detach().numpy()[0], beam_width=100).text)
```
### Model Parameters License
The ASR model parameters are made available for non-commercial use only, under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find details at: https://creativecommons.org/licenses/by-nc/4.0/legalcode
### Contact
[email protected]
[](https://twitter.com/intent/follow?screen_name=nguyenvulebinh) |
CoffeeAddict93/gpt2-modest-proposal | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-08-10T14:27:25Z | ---
license: mit
language: en
---
# BART-SLED (SLiding-Encoder and Decoder, large-sized model)
SLED models use pretrained, short-range encoder-decoder models and apply them over
long-text inputs by splitting the input into multiple overlapping chunks, encoding each chunk independently, and performing fusion-in-decoder.
## Model description
This SLED model is based on the BART model, which is described in its [model card](https://huggingface.co/facebook/bart-large).
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works
well for comprehension tasks (e.g. text classification, question answering). When used as a BART-SLED model, it can be applied to long-text tasks.
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset.
### How to use
To use the model, you first need to install `py-sled` in your environment (or clone the code from the [official repository](https://github.com/Mivg/SLED/blob/main/README.md))
```
pip install py-sled
```
For more installation instructions, see [here](https://github.com/Mivg/SLED#Installation).
Once installed, SLED is fully compatible with HuggingFace's AutoClasses (AutoTokenizer, AutoConfig, AutoModel
and AutoModelForCausalLM) and can be loaded using the `from_pretrained` methods:
```python
import sled # *** required so that SledModels will be registered for the AutoClasses ***
from transformers import AutoModel

model = AutoModel.from_pretrained('tau/bart-large-sled')
```
Here is how to use this model in PyTorch:
```python
from sled import SledTokenizer, SledModel
tokenizer = SledTokenizer.from_pretrained('tau/bart-large-sled')
model = SledModel.from_pretrained('tau/bart-large-sled')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
You can also replace SledModel with SledModelForConditionalGeneration for seq2seq generation:
```python
from sled import SledModelForConditionalGeneration
model = SledModelForConditionalGeneration.from_pretrained('tau/bart-large-sled')
```
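As a rough illustration (the generation arguments below are arbitrary choices, not values recommended by this card), a full generation round trip looks like this:
```python
from sled import SledTokenizer, SledModelForConditionalGeneration

tokenizer = SledTokenizer.from_pretrained('tau/bart-large-sled')
model = SledModelForConditionalGeneration.from_pretrained('tau/bart-large-sled')

# Encode, generate and decode; beam size and length limit are illustrative only
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
generated_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```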
In case you wish to apply SLED on a task containing a prefix (e.g. question) which should be given as a context to
every chunk, you can pass the `prefix_length` tensor input as well (a LongTensor whose length equals the batch size).
```python
import torch
import sled # *** required so that SledModels will be registered for the AutoClasses ***
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('tau/bart-large-sled')
model = AutoModel.from_pretrained('tau/bart-large-sled')
document_input_ids = tokenizer("Dogs are great for you.", return_tensors="pt").input_ids
prefix_input_ids = tokenizer("Are dogs good for you?", return_tensors="pt").input_ids
input_ids = torch.cat((prefix_input_ids, document_input_ids), dim=-1)
attention_mask = torch.ones_like(input_ids)
prefix_length = torch.LongTensor([[prefix_input_ids.size(1)]])
outputs = model(input_ids=input_ids, attention_mask=attention_mask, prefix_length=prefix_length)
last_hidden_states = outputs.last_hidden_state
```
### BibTeX entry and citation info
Please cite both the SLED [paper](https://arxiv.org/abs/2208.00748.pdf) and the BART [paper](https://arxiv.org/abs/1910.13461) by Lewis et al.
```bibtex
@inproceedings{Ivgi2022EfficientLU,
title={Efficient Long-Text Understanding with Short-Text Models},
author={Maor Ivgi and Uri Shaham and Jonathan Berant},
year={2022}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
CohleM/bert-nepali-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8346456692913387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- F1: 0.8346
## Model description
More information needed
## Intended uses & limitations
More information needed
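Pending a fuller description, a minimal NER inference sketch is shown below; the repo id is a placeholder, so substitute the actual hub id of this checkpoint.
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual hub id of this checkpoint
ner = pipeline("token-classification",
               model="<namespace>/xlm-roberta-base-finetuned-panx-fr",
               aggregation_strategy="simple")
print(ner("Emmanuel Macron est né à Amiens."))
```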
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 |
| 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 |
| 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CohleM/mbert-nepali-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T14:40:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-amazon-shoe-reviews_ubuntu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-amazon-shoe-reviews_ubuntu
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9573
- Accuracy: 0.5726
- F1: [0.62998761 0.45096564 0.49037037 0.55640244 0.73547094]
- Precision: [0.62334478 0.45704118 0.47534706 0.5858748 0.72102161]
- Recall: [0.63677355 0.4450495 0.5063743 0.52975327 0.75051125]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------------------------------------:|:--------------------------------------------------------:|
| 0.9617 | 1.0 | 2813 | 0.9573 | 0.5726 | [0.62998761 0.45096564 0.49037037 0.55640244 0.73547094] | [0.62334478 0.45704118 0.47534706 0.5858748 0.72102161] | [0.63677355 0.4450495 0.5063743 0.52975327 0.75051125] |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Coldestadam/Breakout_Mentors_SpongeBob_Model | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlnet-base-cased_fold_10_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased_fold_10_binary_v1
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7782
- F1: 0.8137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.3796 | 0.8145 |
| 0.4196 | 2.0 | 576 | 0.4319 | 0.7810 |
| 0.4196 | 3.0 | 864 | 0.6227 | 0.8002 |
| 0.231 | 4.0 | 1152 | 0.6258 | 0.7941 |
| 0.231 | 5.0 | 1440 | 1.0692 | 0.7866 |
| 0.1307 | 6.0 | 1728 | 1.1257 | 0.8005 |
| 0.0756 | 7.0 | 2016 | 1.2283 | 0.8072 |
| 0.0756 | 8.0 | 2304 | 1.3407 | 0.8061 |
| 0.0486 | 9.0 | 2592 | 1.5232 | 0.8059 |
| 0.0486 | 10.0 | 2880 | 1.6731 | 0.8053 |
| 0.0339 | 11.0 | 3168 | 1.6536 | 0.8087 |
| 0.0339 | 12.0 | 3456 | 1.7526 | 0.7996 |
| 0.019 | 13.0 | 3744 | 1.6662 | 0.7909 |
| 0.0237 | 14.0 | 4032 | 1.6028 | 0.8071 |
| 0.0237 | 15.0 | 4320 | 1.7627 | 0.7964 |
| 0.0078 | 16.0 | 4608 | 1.6513 | 0.8169 |
| 0.0078 | 17.0 | 4896 | 1.7795 | 0.8039 |
| 0.015 | 18.0 | 5184 | 1.8669 | 0.7935 |
| 0.015 | 19.0 | 5472 | 1.6288 | 0.8118 |
| 0.0124 | 20.0 | 5760 | 1.6630 | 0.8104 |
| 0.004 | 21.0 | 6048 | 1.7418 | 0.8167 |
| 0.004 | 22.0 | 6336 | 1.7651 | 0.8128 |
| 0.0043 | 23.0 | 6624 | 1.7279 | 0.8163 |
| 0.0043 | 24.0 | 6912 | 1.8177 | 0.8093 |
| 0.004 | 25.0 | 7200 | 1.7782 | 0.8137 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ComCom/gpt2 | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-08-10T14:58:00Z | ---
license: apache-2.0
---
# OFA-Base-Caption
This is the official checkpoint (adapted to the official code rather than Hugging Face Transformers) of OFA-Base finetuned on the MSCOCO Caption dataset for image captioning. Specifically, the model was first trained with cross-entropy loss and then with CIDEr optimization.
For more information, please refer to the official github ([https://github.com/OFA-Sys/OFA](https://github.com/OFA-Sys/OFA))
For now, we only provide finetuned checkpoints compatible with the official code. |
ComCom-Dev/gpt2-bible-test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8124233755619126
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- F1: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 |
| 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 |
| 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Cometasonmi451/Mine | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T15:02:17Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-wikitext2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6689 | 1.0 | 300 | 1.5518 |
| 1.7525 | 2.0 | 600 | 1.5078 |
| 1.5267 | 3.0 | 900 | 1.4971 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.10.0+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
cometrain/neurotitle-rugpt3-small | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6886160714285715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 |
| 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 |
| 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Connorvr/BrightBot-small | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-10T15:27:43Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6777
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1203 | 1.0 | 145 | 3.7695 |
| 3.9141 | 2.0 | 290 | 3.6953 |
| 3.9057 | 3.0 | 435 | 3.6777 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.10.0+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Connorvr/TeachingGen | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 |
| 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 |
| 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ConstellationBoi/Oop | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T15:58:55Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- PhucLe/autotrain-data-LRO-tratify-data
co2_eq_emissions:
emissions: 2.223269909428516
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1237947025
- CO2 Emissions (in grams): 2.2233
## Validation Metrics
- Loss: 0.392
- Accuracy: 0.869
- Macro F1: 0.868
- Micro F1: 0.869
- Weighted F1: 0.868
- Macro Precision: 0.871
- Micro Precision: 0.869
- Weighted Precision: 0.871
- Macro Recall: 0.869
- Micro Recall: 0.869
- Weighted Recall: 0.869
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/PhucLe/autotrain-LRO-tratify-data-1237947025
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("PhucLe/autotrain-LRO-tratify-data-1237947025", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("PhucLe/autotrain-LRO-tratify-data-1237947025", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
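
# Illustrative extra step (not from the original card): map the logits to a label name
predicted_class = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])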
``` |
Contrastive-Tension/BERT-Base-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tuto-bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9380178985747432
- name: Recall
type: recall
value: 0.9525412319084483
- name: F1
type: f1
value: 0.9452237808951236
- name: Accuracy
type: accuracy
value: 0.9866809913463237
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tuto-bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0827
- Precision: 0.9380
- Recall: 0.9525
- F1: 0.9452
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0218 | 1.0 | 1756 | 0.0714 | 0.9372 | 0.9524 | 0.9447 | 0.9862 |
| 0.0123 | 2.0 | 3512 | 0.0761 | 0.9347 | 0.9510 | 0.9428 | 0.9859 |
| 0.0063 | 3.0 | 5268 | 0.0827 | 0.9380 | 0.9525 | 0.9452 | 0.9867 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
Contrastive-Tension/BERT-Base-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | 2022-08-10T16:08:27Z | ---
library_name: sklearn
tags:
- sklearn
- tabular-classification
widget:
structuredData:
area error:
- 30.29
- 96.05
- 48.31
compactness error:
- 0.01911
- 0.01652
- 0.01484
concave points error:
- 0.01037
- 0.0137
- 0.01093
concavity error:
- 0.02701
- 0.02269
- 0.02813
fractal dimension error:
- 0.003586
- 0.001698
- 0.002461
mean area:
- 481.9
- 1130.0
- 748.9
mean compactness:
- 0.1058
- 0.1029
- 0.1223
mean concave points:
- 0.03821
- 0.07951
- 0.08087
mean concavity:
- 0.08005
- 0.108
- 0.1466
mean fractal dimension:
- 0.06373
- 0.05461
- 0.05796
mean perimeter:
- 81.09
- 123.6
- 101.7
mean radius:
- 12.47
- 18.94
- 15.46
mean smoothness:
- 0.09965
- 0.09009
- 0.1092
mean symmetry:
- 0.1925
- 0.1582
- 0.1931
mean texture:
- 18.6
- 21.31
- 19.48
perimeter error:
- 2.497
- 5.486
- 3.094
radius error:
- 0.3961
- 0.7888
- 0.4743
smoothness error:
- 0.006953
- 0.004444
- 0.00624
symmetry error:
- 0.01782
- 0.01386
- 0.01397
texture error:
- 1.044
- 0.7975
- 0.7859
worst area:
- 677.9
- 1866.0
- 1156.0
worst compactness:
- 0.2378
- 0.2336
- 0.2394
worst concave points:
- 0.1015
- 0.1789
- 0.1514
worst concavity:
- 0.2671
- 0.2687
- 0.3791
worst fractal dimension:
- 0.0875
- 0.06589
- 0.08019
worst perimeter:
- 96.05
- 165.9
- 124.9
worst radius:
- 14.97
- 24.86
- 19.26
worst smoothness:
- 0.1426
- 0.1193
- 0.1546
worst symmetry:
- 0.3014
- 0.2551
- 0.2837
worst texture:
- 24.64
- 26.58
- 26.0
---
# Model description
This is a DecisionTreeClassifier model trained on the breast cancer dataset.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------|
| ccp_alpha | 0.0 |
| class_weight | |
| criterion | gini |
| max_depth | |
| max_features | |
| max_leaf_nodes | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| random_state | |
| splitter | best |
</details>
### Model Plot
The model plot is below.
(Interactive scikit-learn HTML diagram omitted here; it renders the fitted estimator `DecisionTreeClassifier()`.)
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|----------|
| accuracy | 0.935673 |
| f1 score | 0.935673 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle

# "decision-tree-classifier.pkl" is a placeholder name; point it at your local copy of the pickled model
dtc_pkl_filename = "decision-tree-classifier.pkl"
with open(dtc_pkl_filename, 'rb') as file:
    clf = pickle.load(file)
```
</details>
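As an extra, illustrative check (not part of the original card), the unpickled classifier `clf` from the snippet above can be exercised on the breast-cancer feature layout it was trained on:
```python
from sklearn.datasets import load_breast_cancer

# The classifier expects the 30 standard breast-cancer features in scikit-learn's column order
X, y = load_breast_cancer(return_X_y=True)
print(clf.predict(X[:3]))  # predicted classes for the first three samples
print(clf.score(X, y))     # accuracy over the full dataset (optimistic, since it includes training data)
```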
# Model Card Authors
This model card is written by the following authors:
skops_user
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
bibtex
@inproceedings{...,year={2020}}
```
Confusion Matrix

|
Contrastive-Tension/BERT-Base-Swe-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 126 | 2022-08-10T16:26:55Z | language:
- "List of ISO 639-1 code for your language"
- lang1
- lang2
thumbnail: "///"
tags:
- Conversational
- Conversational
license: "any valid license identifier"
datasets:
- dataset1
- dataset2
metrics:
- metric1
- metric2 |
Contrastive-Tension/BERT-Distil-CT-STSb | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- source_data_nlp
metrics:
- precision
- recall
- f1
model-index:
- name: sd-smallmol-roles-v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data_nlp
type: source_data_nlp
args: SMALL_MOL_ROLES
metrics:
- name: Precision
type: precision
value: 0.9628394473558838
- name: Recall
type: recall
value: 0.9716346153846154
- name: F1
type: f1
value: 0.9672170375687963
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sd-smallmol-roles-v2
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
- Accuracy Score: 0.9995
- Precision: 0.9628
- Recall: 0.9716
- F1: 0.9672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0013 | 1.0 | 1569 | 0.0015 | 0.9995 | 0.9628 | 0.9716 | 0.9672 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.17.0
- Tokenizers 0.12.1
|
Contrastive-Tension/BERT-Distil-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-08-10T16:42:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-bert-yoga-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-bert-yoga-finetuned
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0760
## Model description
More information needed
## Intended uses & limitations
More information needed
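Until more details are added, a minimal fill-mask sketch would look like this; the repo id is a placeholder for the actual hub id of this checkpoint.
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual hub id of this checkpoint
fill_mask = pipeline("fill-mask", model="<namespace>/bert-base-cased-bert-yoga-finetuned")
print(fill_mask("Yoga improves flexibility and [MASK]."))
```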
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4202 | 1.0 | 235 | 2.1511 |
| 2.1798 | 2.0 | 470 | 2.0707 |
| 2.1428 | 3.0 | 705 | 2.0810 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Contrastive-Tension/BERT-Large-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-10T16:52:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9265405847311663
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8401 | 1.0 | 250 | 0.3144 | 0.9085 | 0.9058 |
| 0.2524 | 2.0 | 500 | 0.2133 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Contrastive-Tension/BERT-Large-NLI-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-08-10T17:08:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-cvs-estimation-years-experience
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-cvs-estimation-years-experience
This model is a fine-tuned version of [jhonparra18/bert-base-cased-cv-studio_name-medium](https://huggingface.co/jhonparra18/bert-base-cased-cv-studio_name-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.4494
- Mse: 9.4494
- Mae: 2.0686
- R2: 0.4131
- Accuracy: 0.2586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:--------:|
| No log | 10.34 | 300 | 10.5131 | 10.5131 | 2.2140 | 0.3470 | 0.2759 |
| 3.3802 | 20.69 | 600 | 9.1915 | 9.1915 | 2.0780 | 0.4291 | 0.2759 |
| 3.3802 | 31.03 | 900 | 8.8261 | 8.8261 | 1.9359 | 0.4518 | 0.2931 |
| 0.1613 | 41.38 | 1200 | 9.4494 | 9.4494 | 2.0686 | 0.4131 | 0.2586 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Coolhand/Abuela | [
"en",
"image_restoration",
"superresolution",
"license:mit"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T17:36:23Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: 4-way-detection-prop-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4-way-detection-prop-16
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Corvus/DialoGPT-medium-CaptainPrice | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-10T17:56:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 273.70 +/- 23.38
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
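Until the card is completed, here is a minimal, illustrative sketch of loading and evaluating such a checkpoint with `huggingface_sb3`; the repo id and file name are placeholders.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and file name; replace with the actual values for this checkpoint
checkpoint = load_from_hub(repo_id="<namespace>/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```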
|
CouchCat/ma_mlc_v7_distil | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"multi-label",
"license:mit"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | 2022-08-10T18:14:51Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
language:
- fa
widget:
- text: 'در روزهای گذشته انتشار تصاویر کودکان و نوجوانانی که از والدینشان جدا شده و در اردوگاههای موقت در ایالتهای مرزی آمریکا نگهداری میشوند، انتقادات گستردهای را در داخل و خارج آمریکا از سیاست ضد مهاجرتی ترامپ، برانگیخته است. به گزارش این اعتراضات به حدی است که حتی "ملانیا ترامپ" بانوی اول آمریکا نیز نتوانست از آن دفاع کند و این اقدام را محکوم کرد. ماجرا از این قرار است که در یک ماه گذشته دولت آمریکا با ارایه تفسیر موسعی از قانون مهاجرت به آمریکا بیش از 2200 فرزند را از والدین مهاجر آنها جدا کرد. بر اساس این تفسیر از قانون ورود غیرقانونی به خاک ایالات متحده آمریکا جرم محسوب میشود و به همین خاطر افرادی که به صورت غیرقانونی وارد خاک آمریکا شدهاند برای محاکمه دستگیر میشوند و فرزندانشان از آنها جدا میشوند. این جداسازی و انتشار تصاویری از صدها کودک و نوجوان و حتی فرزندان خردسال زیر 2 سال که از والدین خود جدا شده اند صدای بسیاری را در آمریکا و جهان درآورده است. گفتنی است جداسازی والدین و فرزندان بر مبنای قانون جدیدی انجام نمیشود بلکه دولت ترامپ تلاش دارد قانونی را که در دورههای گذشته نسبت به آن اغماض میشد، "سفت و سخت" به مورد اجرا بگذارد؛ تنها تغییری که دولت ترامپ نسبت به دولت اوباما درباره قانون دارد، "تفسیر موسع" آن از "وقوع جرم" از سوی مهاجران غیرقانونی است، بدین گونه که دولت ترامپ نفس ورود غیرقانونی به خاک آمریکا را جرم انگاشته و مهاجران را برای محاکمه و اخراج دستگیر میکند اما در دولتهای گذشته نسبت به این ورود با اغماض بیشتری برخورد میشد و تنها در صورتی که مهاجرغیرقانونی اقدامی مجرمانه را در خاک آمریکا مرتکب میشد، نسبت به دستگیری و اخراج فرد مزبور اقدام میشد. دموکراتها این اقدام دولت ترامپ را غیراخلاقی و "شیطانی" توصیف کردهاند و حتی "لورا بوش" همسر "جورج دبلیو بوش" رییس جمهور اسبق آمریکا با اعلام انزجار از این اقدام، گفته طاقت دیدن صحنه ضجه و گریه کودکان خردسال پس از جدایی آنها از والدینشان را ندارد. این اعتراضات در حالی است که ترامپ از این اقدام دفاع کرده و گفته راهی جز این نیست. او دیروز بار دیگر با دفاع از سیاست جدید دولت آمریکا برضد مهاجران گفت که او اجازه نخواهد داد آمریکا نیز مثل اروپا به "اردوگاه پناهجویان" تبدیل شود. در روزهای گذشته در برخی شهرهای آمریکا تظاهراتهایی بر ضد جداسازی فرزندان و والدین مهاجر برگزار شده است و فعالان اجتماعی و حقوق بشر در آمریکا به این اقبدام به شدت اعتراض کرده و خوستار توقف اجرای این طرح شدهاند. "جف سشنز" وزیر دادگستری کابینه ترامپ هم در واکنش به مقایسه این طرح با اقدامات دوره "آلمان نازی" - در جداسازی والدین از فرزندان در اردوگاههای مرگ یا کار اجباری- گفته است این طرح به هیچ وجه قابل مقایسه با اقدامات دوره آلمان نازی نیست. پس از اینکه "مایکل هایدن" رییس سابق سازمان اطلاعات مرکزی آمریکا (سیا) در توییتر خود این اقدام را با اردوگاههای آلمان نازی مقایسه کرد و به شدت آن را محکوم کرد وزیر دادگستری کابینه ترامپ دیروز در مصاحبهای با فاکسنیوز با دفاع از اجرای سختگیرانه قانون ضد مهاجرت غیرقانونی به خاک آمریکا این مقایسه را "بزرگنمایی" دانست چون به گفته او: در آلمان نازی، جلوی خروج یهودیان از کشور را میگرفتند." کنگره آمریکا قرار است در هفته جاری درباره یک قانون جدید مهاجرتی به تصمیمگیری برسد.'
- text: 'وزرای خارجه اسراییل و ایران در دومین سالگرد شهادت سردار سپهبد "قاسم سلیمانی" در توییتر جدال کردند. به گزارش ، در پی توییت اخیر "حسین امیر عبدالهیان" وزیر امور خارجه جمهوری اسلامی ایران درباره تهدیدات رژیم اسراییل به اقدام نظامی علیه ایران، "یائیر لاپید" وزیر خارجه اسراییل امروز از طریق توییتر با بازنشر توییت امیرعبدالهیان به توییت او پاسخ داد. امیر عبدالهیان دیروز در توییتی با اشاره به مصاحبه اخیر لاپید مبنی بر توانایی غیرقابل تصور اسراییل برای حمله نظامی علیه ایران نوشته بود:" اظهارات آشفته وزير خارجه رژيم جعلی اسراییل در قبال ملت بزرگ ایران، مصداق این ضرب المثل معروف ایرانیست که« شتر در خواب بیند پنبه دانه، گهی لپ لپ خورد گه دانه دانه». با اقتدار و عقلانیت از حقوق، منافع وپیشرفت ملت دفاع می کنیم. صهیونیسم جایی در آینده جهان ندارد." لاپید روز جمعه در مصاحبه ای گفته بود رژیم تل آویو توانایی هایی برای اقدام نظامی علیه ایران دارد که در مخیله هیچ کسی نمی گنجد و اگر منافع تل آویو از جانب ایران تهدید شود، قادر است به صورت یکجانبه علیه ایران اقدام کند. امروز لاپید با بازنشر توییت امیرعبدالهیان که در واکنش به اظهارات تهدید آمیز اخیر او علیه ایران نوشته بود در رشته توییتی نوشت:" رژیم افراطی ایران اسراییل را تهدید به نابودی می کند، اما همچنان در این نبرد شکست خواهد خورد. حکومت شکست خورده ایران این کشور را از درون ویران می کند. به قول شاعر ایرانی سعدی: « اصل بد نیکو نگردد زانکه بنیادش بد است. »." او در توییتی دیگر افزود:" ایرانیان باید بدانند که رژیم آنها مسبب زندگی فلاکت بار آنهاست. دولت اسراییل قوی است و اجازه نخواهد داد که شهروندانش آسیب ببینند."'
metrics:
- rouge
model-index:
- name: mt5-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-v1
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the Persian News dataset.
It achieves the following results on the evaluation set:
- Loss: 1.087988
- Rouge1: 1.2887
- Rouge2: 0.1861
- Rougel: 1.2862
- Rougelsum: 1.2818
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
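As this card does not include the training script, the following is only a rough sketch of how the listed hyperparameters would translate into 🤗 `Seq2SeqTrainingArguments`; the `output_dir` value is a placeholder, not taken from this card.
```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-finetuned",
    learning_rate=5.6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                   # "Native AMP" mixed precision
    predict_with_generate=True,  # needed so ROUGE can be computed during evaluation
)
```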
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.223400 | 1 | 20437 | 1.153162 | 1.0624 | 0.1351 | 1.0668 | 1.0740 |
| 1.202900 | 2 | 40874 | 1.086163 | 1.1579 | 0.1426 | 1.1724 | 1.1599 |
| 1.173500 | 3 | 61311 | 1.087988 | 1.2887 | 0.1861 | 1.2862 | 1.2818 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
CouchCat/ma_ner_v7_distil | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"ner",
"license:mit",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2022-08-10T18:31:11Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1019713132023992320/fkvVczkz_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1514451221054173189/BWP3wqQj_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Humongous Ape MP & Fake Showbiz News & wint but Al & Ninja Sex Party but AI & gpt up a guy(?) & MORTIMUS COWBOY: The Bastard of Diapers</div>
<div style="text-align: center; font-size: 14px;">@apesahoy-dril9999-dril_gpt2-fakeshowbiznews-gptupaguy-nsp_gpt2</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Humongous Ape MP & Fake Showbiz News & wint but Al & Ninja Sex Party but AI & gpt up a guy(?) & MORTIMUS COWBOY: The Bastard of Diapers.
| Data | Humongous Ape MP | Fake Showbiz News | wint but Al | Ninja Sex Party but AI | gpt up a guy(?) | MORTIMUS COWBOY: The Bastard of Diapers |
| --- | --- | --- | --- | --- | --- | --- |
| Tweets downloaded | 3246 | 3250 | 3229 | 692 | 3250 | 3249 |
| Retweets | 198 | 1 | 47 | 13 | 16 | 0 |
| Short tweets | 609 | 1 | 57 | 44 | 10 | 142 |
| Tweets kept | 2439 | 3248 | 3125 | 635 | 3224 | 3107 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2kz7wo92/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-dril9999-dril_gpt2-fakeshowbiznews-gptupaguy-nsp_gpt2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zpt8x6i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zpt8x6i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/apesahoy-dril9999-dril_gpt2-fakeshowbiznews-gptupaguy-nsp_gpt2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
CouchCat/ma_sa_v7_distil | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"sentiment-analysis",
"license:mit"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38 | 2022-08-10T18:31:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: nlp_bert_emo_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nlp_bert_emo_classifier
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
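The preprocessing code is not included in this card. Assuming the standard 🤗 `datasets`/`transformers` workflow for the emotion dataset and the `bert-base-uncased` tokenizer, it would look roughly like the sketch below; the padding strategy is an assumption.
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the emotion dataset and the tokenizer of the base checkpoint named above.
dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncation is required for BERT's 512-token limit; max_length padding is an assumption.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized_dataset = dataset.map(tokenize, batched=True)
```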
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8887 | 0.25 | 500 | 0.4212 |
| 0.3216 | 0.5 | 1000 | 0.3192 |
| 0.2649 | 0.75 | 1500 | 0.2746 |
| 0.2535 | 1.0 | 2000 | 0.2573 |
| 0.163 | 1.25 | 2500 | 0.2157 |
| 0.1868 | 1.5 | 3000 | 0.2118 |
| 0.1258 | 1.75 | 3500 | 0.2319 |
| 0.1726 | 2.0 | 4000 | 0.1853 |
| 0.1035 | 2.25 | 4500 | 0.2146 |
| 0.1135 | 2.5 | 5000 | 0.2207 |
| 0.1117 | 2.75 | 5500 | 0.2496 |
| 0.1145 | 3.0 | 6000 | 0.2482 |
| 0.0726 | 3.25 | 6500 | 0.2654 |
| 0.0828 | 3.5 | 7000 | 0.2622 |
| 0.0817 | 3.75 | 7500 | 0.2775 |
| 0.0689 | 4.0 | 8000 | 0.2791 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.10.3
|
Coverage/sakurajimamai | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T18:54:55Z | ---
language: en
thumbnail: http://www.huggingtweets.com/apesahoy-dril-dril_gpt2-fakeshowbiznews-gptupaguy-nsp_gpt2/1660158001400/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1019713132023992320/fkvVczkz_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Humongous Ape MP & Fake Showbiz News & wint & wint but Al & Ninja Sex Party but AI & gpt up a guy(?)</div>
<div style="text-align: center; font-size: 14px;">@apesahoy-dril-dril_gpt2-fakeshowbiznews-gptupaguy-nsp_gpt2</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Humongous Ape MP & Fake Showbiz News & wint & wint but Al & Ninja Sex Party but AI & gpt up a guy(?).
| Data | Humongous Ape MP | Fake Showbiz News | wint | wint but Al | Ninja Sex Party but AI | gpt up a guy(?) |
| --- | --- | --- | --- | --- | --- | --- |
| Tweets downloaded | 3246 | 3250 | 3231 | 3229 | 692 | 3250 |
| Retweets | 198 | 1 | 499 | 47 | 13 | 16 |
| Short tweets | 609 | 1 | 288 | 57 | 44 | 10 |
| Tweets kept | 2439 | 3248 | 2444 | 3125 | 635 | 3224 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ocv4vat/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-dril-dril_gpt2-fakeshowbiznews-gptupaguy-nsp_gpt2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gb80yim) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gb80yim/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/apesahoy-dril-dril_gpt2-fakeshowbiznews-gptupaguy-nsp_gpt2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Coyotl/DialoGPT-test-last-arthurmorgan | [
"conversational"
] | conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T19:48:13Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: categorization-finetuned-20220721-164940-distilled-20220810-185342
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# categorization-finetuned-20220721-164940-distilled-20220810-185342
This model is a fine-tuned version of [carted-nlp/categorization-finetuned-20220721-164940](https://huggingface.co/carted-nlp/categorization-finetuned-20220721-164940) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Accuracy: 0.87
- F1: 0.8690
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 314
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 30.0
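The card does not state the distillation objective used to train this student against the teacher checkpoint above. As a hedged illustration only, a common soft-label distillation loss blends temperature-scaled KL divergence with cross-entropy; the temperature and weighting below are illustrative assumptions, not values taken from this run.
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```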
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 0.269 | 0.56 | 2500 | 0.1280 | 0.7547 | 0.7461 |
| 0.125 | 1.12 | 5000 | 0.1052 | 0.7960 | 0.7916 |
| 0.1079 | 1.69 | 7500 | 0.0950 | 0.8132 | 0.8102 |
| 0.0992 | 2.25 | 10000 | 0.0898 | 0.8216 | 0.8188 |
| 0.0938 | 2.81 | 12500 | 0.0859 | 0.8294 | 0.8268 |
| 0.0891 | 3.37 | 15000 | 0.0828 | 0.8349 | 0.8329 |
| 0.0863 | 3.94 | 17500 | 0.0806 | 0.8391 | 0.8367 |
| 0.0834 | 4.5 | 20000 | 0.0788 | 0.8417 | 0.8400 |
| 0.081 | 5.06 | 22500 | 0.0774 | 0.8449 | 0.8430 |
| 0.0792 | 5.62 | 25000 | 0.0754 | 0.8475 | 0.8460 |
| 0.0778 | 6.19 | 27500 | 0.0749 | 0.8489 | 0.8474 |
| 0.0758 | 6.75 | 30000 | 0.0738 | 0.8517 | 0.8502 |
| 0.0745 | 7.31 | 32500 | 0.0729 | 0.8531 | 0.8519 |
| 0.0733 | 7.87 | 35000 | 0.0720 | 0.8544 | 0.8528 |
| 0.072 | 8.43 | 37500 | 0.0714 | 0.8559 | 0.8546 |
| 0.0716 | 9.0 | 40000 | 0.0707 | 0.8565 | 0.8554 |
| 0.0701 | 9.56 | 42500 | 0.0704 | 0.8574 | 0.8558 |
| 0.0693 | 10.12 | 45000 | 0.0700 | 0.8581 | 0.8569 |
| 0.0686 | 10.68 | 47500 | 0.0690 | 0.8600 | 0.8588 |
| 0.0675 | 11.25 | 50000 | 0.0690 | 0.8605 | 0.8593 |
| 0.0673 | 11.81 | 52500 | 0.0682 | 0.8614 | 0.8603 |
| 0.0663 | 12.37 | 55000 | 0.0682 | 0.8619 | 0.8606 |
| 0.0657 | 12.93 | 57500 | 0.0675 | 0.8634 | 0.8624 |
| 0.0648 | 13.5 | 60000 | 0.0674 | 0.8636 | 0.8625 |
| 0.0647 | 14.06 | 62500 | 0.0668 | 0.8644 | 0.8633 |
| 0.0638 | 14.62 | 65000 | 0.0669 | 0.8648 | 0.8635 |
| 0.0634 | 15.18 | 67500 | 0.0665 | 0.8654 | 0.8643 |
| 0.063 | 15.74 | 70000 | 0.0663 | 0.8664 | 0.8654 |
| 0.0623 | 16.31 | 72500 | 0.0662 | 0.8663 | 0.8652 |
| 0.0622 | 16.87 | 75000 | 0.0657 | 0.8669 | 0.8660 |
| 0.0615 | 17.43 | 77500 | 0.0658 | 0.8670 | 0.8660 |
| 0.0616 | 17.99 | 80000 | 0.0655 | 0.8676 | 0.8667 |
| 0.0608 | 18.56 | 82500 | 0.0653 | 0.8683 | 0.8672 |
| 0.0606 | 19.12 | 85000 | 0.0653 | 0.8679 | 0.8669 |
| 0.0602 | 19.68 | 87500 | 0.0648 | 0.8690 | 0.8680 |
| 0.0599 | 20.24 | 90000 | 0.0650 | 0.8688 | 0.8677 |
| 0.0598 | 20.81 | 92500 | 0.0647 | 0.8689 | 0.8680 |
| 0.0592 | 21.37 | 95000 | 0.0647 | 0.8692 | 0.8681 |
| 0.0591 | 21.93 | 97500 | 0.0646 | 0.8698 | 0.8688 |
| 0.0587 | 22.49 | 100000 | 0.0645 | 0.8699 | 0.8690 |
| 0.0586 | 23.05 | 102500 | 0.0644 | 0.8699 | 0.8690 |
| 0.0583 | 23.62 | 105000 | 0.0644 | 0.8699 | 0.8690 |
| 0.058 | 24.18 | 107500 | 0.0642 | 0.8703 | 0.8693 |
| 0.058 | 24.74 | 110000 | 0.0642 | 0.8704 | 0.8694 |
| 0.0578 | 25.3 | 112500 | 0.0641 | 0.8703 | 0.8693 |
| 0.0576 | 25.87 | 115000 | 0.0641 | 0.8708 | 0.8699 |
| 0.0573 | 26.43 | 117500 | 0.0641 | 0.8708 | 0.8698 |
| 0.0574 | 26.99 | 120000 | 0.0639 | 0.8711 | 0.8702 |
| 0.0571 | 27.55 | 122500 | 0.0640 | 0.8711 | 0.8701 |
| 0.0569 | 28.12 | 125000 | 0.0639 | 0.8711 | 0.8702 |
| 0.0569 | 28.68 | 127500 | 0.0639 | 0.8712 | 0.8703 |
| 0.057 | 29.24 | 130000 | 0.0639 | 0.8712 | 0.8703 |
| 0.0566 | 29.8 | 132500 | 0.0638 | 0.8713 | 0.8704 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
Coyotl/DialoGPT-test2-arthurmorgan | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-08-10T19:03:09Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base_fold_1_binary_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_fold_1_binary_v1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4984
- F1: 0.8339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
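The card reports F1 as its only metric; below is a minimal sketch of a `compute_metrics` callback that would produce such a score. The `binary` averaging mode is an assumption, since the card does not state how F1 was computed.
```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Binary task per the model name; "binary" averaging is an assumption.
    return {"f1": f1_score(labels, predictions, average="binary")}
```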
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 288 | 0.3819 | 0.8117 |
| 0.4108 | 2.0 | 576 | 0.3696 | 0.8281 |
| 0.4108 | 3.0 | 864 | 0.4890 | 0.8343 |
| 0.2261 | 4.0 | 1152 | 0.7605 | 0.8298 |
| 0.2261 | 5.0 | 1440 | 0.7754 | 0.8307 |
| 0.1404 | 6.0 | 1728 | 0.7650 | 0.8174 |
| 0.0962 | 7.0 | 2016 | 0.8539 | 0.8315 |
| 0.0962 | 8.0 | 2304 | 1.0770 | 0.8263 |
| 0.0433 | 9.0 | 2592 | 1.1450 | 0.8292 |
| 0.0433 | 10.0 | 2880 | 1.1700 | 0.8205 |
| 0.0344 | 11.0 | 3168 | 1.2376 | 0.8241 |
| 0.0344 | 12.0 | 3456 | 1.2688 | 0.8329 |
| 0.0219 | 13.0 | 3744 | 1.3276 | 0.8283 |
| 0.0123 | 14.0 | 4032 | 1.2930 | 0.8320 |
| 0.0123 | 15.0 | 4320 | 1.4631 | 0.8266 |
| 0.0177 | 16.0 | 4608 | 1.4326 | 0.8270 |
| 0.0177 | 17.0 | 4896 | 1.4770 | 0.8334 |
| 0.0053 | 18.0 | 5184 | 1.5972 | 0.8214 |
| 0.0053 | 19.0 | 5472 | 1.5331 | 0.8327 |
| 0.0045 | 20.0 | 5760 | 1.5487 | 0.8359 |
| 0.0086 | 21.0 | 6048 | 1.4610 | 0.8315 |
| 0.0086 | 22.0 | 6336 | 1.4685 | 0.8353 |
| 0.0071 | 23.0 | 6624 | 1.4933 | 0.8358 |
| 0.0071 | 24.0 | 6912 | 1.4898 | 0.8310 |
| 0.0022 | 25.0 | 7200 | 1.4984 | 0.8339 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Craak/GJ0001 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-08-10T19:50:13Z | ---
tags:
- document-understanding
- endpoints-template
library_name: generic
---
# Deploy a Space as inference Endpoint
_This is a fork of the [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/spaces/naver-clova-ix/donut-base-finetuned-cord-v2) Space._
This repository implements a custom container for 🤗 Inference Endpoints using a Gradio Space.
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task and provide a custom image.
* CPU image: `philschmi/gradio-api:cpu`
* GPU image: `philschmi/gradio-api:gpu`
* PORT: `7860`
* ~~Health Route: `/`~~ (the default is used)
Also make sure to add `server_name="0.0.0.0"` to your `launch()` call so that requests are proxied correctly.
If you want to use the UI with the Inference Endpoint, you have to select `public` as the endpoint type and add [auth through Gradio](https://gradio.app/docs/#launch-header).
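For reference, a minimal sketch of a matching `launch()` call is shown below; the `gr.Interface` object and the `auth` credentials are placeholders, since the real app code lives in the forked Space.
```python
import gradio as gr

# Trivial placeholder interface; the forked Space defines its own demo object.
demo = gr.Interface(fn=lambda x: x, inputs="text", outputs="text")

demo.launch(
    server_name="0.0.0.0",      # listen on all interfaces so the endpoint proxy can reach the app
    server_port=7860,           # matches the PORT configured above
    auth=("user", "password"),  # placeholder credentials, only needed when exposing the UI publicly
)
```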
### Example API Request Payload
Get an image you want to use, e.g.
```bash
!wget https://datasets-server.huggingface.co/assets/naver-clova-ix/cord-v2/--/naver-clova-ix--cord-v2/train/0/image/image.jpg
```
run inference
```python
import requests as r
import base64
ENDPOINT_URL = ""
HF_TOKEN = ""
def predict(path_to_image: str = None):
    # Encode the image as a base64 data URI, which is the format the Gradio API expects.
    ext = path_to_image.split('.')[-1]
    prefix = f'data:image/{ext};base64,'
    with open(path_to_image, 'rb') as f:
        img = f.read()
    payload = {"data": [prefix + base64.b64encode(img).decode('utf-8')]}
    # Call the /api/predict route of the endpoint with the bearer token.
    response = r.post(
        f"{ENDPOINT_URL}/api/predict", headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    if response.status_code == 200:
        return response.json()
    else:
        raise Exception(f"Error: {response.status_code}")

prediction = predict(path_to_image="image.jpg")
``` |
Craig/paraphrase-MiniLM-L6-v2 | [
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,026 | 2022-08-10T20:47:52Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1479595267800322048/Aqqb82wz_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1019713132023992320/fkvVczkz_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Humongous Ape MP & ste 🍊 & Fake Showbiz News & Ninja Sex Party but AI & gpt up a guy(?) & waint</div>
<div style="text-align: center; font-size: 14px;">@apesahoy-chai_ste-fakeshowbiznews-gptupaguy-nsp_gpt2-powerdril_gpt2</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Humongous Ape MP & ste 🍊 & Fake Showbiz News & Ninja Sex Party but AI & gpt up a guy(?) & waint.
| Data | Humongous Ape MP | ste 🍊 | Fake Showbiz News | Ninja Sex Party but AI | gpt up a guy(?) | waint |
| --- | --- | --- | --- | --- | --- | --- |
| Tweets downloaded | 3245 | 3193 | 3250 | 692 | 3250 | 103 |
| Retweets | 196 | 302 | 1 | 13 | 16 | 11 |
| Short tweets | 609 | 488 | 1 | 44 | 10 | 2 |
| Tweets kept | 2440 | 2403 | 3248 | 635 | 3224 | 90 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2r8q1li1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-chai_ste-fakeshowbiznews-gptupaguy-nsp_gpt2-powerdril_gpt2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/e3lx58vb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/e3lx58vb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/apesahoy-chai_ste-fakeshowbiznews-gptupaguy-nsp_gpt2-powerdril_gpt2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Crives/distilbert-base-uncased-finetuned-emotion | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2022-08-10T21:54:40Z | ---
tags:
- conversational
---
# Guin DialoGPT model |
CurtisBowser/DialoGPT-medium-sora-three | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-0.4-0.25
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 3.2179
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-0.4-0.25
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8561
- Bleu: 3.2179
- Gen Len: 41.2356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
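The card does not state which BLEU implementation the training script used; `sacrebleu` is a common choice, and the toy sketch below shows how a corpus-level score of this kind is computed. The sentences are placeholders; in practice the hypotheses would come from the fine-tuned checkpoint on the wmt16 ro-en test split.
```python
import sacrebleu

# Placeholder sentences standing in for model outputs and wmt16 ro-en references.
hypotheses = ["the minister arrived in bucharest on monday"]
references = [["the minister arrived in bucharest on monday"]]  # one reference stream

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```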
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DJSammy/bert-base-danish-uncased_BotXO-ai | [
"pytorch",
"jax",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"transformers",
"bert",
"masked-lm",
"license:cc-by-4.0",
"fill-mask"
] | fill-mask | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sem_eval_2018_task_1
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-sem_eval-english
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval_2018_task_1
type: sem_eval_2018_task_1
config: subtask5.english
split: train
args: subtask5.english
metrics:
- name: F1
type: f1
value: 0.7113731269958242
- name: Accuracy
type: accuracy
value: 0.28103837471783294
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-english
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the sem_eval_2018_task_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3131
- F1: 0.7114
- Roc Auc: 0.8046
- Accuracy: 0.2810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
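Subtask 5 of SemEval-2018 Task 1 is a multi-label problem (hence the F1 / ROC AUC / accuracy metrics above), so the model was presumably trained with a multi-label head. The sketch below shows that setup and sigmoid thresholding at inference; the example sentence and the 0.5 threshold are assumptions, and `num_labels=11` corresponds to the 11 emotion labels of the subtask.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=11,                              # 11 emotion labels in subtask 5
    problem_type="multi_label_classification",  # BCE-with-logits loss during training
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("I am so happy about this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Multi-label inference: per-label sigmoid, then a threshold (0.5 is an assumption).
predictions = (torch.sigmoid(logits) > 0.5).int()
```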
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4067 | 1.0 | 855 | 0.3205 | 0.6756 | 0.7766 | 0.2709 |
| 0.2828 | 2.0 | 1710 | 0.3062 | 0.7058 | 0.7973 | 0.3014 |
| 0.239 | 3.0 | 2565 | 0.3122 | 0.7100 | 0.8038 | 0.2810 |
| 0.2145 | 4.0 | 3420 | 0.3131 | 0.7114 | 0.8046 | 0.2810 |
| 0.1888 | 5.0 | 4275 | 0.3167 | 0.7096 | 0.8022 | 0.2844 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
alexandrainst/da-emotion-classification-base | [
"pytorch",
"tf",
"bert",
"text-classification",
"da",
"transformers",
"license:cc-by-sa-4.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 837 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2057
- Accuracy: 0.9255
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8084 | 1.0 | 250 | 0.2883 | 0.9125 | 0.9110 |
| 0.2371 | 2.0 | 500 | 0.2057 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.13.2
|
Danbi/distilroberta-base-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- bg
- mk
- multilingual
license: cc0-1.0
tags:
- BERTovski
- MaCoCu
---
# Model description
**BERTovski** is a large pre-trained language model trained on Bulgarian and Macedonian texts. It was trained from scratch using the RoBERTa architecture. It was developed as part of the [MaCoCu](https://macocu.eu/) project. The main developer is [Rik van Noord](https://www.rikvannoord.nl/) from the University of Groningen.
BERTovski was trained on 74GB of text, which is equal to just over 7 billion tokens. It was trained for 300,000 steps with a batch size of 2,048, which was approximately 30 epochs.
The training and fine-tuning procedures are described in detail on our [Github repo](https://github.com/macocu/LanguageModels). We aim to train this model for even longer, so keep an eye out for newer versions!
# How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("RVN/BERTovski")
model = AutoModel.from_pretrained("RVN/BERTovski") # PyTorch
model = TFAutoModel.from_pretrained("RVN/BERTovski") # Tensorflow
```
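As a quick sanity check of the pretraining objective, the model can also be queried through a `fill-mask` pipeline, assuming the uploaded checkpoint includes the masked-language-modelling head; the Bulgarian example sentence is an illustrative placeholder, and `tokenizer.mask_token` is used so that no particular mask symbol is assumed.
```python
from transformers import pipeline, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("RVN/BERTovski")
# Assumes the checkpoint ships with its pretraining MLM head.
fill_mask = pipeline("fill-mask", model="RVN/BERTovski", tokenizer=tokenizer)

# Illustrative sentence: "Sofia is the capital of <mask>."
print(fill_mask(f"София е столицата на {tokenizer.mask_token}."))
```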
# Data
For training, we used all Bulgarian and Macedonian data that was present in the [MaCoCu](https://macocu.eu/), Oscar, mc4 and Wikipedia corpora. In a manual analysis we found that for Oscar and mc4, if the data did not come from the corresponding domain (.bg or .mk), it was often (badly) machine translated. Therefore, we opted to only use data that originally came from a .bg or .mk domain.
After de-duplicating the data, we were left with a total of 54.5 GB of Bulgarian and 9 GB of Macedonian text. Since there was quite a bit more Bulgarian data, we simply doubled the Macedonian data during training. We trained a shared vocabulary of 32,000 pieces on a subset of the data in which the Bulgarian/Macedonian split was 50/50.
# Benchmark performance
We tested the performance of BERTovski on benchmarks for XPOS, UPOS and NER. For Bulgarian, we used the data from the [Universal Dependencies](https://universaldependencies.org/) project. For Macedonian, we used the data sets created in the [babushka-bench](https://github.com/clarinsi/babushka-bench/) project. We also tested on a Google-translated (Bulgarian) and human-translated (Macedonian) version of the COPA data set (for details see our [Github repo](https://github.com/RikVN/COPA)). We compare performance to the strong multilingual models XLMR-base and XLMR-large. For details regarding the fine-tuning procedure you can check out our [Github](https://github.com/macocu/LanguageModels).
Scores are averages of three runs, except for COPA, for which we use 10 runs. We use the same hyperparameter settings for all models for UPOS/XPOS/NER, for COPA we optimized the learning rate on the dev set.
## Bulgarian
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **NER** | **NER** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:-------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 99.2 | 99.4 | 98.0 | 98.3 | 93.2 | 92.9 | 56.9 |
| **XLM-R-large** | 99.3 | 99.4 | 97.4 | 97.7 | 93.7 | 93.5 | 53.1 |
| **BERTovski** | 98.8 | 99.1 | 97.6 | 97.8 | 93.5 | 93.3 | 51.7 |
## Macedonian
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **NER** | **NER** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:-------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 98.3 | 98.6 | 97.3 | 97.1 | 92.8 | 94.8 | 55.3 |
| **XLM-R-large** | 98.3 | 98.7 | 97.7 | 97.5 | 93.3 | 95.1 | 52.5 |
| **BERTovski** | 97.8 | 98.1 | 96.4 | 96.0 | 92.8 | 94.6 | 51.8 |
# Acknowledgements
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). The authors received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341 (MaCoCu).
# Citation
If you use this model, please cite the following paper:
```bibtex
@inproceedings{non-etal-2022-macocu,
title = "{M}a{C}o{C}u: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages",
author = "Ba{\~n}{\'o}n, Marta and
Espl{\`a}-Gomis, Miquel and
Forcada, Mikel L. and
Garc{\'\i}a-Romero, Cristian and
Kuzman, Taja and
Ljube{\v{s}}i{\'c}, Nikola and
van Noord, Rik and
Sempere, Leopoldo Pla and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Rupnik, Peter and
Suchomel, V{\'\i}t and
Toral, Antonio and
van der Werff, Tobias and
Zaragoza, Jaume",
booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
month = jun,
year = "2022",
address = "Ghent, Belgium",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2022.eamt-1.41",
pages = "303--304"
}
``` |
DataikuNLP/paraphrase-multilingual-MiniLM-L12-v2 | [
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,517 | null | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-0.05-1
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 6.997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-0.05-1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8106
- Bleu: 6.997
- Gen Len: 46.2551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|