modelId (stringlengths 4-81) | tags (sequence) | pipeline_tag (stringclasses, 17 values) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (stringlengths 51-438k) |
---|---|---|---|---|---|---|
AnonymousSub/declutr-model-emanuals | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- generated_from_trainer
model-index:
- name: kcbert-large-finetuned-unsmile
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kcbert-large-finetuned-unsmile
This model is a fine-tuned version of [beomi/kcbert-large](https://huggingface.co/beomi/kcbert-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1240
- Lrap: 0.8816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Lrap |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.99 | 58 | 0.2090 | 0.8098 |
| No log | 1.99 | 116 | 0.1386 | 0.8707 |
| No log | 2.99 | 174 | 0.1263 | 0.8795 |
| No log | 3.99 | 232 | 0.1232 | 0.8823 |
| No log | 4.99 | 290 | 0.1240 | 0.8816 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
widget:
- text: "PROCEDURE: Chest xray. COMPARISON: last seen on 1/1/2020 and also record dated of March 1st, 2019. FINDINGS: patchy airspace opacities. IMPRESSION: The results of the chest xray of January 1 2020 are the most concerning ones. The patient was transmitted to another service of UH Medical Center under the responsability of Dr. Perez. We used the system MedClinical data transmitter and sent the data on 2/1/2020, under the ID 5874233. We received the confirmation of Dr Perez. He is reachable at 567-493-1234."
- text: "Dr. Curt Langlotz chose to schedule a meeting on 06/23."
tags:
- token-classification
- sequence-tagger-model
- pytorch
- transformers
- pubmedbert
- uncased
- radiology
- biomedical
datasets:
- radreports
language:
- en
license: mit
---
The Stanford de-identifier was trained on a variety of radiology and biomedical documents with the goal of automating the de-identification process while reaching accuracy satisfactory for production use. The associated manuscript is cited below.
Associated GitHub repo: https://github.com/MIDRC/Stanford_Penn_Deidentifier
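As a rough usage sketch, the de-identifier can be loaded with the `transformers` token-classification pipeline; the model id below is a placeholder (this card does not state it) and the emitted label names depend on the model's config:

```python
from transformers import pipeline

# Placeholder repo id -- replace with this de-identifier's actual id on the Hub.
deid = pipeline(
    "token-classification",
    model="<deidentifier-repo-id>",
    aggregation_strategy="simple",  # merge sub-word tokens into whole PHI spans
)

report = "Dr. Curt Langlotz chose to schedule a meeting on 06/23."
for entity in deid(report):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```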
## Citation
```bibtex
@article{10.1093/jamia/ocac219,
author = {Chambon, Pierre J and Wu, Christopher and Steinkamp, Jackson M and Adleberg, Jason and Cook, Tessa S and Langlotz, Curtis P},
title = "{Automated deidentification of radiology reports combining transformer and โhide in plain sightโ rule-based methods}",
journal = {Journal of the American Medical Informatics Association},
year = {2022},
month = {11},
abstract = "{To develop an automated deidentification pipeline for radiology reports that detect protected health information (PHI) entities and replaces them with realistic surrogates โhiding in plain sight.โIn this retrospective study, 999 chest X-ray and CT reports collected between November 2019 and November 2020 were annotated for PHI at the token level and combined with 3001 X-rays and 2193 medical notes previously labeled, forming a large multi-institutional and cross-domain dataset of 6193 documents. Two radiology test sets, from a known and a new institution, as well as i2b2 2006 and 2014 test sets, served as an evaluation set to estimate model performance and to compare it with previously released deidentification tools. Several PHI detection models were developed based on different training datasets, fine-tuning approaches and data augmentation techniques, and a synthetic PHI generation algorithm. These models were compared using metrics such as precision, recall and F1 score, as well as paired samples Wilcoxon tests.Our best PHI detection model achieves 97.9 F1 score on radiology reports from a known institution, 99.6 from a new institution, 99.5 on i2b2 2006, and 98.9 on i2b2 2014. On reports from a known institution, it achieves 99.1 recall of detecting the core of each PHI span.Our model outperforms all deidentifiers it was compared to on all test sets as well as human labelers on i2b2 2014 data. It enables accurate and automatic deidentification of radiology reports.A transformer-based deidentification pipeline can achieve state-of-the-art performance for deidentifying radiology reports and other medical documents.}",
issn = {1527-974X},
doi = {10.1093/jamia/ocac219},
url = {https://doi.org/10.1093/jamia/ocac219},
note = {ocac219},
eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocac219/47220191/ocac219.pdf},
}
``` |
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
widget:
- text: "PROCEDURE: Chest xray. COMPARISON: last seen on 1/1/2020 and also record dated of March 1st, 2019. FINDINGS: patchy airspace opacities. IMPRESSION: The results of the chest xray of January 1 2020 are the most concerning ones. The patient was transmitted to another service of UH Medical Center under the responsability of Dr. Perez. We used the system MedClinical data transmitter and sent the data on 2/1/2020, under the ID 5874233. We received the confirmation of Dr Perez. He is reachable at 567-493-1234."
- text: "Dr. Curt Langlotz chose to schedule a meeting on 06/23."
tags:
- token-classification
- sequence-tagger-model
- pytorch
- transformers
- pubmedbert
- uncased
- radiology
- biomedical
datasets:
- radreports
language:
- en
license: mit
---
The Stanford de-identifier was trained on a variety of radiology and biomedical documents with the goal of automating the de-identification process while reaching accuracy satisfactory for production use. The associated manuscript is cited below.
Associated GitHub repo: https://github.com/MIDRC/Stanford_Penn_Deidentifier
## Citation
```bibtex
@article{10.1093/jamia/ocac219,
author = {Chambon, Pierre J and Wu, Christopher and Steinkamp, Jackson M and Adleberg, Jason and Cook, Tessa S and Langlotz, Curtis P},
title = "{Automated deidentification of radiology reports combining transformer and โhide in plain sightโ rule-based methods}",
journal = {Journal of the American Medical Informatics Association},
year = {2022},
month = {11},
abstract = "{To develop an automated deidentification pipeline for radiology reports that detect protected health information (PHI) entities and replaces them with realistic surrogates โhiding in plain sight.โIn this retrospective study, 999 chest X-ray and CT reports collected between November 2019 and November 2020 were annotated for PHI at the token level and combined with 3001 X-rays and 2193 medical notes previously labeled, forming a large multi-institutional and cross-domain dataset of 6193 documents. Two radiology test sets, from a known and a new institution, as well as i2b2 2006 and 2014 test sets, served as an evaluation set to estimate model performance and to compare it with previously released deidentification tools. Several PHI detection models were developed based on different training datasets, fine-tuning approaches and data augmentation techniques, and a synthetic PHI generation algorithm. These models were compared using metrics such as precision, recall and F1 score, as well as paired samples Wilcoxon tests.Our best PHI detection model achieves 97.9 F1 score on radiology reports from a known institution, 99.6 from a new institution, 99.5 on i2b2 2006, and 98.9 on i2b2 2014. On reports from a known institution, it achieves 99.1 recall of detecting the core of each PHI span.Our model outperforms all deidentifiers it was compared to on all test sets as well as human labelers on i2b2 2014 data. It enables accurate and automatic deidentification of radiology reports.A transformer-based deidentification pipeline can achieve state-of-the-art performance for deidentifying radiology reports and other medical documents.}",
issn = {1527-974X},
doi = {10.1093/jamia/ocac219},
url = {https://doi.org/10.1093/jamia/ocac219},
note = {ocac219},
eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocac219/47220191/ocac219.pdf},
}
``` |
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/roberta-base-attribute-correction-mlm-titles-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/roberta-base-attribute-correction-mlm-titles-2
This model is a fine-tuned version of [ksabeh/roberta-base-attribute-correction-mlm](https://huggingface.co/ksabeh/roberta-base-attribute-correction-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0822
- Validation Loss: 0.0914
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 23870, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2007 | 0.1023 | 0 |
| 0.0822 | 0.0914 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: gpl-2.0
language: ar
---
A model which is jointly trained and fine-tuned on the Quran, Saheefa and nahj-al-balaqa. All datasets are available [here](https://github.com/language-ml/course-nlp-ir-1-text-exploring/tree/main/exploring-datasets/religious_text). Code will be available soon ...
Some examples for filling the mask:
- ```
ذَٰلِكَ [MASK] لَا رَيْبَ فِيهِ هُدًى لِلْمُتَّقِينَ
```
- ```
يَا أَيُّهَا النَّاسُ اعْبُدُوا رَبَّكُمُ الَّذِي خَلَقَكُمْ وَالَّذِينَ مِنْ قَبْلِكُمْ لَعَلَّكُمْ [MASK]
```
This model is fine-tuned on [Bert Base Arabic](https://huggingface.co/asafaya/bert-base-arabic) for 30 epochs. We have used `Masked Language Modeling` to fine-tune the model. Also, after every 5 epochs we completely re-masked the words so that the model learns the embeddings well and does not overfit the data.
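As a minimal usage sketch (the model id below is a placeholder for this repository's id on the Hub), the examples above can be completed with the `transformers` fill-mask pipeline:

```python
from transformers import pipeline

# Placeholder repo id -- replace with this model's actual id on the Hub.
fill_mask = pipeline("fill-mask", model="<this-model-repo-id>")

# Top predictions for the first example above.
for prediction in fill_mask("ذَٰلِكَ [MASK] لَا رَيْبَ فِيهِ هُدًى لِلْمُتَّقِينَ"):
    print(prediction["token_str"], round(prediction["score"], 3))
```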
|
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
---
Classifier of news affecting the stock price in the next 10 minutes |
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 270.09 +/- 19.04
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
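A minimal sketch of what that usage code might look like; the repo id and filename are placeholders, since the card does not state them:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename -- replace with this model's actual values.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```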
|
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: malaya-speech_Mrbrown_finetune1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# malaya-speech_Mrbrown_finetune1
This model is a fine-tuned version of [malay-huggingface/wav2vec2-xls-r-300m-mixed](https://huggingface.co/malay-huggingface/wav2vec2-xls-r-300m-mixed) on the uob_singlish dataset.
## Note: this run used a self-made dataset (the audio of "https://www.youtube.com/watch?v=a2ZOTD3R7JI" was cut into slices and the corresponding transcripts were written by hand, about 4 minutes in total) and the fine-tuning result is really bad. That may mean the fine-tuning dataset must be high quality and at least several hours long, or it may be because the learning rate was set too high (0.01). Still searching for the important factors.
It achieves the following results on the evaluation set:
- Loss: 3.8458
- Wer: 1.01
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:----:|
| 0.3186 | 20.0 | 200 | 4.2225 | 1.13 |
| 0.4911 | 40.0 | 400 | 4.0427 | 0.99 |
| 0.9014 | 60.0 | 600 | 5.3285 | 1.04 |
| 1.0955 | 80.0 | 800 | 3.6922 | 1.02 |
| 0.7533 | 100.0 | 1000 | 3.8458 | 1.01 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- hf_diffuse
---
# Dummy diffusion model following the architecture of https://github.com/lucidrains/denoising-diffusion-pytorch
Run the model as follows:
```python
from diffusers import UNetModel, GaussianDiffusion
import torch
# 1. Load model
unet = UNetModel.from_pretrained("fusing/ddpm_dummy")
# 2. Do one denoising step with model
batch_size, num_channels, height, width = 1, 3, 32, 32
dummy_noise = torch.ones((batch_size, num_channels, height, width))
time_step = torch.tensor([10])
image = unet(dummy_noise, time_step)
# 3. Load sampler
sampler = GaussianDiffusion.from_config("fusing/ddpm_dummy")
# 4. Sample image from sampler passing the model
image = sampler.sample(unet, batch_size=1)
print(image)
``` |
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/bert-base-uncased-mlm-electronics-attribute-correction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/bert-base-uncased-mlm-electronics-attribute-correction
This model is a fine-tuned version of [ksabeh/bert-base-uncased-mlm-electronics](https://huggingface.co/ksabeh/bert-base-uncased-mlm-electronics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0524
- Validation Loss: 0.0520
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36848, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1459 | 0.0678 | 0 |
| 0.0524 | 0.0520 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-finetuned-xsum-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-finetuned-xsum-v5
This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 263 | 0.4728 | 38.7072 | 38.5333 | 38.6391 | 38.6212 | 7.0513 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_only_classfn_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 602.00 +/- 193.99
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga i8pxgd2s -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga i8pxgd2s
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
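Outside the RL Zoo scripts, the downloaded checkpoint can also be loaded directly with Stable-Baselines3. A rough sketch follows; the repo id and filename are assumed from the RL Zoo naming convention and the `-orga` flag above, and `custom_objects` is only a workaround that is sometimes needed when loading checkpoints across SB3 versions:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed repo id and filename (RL Zoo naming convention).
checkpoint = load_from_hub(
    repo_id="i8pxgd2s/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint, custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0})

# Recreate the evaluation environment with the same Atari wrappers and 4-frame stacking.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```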
|
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-06-09T09:56:35Z |
# Visual Semantic with BERT-CNN
This model can be used to assign an object-to-caption semantic relatedness score, which is valuable for (1) diverse caption re-ranking (this work) and (2), as an application, generating soft labels for filtering related/non-related image-post pairs when scraping images from the internet (e.g., Instagram).
To take advantage of the overlap between the visual context and the caption, and to extract global information from each visual (e.g., object, scene, etc.), we use BERT as an embedding layer followed by a shallow CNN with a tri-gram kernel (Kim, 2014).
Please refer to [GitHub](https://github.com/ahmedssabir/Visual-Semantic-Relatedness-Dataset-for-Image-Captioning) for more information.
[arXiv paper](https://arxiv.org/abs/2301.08784) [Project page](https://ahmed.jp/project_page/Dataset_2022/index.html)
For datasets with fewer than 100K examples, please have a look at our [shallow model](https://github.com/ahmedssabir/Semantic-Relatedness-Based-Reranker-for-Text-Spotting).
The model is trained with a strict 0.4 similarity-distance threshold between the object and its related caption.
For a quick start please have a look at this [demo](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset/blob/main/BERT_CNN_Visual_re_ranker_demo.ipynb)
For the dataset, see [Textual-Image-Caption-Dataset](https://huggingface.co/datasets/AhmedSSabir/Textual-Image-Caption-Dataset).
## Result with the SoTA pre-trained image captioning model BLIP
Comparison with BLIP (125M pre-trained images); see [Table 7, COCO Caption Karpathy test set](https://arxiv.org/pdf/2201.12086.pdf).
For the ViLBERT model (3.5M pre-trained images), please refer to the paper.
## Accuracy
| Model | B-1 | B-2 | B-3 | B-4 | M | R | C | S |BERTscore |
|----------------------------------|---------|-------|--------|-------|--------|--------|-------|--------|---------|
| BLIP Beam Search b=3 | .797 | .649 | **.514** | **.403** | **.311** | **.606** |**1.365** |**.243** | **.9484** |
| + BERT-CNN $th=0$ | .798 | .646 | .506 | .392 | .305 | .598 | 1.339 | .238 | .9473 |
| + BERT-CNN $th\geq0.2$ | .798 | .647 | .507 | .393 | .306 | .600 | 1.342 | .238 | .9473 |
| + BERT-CNN $th\geq0.3$ | .802 | .651 | .511 | .397 | .307 | .601 | 1.349 | .238 | .9479 |
| + BERT-CNN $th\geq0.4$ | **.806** | **.654** | .513 | .397 | .303 | .599 | 1.343 | .235 | .9476 |
## Diversity
| Model | Uniq | Voc | mBLeu-1 ↓ | Div-1 |Div-2 | SBERT-sts|
|----------------------------------|---------|-------|----------|-------|-------|----------|
| BLIP Beam Search b=3 | **8.60** | 1406 | .461 | .68 | .80 | .8058 |
| + BERT-CNN $th=0$ | 8.49 | **1532** | .457 | .68 | .80 | .8046 |
| + BERT-CNN $th\geq0.2$ | 8.48 | 1486 | .458 | .68 | .80 | .8052 |
| + BERT-CNN $th\geq0.3$ | 8.41 | 1448 | .458 | .68 | .80 | **.8060** |
| + BERT-CNN $th\geq0.4$ | 8.30 | 1448 | **.455** | .68 | .80 | .8053 |
|human | 9.14 | 3425 | .375 | .74 | .84 | NA |
```
conda create -n BERT_visual python=3.6 anaconda
conda activate BERT_visual
pip install tensorflow==1.15.0
pip install --upgrade tensorflow_hub==0.7.0
```
```
git clone https://github.com/gaphex/bert_experimental/
```
```python
import tensorflow as tf
import numpy as np
import pandas as pd
import sys
from sklearn.model_selection import train_test_split
sys.path.insert(0, "bert_experimental")
from bert_experimental.finetuning.text_preprocessing import build_preprocessor
from bert_experimental.finetuning.graph_ops import load_graph
df = pd.read_csv("test.tsv", sep='\t')
texts = []
delimiter = " ||| "
for vis, cap in zip(df.visual.tolist(), df.caption.tolist()):
texts.append(delimiter.join((str(vis), str(cap))))
texts = np.array(texts)
trX, tsX = train_test_split(texts, shuffle=False, test_size=0.01)
restored_graph = load_graph("frozen_graph.pb")
graph_ops = restored_graph.get_operations()
input_op, output_op = graph_ops[0].name, graph_ops[-1].name
print(input_op, output_op)
x = restored_graph.get_tensor_by_name(input_op + ':0')
y = restored_graph.get_tensor_by_name(output_op + ':0')
preprocessor = build_preprocessor("vocab.txt", 64)
py_func = tf.numpy_function(preprocessor, [x], [tf.int32, tf.int32, tf.int32], name='preprocessor')
##predictions
sess = tf.Session(graph=restored_graph)
print(trX[:4])
y = tf.print(y, summarize=-1)
y_out = sess.run(y, feed_dict={
x: trX[:4].reshape((-1,1))
})
print(y_out)
```
For training and inference:
```
python BERT_CNN.py --train train_0.4.tsv --epochs 5
```
```python
# -*- coding: utf-8 -*-
#!/bin/env python
import sys
import argparse
import re
import os
import sys
import json
import logging
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from BertLayer import BertLayer
from BertLayer import build_preprocessor
from freeze_keras_model import freeze_keras_model
from data_pre import *
from tensorflow import keras
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint
from sklearn.model_selection import train_test_split
if not 'bert_repo' in sys.path:
sys.path.insert(0, 'bert_repo')
from modeling import BertModel, BertConfig
from tokenization import FullTokenizer, convert_to_unicode
from extract_features import InputExample, convert_examples_to_features
# get TF logger
log = logging.getLogger('tensorflow')
log.handlers = []
parser=argparse.ArgumentParser()
parser.add_argument('--train', default='train.tsv', help='training data (tsv file)', type=str, required=False)
parser.add_argument('--num_bert_layer', default='12', help='number of tuned BERT layers', type=int, required=False)
parser.add_argument('--batch_size', default='128', help='batch size', type=int, required=False)
parser.add_argument('--epochs', default='5', help='', type=int,required=False)
parser.add_argument('--seq_len', default='64', help='', type=int,required=False)
parser.add_argument('--CNN_kernel_size', default='3', help='', type=int,required=False)
parser.add_argument('--CNN_filters', default='32', help='', type=int,required=False)
args = parser.parse_args()
# Download the pre-trained model
#!wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
#!unzip uncased_L-12_H-768_A-12.zip
# tf.Module
def build_module_fn(config_path, vocab_path, do_lower_case=True):
def bert_module_fn(is_training):
"""Spec function for a token embedding module."""
input_ids = tf.placeholder(shape=[None, None], dtype=tf.int32, name="input_ids")
input_mask = tf.placeholder(shape=[None, None], dtype=tf.int32, name="input_mask")
token_type = tf.placeholder(shape=[None, None], dtype=tf.int32, name="segment_ids")
config = BertConfig.from_json_file(config_path)
model = BertModel(config=config, is_training=is_training,
input_ids=input_ids, input_mask=input_mask, token_type_ids=token_type)
seq_output = model.all_encoder_layers[-1]
pool_output = model.get_pooled_output()
config_file = tf.constant(value=config_path, dtype=tf.string, name="config_file")
vocab_file = tf.constant(value=vocab_path, dtype=tf.string, name="vocab_file")
lower_case = tf.constant(do_lower_case)
tf.add_to_collection(tf.GraphKeys.ASSET_FILEPATHS, config_file)
tf.add_to_collection(tf.GraphKeys.ASSET_FILEPATHS, vocab_file)
input_map = {"input_ids": input_ids,
"input_mask": input_mask,
"segment_ids": token_type}
output_map = {"pooled_output": pool_output,
"sequence_output": seq_output}
output_info_map = {"vocab_file": vocab_file,
"do_lower_case": lower_case}
hub.add_signature(name="tokens", inputs=input_map, outputs=output_map)
hub.add_signature(name="tokenization_info", inputs={}, outputs=output_info_map)
return bert_module_fn
MODEL_DIR = "uncased_L-12_H-768_A-12"
config_path = "/{}/bert_config.json".format(MODEL_DIR)
vocab_path = "/{}/vocab.txt".format(MODEL_DIR)
tags_and_args = []
for is_training in (True, False):
tags = set()
if is_training:
tags.add("train")
tags_and_args.append((tags, dict(is_training=is_training)))
module_fn = build_module_fn(config_path, vocab_path)
spec = hub.create_module_spec(module_fn, tags_and_args=tags_and_args)
spec.export("bert-module",
checkpoint_path="/{}/bert_model.ckpt".format(MODEL_DIR))
class BertLayer(tf.keras.layers.Layer):
def __init__(self, bert_path, seq_len=64, n_tune_layers=3,
pooling="cls", do_preprocessing=True, verbose=False,
tune_embeddings=False, trainable=True, **kwargs):
self.trainable = trainable
self.n_tune_layers = n_tune_layers
self.tune_embeddings = tune_embeddings
self.do_preprocessing = do_preprocessing
self.verbose = verbose
self.seq_len = seq_len
self.pooling = pooling
self.bert_path = bert_path
self.var_per_encoder = 16
if self.pooling not in ["cls", "mean", None]:
raise NameError(
f"Undefined pooling type (must be either 'cls', 'mean', or None, but is {self.pooling}"
)
super(BertLayer, self).__init__(**kwargs)
def build(self, input_shape):
self.bert = hub.Module(self.build_abspath(self.bert_path),
trainable=self.trainable, name=f"{self.name}_module")
trainable_layers = []
if self.tune_embeddings:
trainable_layers.append("embeddings")
if self.pooling == "cls":
trainable_layers.append("pooler")
if self.n_tune_layers > 0:
encoder_var_names = [var.name for var in self.bert.variables if 'encoder' in var.name]
n_encoder_layers = int(len(encoder_var_names) / self.var_per_encoder)
for i in range(self.n_tune_layers):
trainable_layers.append(f"encoder/layer_{str(n_encoder_layers - 1 - i)}/")
# Add module variables to layer's trainable weights
for var in self.bert.variables:
if any([l in var.name for l in trainable_layers]):
self._trainable_weights.append(var)
else:
self._non_trainable_weights.append(var)
if self.verbose:
print("*** TRAINABLE VARS *** ")
for var in self._trainable_weights:
print(var)
self.build_preprocessor()
self.initialize_module()
super(BertLayer, self).build(input_shape)
def build_abspath(self, path):
if path.startswith("https://") or path.startswith("gs://"):
return path
else:
return os.path.abspath(path)
def build_preprocessor(self):
sess = tf.keras.backend.get_session()
tokenization_info = self.bert(signature="tokenization_info", as_dict=True)
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
self.preprocessor = build_preprocessor(vocab_file, self.seq_len, do_lower_case)
def initialize_module(self):
sess = tf.keras.backend.get_session()
vars_initialized = sess.run([tf.is_variable_initialized(var)
for var in self.bert.variables])
uninitialized = []
for var, is_initialized in zip(self.bert.variables, vars_initialized):
if not is_initialized:
uninitialized.append(var)
if len(uninitialized):
sess.run(tf.variables_initializer(uninitialized))
def call(self, input):
if self.do_preprocessing:
input = tf.numpy_function(self.preprocessor,
[input], [tf.int32, tf.int32, tf.int32],
name='preprocessor')
for feature in input:
feature.set_shape((None, self.seq_len))
input_ids, input_mask, segment_ids = input
bert_inputs = dict(
input_ids=input_ids, input_mask=input_mask, segment_ids=segment_ids
)
output = self.bert(inputs=bert_inputs, signature="tokens", as_dict=True)
if self.pooling == "cls":
pooled = output["pooled_output"]
else:
result = output["sequence_output"]
input_mask = tf.cast(input_mask, tf.float32)
mul_mask = lambda x, m: x * tf.expand_dims(m, axis=-1)
masked_reduce_mean = lambda x, m: tf.reduce_sum(mul_mask(x, m), axis=1) / (
tf.reduce_sum(m, axis=1, keepdims=True) + 1e-10)
if self.pooling == "mean":
pooled = masked_reduce_mean(result, input_mask)
else:
pooled = mul_mask(result, input_mask)
return pooled
def get_config(self):
config_dict = {
"bert_path": self.bert_path,
"seq_len": self.seq_len,
"pooling": self.pooling,
"n_tune_layers": self.n_tune_layers,
"tune_embeddings": self.tune_embeddings,
"do_preprocessing": self.do_preprocessing,
"verbose": self.verbose
}
super(BertLayer, self).get_config()
return config_dict
# read the train data
df = pd.read_csv(args.train, sep='\t')
labels = df.is_related.values
texts = []
delimiter = " ||| "
for vis, cap in zip(df.visual.tolist(), df.caption.tolist()):
texts.append(delimiter.join((str(vis), str(cap))))
texts = np.array(texts)
trX, tsX, trY, tsY = train_test_split(texts, labels, shuffle=True, test_size=0.2)
# Building the model
embedding_size = 768
# input
inp = tf.keras.Input(shape=(1,), dtype=tf.string)
# BERT encoder
# For CLS with linear layer
#encoder = BertLayer(bert_path="./bert-module/", seq_len=48, tune_embeddings=False,
# pooling='cls', n_tune_layers=3, verbose=False)
# CNN Layers
encoder = BertLayer(bert_path="./bert-module/", seq_len=args.seq_len, tune_embeddings=False, pooling=None, n_tune_layers=args.num_bert_layer, verbose=False)
cnn_out = tf.keras.layers.Conv1D(args.CNN_filters, args.CNN_kernel_size, padding='VALID', activation=tf.nn.relu)(encoder(inp))
pool = tf.keras.layers.MaxPooling1D(pool_size=2)(cnn_out)
flat = tf.keras.layers.Flatten()(pool)
pred = tf.keras.layers.Dense(1, activation="sigmoid")(flat)
model = tf.keras.models.Model(inputs=[inp], outputs=[pred])
model.summary()
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5, ),
loss="binary_crossentropy",
metrics=["accuracy"])
# fit the data
import logging
logging.getLogger("tensorflow").setLevel(logging.WARNING)
saver = keras.callbacks.ModelCheckpoint("bert_CNN_tuned.hdf5")
model.fit(trX, trY, validation_data=[tsX, tsY], batch_size=args.batch_size, epochs=args.epochs, callbacks=[saver])
#save the model
model.predict(trX[:10])
import json
json.dump(model.to_json(), open("model.json", "w"))
model = tf.keras.models.model_from_json(json.load(open("model.json")),
custom_objects={"BertLayer": BertLayer})
model.load_weights("bert_CNN_tuned.hdf5")
model.predict(trX[:10])
# For fast inference and lower RAM usage in post-processing we need to freeze the model.
from tensorflow.python.framework.graph_util import convert_variables_to_constants
from tensorflow.python.tools.optimize_for_inference_lib import optimize_for_inference
def freeze_keras_model(model, export_path=None, clear_devices=True):
sess = tf.keras.backend.get_session()
graph = sess.graph
with graph.as_default():
input_tensors = model.inputs
output_tensors = model.outputs
dtypes = [t.dtype.as_datatype_enum for t in input_tensors]
input_ops = [t.name.rsplit(":", maxsplit=1)[0] for t in input_tensors]
output_ops = [t.name.rsplit(":", maxsplit=1)[0] for t in output_tensors]
tmp_g = graph.as_graph_def()
if clear_devices:
for node in tmp_g.node:
node.device = ""
tmp_g = optimize_for_inference(
tmp_g, input_ops, output_ops, dtypes, False)
tmp_g = convert_variables_to_constants(sess, tmp_g, output_ops)
if export_path is not None:
with tf.gfile.GFile(export_path, "wb") as f:
f.write(tmp_g.SerializeToString())
return tmp_g
# freeze and save the model
frozen_graph = freeze_keras_model(model, export_path="frozen_graph.pb")
# inference
#!git clone https://github.com/gaphex/bert_experimental/
import tensorflow as tf
import numpy as np
import sys
sys.path.insert(0, "bert_experimental")
from bert_experimental.finetuning.text_preprocessing import build_preprocessor
from bert_experimental.finetuning.graph_ops import load_graph
restored_graph = load_graph("frozen_graph.pb")
graph_ops = restored_graph.get_operations()
input_op, output_op = graph_ops[0].name, graph_ops[-1].name
print(input_op, output_op)
x = restored_graph.get_tensor_by_name(input_op + ':0')
y = restored_graph.get_tensor_by_name(output_op + ':0')
preprocessor = build_preprocessor("vocab.txt", 64)
py_func = tf.numpy_function(preprocessor, [x], [tf.int32, tf.int32, tf.int32], name='preprocessor')
# predictions
sess = tf.Session(graph=restored_graph)
trX[:10]
y_out = sess.run(y, feed_dict={
x: trX[:10].reshape((-1,1))
})
print(y_out)
``` |
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Kiwipirate/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
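The snippet above relies on `load_from_hub` and `evaluate_agent` helpers from the Hugging Face Deep RL course environment. Below is a minimal sketch of what loading the pickled Q-table and running a greedy episode could look like; the helper implementation and the classic 4-tuple `gym` step API are assumptions:

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Assumed helper: download and unpickle the Q-learning model dict from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Kiwipirate/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout with the learned Q-table.
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```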
|
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="i8pxgd2s/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 23 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/osanseviero/1654769951427/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1106315906165157889/0Hxb1ESL_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Omar Sanseviero</div>
<div style="text-align: center; font-size: 14px;">@osanseviero</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Omar Sanseviero.
| Data | Omar Sanseviero |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 1158 |
| Short tweets | 224 |
| Tweets kept | 1862 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/29bkab0t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @osanseviero's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1s35jikq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1s35jikq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/osanseviero')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[Follow @borisdayma on Twitter](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[GitHub repository: huggingtweets](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: wav2vec2-xls-r-300m_Mrbrown_finetune1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m_Mrbrown_finetune1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the uob_singlish dataset.
## Note: this run used a self-made dataset (the audio of "https://www.youtube.com/watch?v=a2ZOTD3R7JI" was cut into slices and the corresponding transcripts were written by hand, about 4 minutes in total), and I don't know why the word error rate stays at 1. The dataset is most likely the problem, because a previous run with the same pre-trained model and a standard Singlish corpus gave a good result (see RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab).
It achieves the following results on the evaluation set:
- Loss: 3.0927
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.7943 | 20.0 | 200 | 3.0597 | 1.0 |
| 2.9902 | 40.0 | 400 | 3.1604 | 1.0 |
| 2.9696 | 60.0 | 600 | 3.1112 | 1.0 |
| 2.8885 | 80.0 | 800 | 3.0234 | 1.0 |
| 2.8154 | 100.0 | 1000 | 3.0927 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
library_name: keras
tags:
- SpeakerRecognition
- Fast Fourier Transform (FFT)
- Convnet
- speech-recordings
- SpeechClassification
---
## Model description
This model helps to classify speakers from the frequency domain representation of speech recordings, obtained via Fast Fourier Transform (FFT).
The model is a 1D convolutional network with residual connections, built for audio classification.
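As a rough sketch of that architecture (not the exact notebook code — the filter sizes and block count here are assumptions), a residual 1D-convolution block can be written in Keras as:
```python
from tensorflow import keras
def residual_block(x, filters, conv_num=3, activation="relu"):
    # Shortcut path: 1x1 convolution so the channel count matches for the addition
    shortcut = keras.layers.Conv1D(filters, 1, padding="same")(x)
    # Main path: a small stack of 1D convolutions
    for _ in range(conv_num - 1):
        x = keras.layers.Conv1D(filters, 3, padding="same")(x)
        x = keras.layers.Activation(activation)(x)
    x = keras.layers.Conv1D(filters, 3, padding="same")(x)
    # Residual connection, then activation and downsampling
    x = keras.layers.Add()([x, shortcut])
    x = keras.layers.Activation(activation)(x)
    return keras.layers.MaxPool1D(pool_size=2, strides=2)(x)
```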
This repo contains the model for the notebook [**Speaker Recognition**](https://keras.io/examples/audio/speaker_recognition_using_cnn/).
Full credits go to [**Fadi Badine**](https://twitter.com/fadibadine)
## Dataset Used
This model uses a [**speaker recognition dataset**](https://www.kaggle.com/kongaevans/speaker-recognition-dataset) of Kaggle
## Intended uses & limitations
This should be run with `TensorFlow 2.3` or higher, or `tf-nightly`.
Also, the noise samples in the dataset need to be resampled to a sampling rate of 16000 Hz before being used with this model; to do this, you will need `ffmpeg` installed.
## Training and evaluation data
During dataset preparation, the speech and background-noise samples were sorted into two folders, audio and noise. The noise samples were resampled to 16000 Hz and then mixed into the speech samples to augment the data. The FFT of these augmented samples was then fed to the model for training and evaluation.
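A minimal sketch of that last step is shown below; it assumes 16000 Hz mono audio batches and keeps only the magnitude of the positive-frequency half of the FFT (function and variable names are illustrative):
```python
import tensorflow as tf
def audio_to_fft(audio):
    # audio: float32 tensor of shape (batch, samples, 1), sampled at 16000 Hz
    audio = tf.squeeze(audio, axis=-1)
    fft = tf.signal.fft(tf.cast(tf.complex(real=audio, imag=tf.zeros_like(audio)), tf.complex64))
    fft = tf.expand_dims(fft, axis=-1)
    # Keep the positive-frequency half and feed its magnitude to the classifier
    return tf.math.abs(fft[:, : (audio.shape[1] // 2), :])
```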
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| name | learning_rate | decay | beta_1 | beta_2 | epsilon | amsgrad | training_precision |
|----|-------------|-----|------|------|-------|-------|------------------|
|Adam|0.0010000000474974513|0.0|0.8999999761581421|0.9990000128746033|1e-07|False|float32|
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
<center>
Model By : <a href="https://github.com/robotjellyzone">Kavya Bisht</a>
</center> |
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_mLM_V2_shuffleplus3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_mLM_V2_shuffleplus3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
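The serialized optimizer settings above correspond roughly to what `transformers.create_optimizer` builds (linear warmup followed by linear decay, with decoupled weight decay); a hedged sketch using the step counts from the config is:
```python
from transformers import create_optimizer
# 1,000 warmup steps, decay to zero over 208,018 steps, weight-decay rate 0.01 (values taken from the config above)
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=208_018,
    num_warmup_steps=1_000,
    weight_decay_rate=0.01,
)
```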
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language: zh
tags:
- summarization
inference: False
---
# Randeng-Pegasus-523M-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/pegasus/pretrain_pegasus.sh)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/zh/latest/docs/%E7%87%83%E7%81%AF%E7%B3%BB%E5%88%97/Randeng-Pegasus-523M-Chinese.html)
## 简介 Brief Introduction
善于处理摘要任务的,中文版的PEGASUS-large。
Good at solving text summarization tasks, Chinese PEGASUS-large.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | PEGASUS | 523M | 中文 Chinese |
## 模型信息 Model Information
参考论文:[PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf)
为了解决中文的自动摘要任务,我们遵循PEGASUS的设计来训练中文的版本。我们使用了悟道语料库(180G版本)作为预训练数据集。此外,考虑到中文sentence piece不稳定,我们在Randeng-PEGASUS中同时使用了结巴分词和BERT分词器。我们也提供base的版本:[IDEA-CCNL/Randeng-Pegasus-238M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-238M-Chinese)。以及,我们也提供了在中文摘要数据集上微调的版本:[Randeng-Pegasus-523M-Summary-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese)。
To solve Chinese abstractive summarization tasks, we follow the PEGASUS guidelines. We employ a version of WuDao Corpora (180 GB version) as a pre-training dataset. In addition, considering that the Chinese sentence chunk is unstable, we utilize jieba and BERT tokenizer in our Randeng-PEGASUS. We also provide a base size version, available with [IDEA-CCNL/Randeng-Pegasus-238M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-238M-Chinese). And, we also provide a version after fine-tuning on Chinese text summarization datasets: [Randeng-Pegasus-523M-Summary-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Summary-Chinese).
## 使用 Usage
```python
from transformers import PegasusForConditionalGeneration
# Need to download tokenizers_pegasus.py and other Python script from Fengshenbang-LM github repo in advance,
# or you can download tokenizers_pegasus.py and data_utils.py in https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_523M/tree/main
# Strongly recommend you git clone the Fengshenbang-LM repo:
# 1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM
# 2. cd Fengshenbang-LM/fengshen/examples/pegasus/
# and then you will see the tokenizers_pegasus.py and data_utils.py which are needed by pegasus model
from tokenizers_pegasus import PegasusTokenizer
model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Chinese")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-523M-Chinese")
text = "据微信公众号“界面”报道,4日上午10点左右,中国发改委反垄断调查小组突击查访奔驰上海办事处,调取数据材料,并对多名奔驰高管进行了约谈。截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内"
inputs = tokenizer(text, max_length=1024, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# model Output: 截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内
```
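The call above uses the default greedy decoding; reusing `model`, `tokenizer` and `inputs` from the snippet, beam search with a length cap often yields more fluent summaries (the values below are illustrative, not the authors' settings):
```python
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,        # beam search instead of greedy decoding
    max_length=128,     # cap the summary length
    early_stopping=True,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```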
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language: zh
tags:
- summarization
- chinese
inference: False
---
# Randeng-Pegasus-238M-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/pegasus/pretrain_pegasus.sh)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/zh/latest/docs/%E7%87%83%E7%81%AF%E7%B3%BB%E5%88%97/Randeng-Pegasus-238M-Chinese.html)
## 简介 Brief Introduction
善于处理摘要任务的,中文版的PEGASUS-base。
Good at solving text summarization tasks, Chinese PEGASUS-base.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | PEGASUS | 238M | 中文-Chinese |
## 模型信息 Model Information
参考论文:[PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf)
为了解决中文的自动摘要任务,我们遵循PEGASUS的设计来训练中文的版本。我们使用了悟道语料库(180G版本)作为预训练数据集。此外,考虑到中文sentence piece不稳定,我们在Randeng-PEGASUS中同时使用了结巴分词和BERT分词器。我们也提供large的版本:[IDEA-CCNL/Randeng-Pegasus-523M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Chinese)。以及,我们也提供了在中文摘要数据集上微调的版本:[Randeng-Pegasus-238M-Summary-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese)。
To solve Chinese abstractive summarization tasks, we follow the PEGASUS guidelines. We employ a version of WuDao Corpora (180 GB version) as a pre-training dataset. In addition, considering that the Chinese sentence chunk is unstable, we utilize jieba and BERT tokenizer in our Randeng-PEGASUS. We also provide a large size version, available with [IDEA-CCNL/Randeng-Pegasus-523M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Chinese). And, we also provide a version after fine-tuning on Chinese text summarization datasets: [Randeng-Pegasus-238M-Summary-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese).
## 使用 Usage
```python
from transformers import PegasusForConditionalGeneration
# Need to download tokenizers_pegasus.py and other Python script from Fengshenbang-LM github repo in advance,
# or you can download tokenizers_pegasus.py and data_utils.py in https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_238M/tree/main
# Strongly recommend you git clone the Fengshenbang-LM repo:
# 1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM
# 2. cd Fengshenbang-LM/fengshen/examples/pegasus/
# and then you will see the tokenizers_pegasus.py and data_utils.py which are needed by pegasus model
from tokenizers_pegasus import PegasusTokenizer
model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-238M-Chinese")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-238M-Chinese")
text = "据微信公众号“界面”报道,4日上午10点左右,中国发改委反垄断调查小组突击查访奔驰上海办事处,调取数据材料,并对多名奔驰高管进行了约谈。截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内"
inputs = tokenizer(text, max_length=512, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# model output: 截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 23 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="RalphX1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
AnonymousSub/specter-bert-model | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- qualitydatalab/autotrain-data-car-review-project
co2_eq_emissions: 0.061185706621337065
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 966432120
- CO2 Emissions (in grams): 0.061185706621337065
## Validation Metrics
- Loss: 0.6066656112670898
- Accuracy: 0.724822695035461
- Macro F1: 0.7077087000886584
- Micro F1: 0.7248226950354609
- Weighted F1: 0.7077087000886584
- Macro Precision: 0.7143184427227084
- Micro Precision: 0.724822695035461
- Weighted Precision: 0.7143184427227083
- Macro Recall: 0.7248226950354609
- Micro Recall: 0.724822695035461
- Weighted Recall: 0.724822695035461
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/qualitydatalab/autotrain-car-review-project-966432120
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("qualitydatalab/autotrain-car-review-project-966432120", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("qualitydatalab/autotrain-car-review-project-966432120", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
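# Added illustration (not part of the original AutoTrain snippet): turn the raw
# logits into class probabilities and read off the predicted label.
import torch
probabilities = torch.softmax(outputs.logits, dim=-1)
predicted_class_id = int(probabilities.argmax(dim=-1))
print(model.config.id2label[predicted_class_id], float(probabilities.max()))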
``` |
Anorak/nirvana | [
"pytorch",
"pegasus",
"text2text-generation",
"unk",
"dataset:Anorak/autonlp-data-Niravana-test2",
"transformers",
"autonlp",
"co2_eq_emissions",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-06-09T13:23:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 172.04 +/- 90.74
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
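A minimal sketch of what that usage could look like is given below; the repository id and filename are placeholders, since the card does not state them.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
# Placeholder repo id / filename -- replace with this model's actual repository.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```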
|
AnthonyNelson/DialoGPT-small-ricksanchez | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2022-06-09T13:26:40Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="i8pxgd2s/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
ArJakusz/DialoGPT-small-stark | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-06-09T15:02:08Z | ---
tags:
- DNA
license: mit
---
## MiniDNA model
This is a distilled version of [DNABERT](https://github.com/jerryji1993/DNABERT), obtained using the MiniLM technique. It has a BERT architecture with 6 layers and 768 hidden units, pre-trained on 6-mer DNA sequences. For more details on the pre-training scheme and methods, please check the original [thesis report](http://www.diva-portal.org/smash/record.jsf?dswid=846&pid=diva2%3A1676068&c=1&searchType=SIMPLE&language=en&query=joana+palés&af=%5B%5D&aq=%5B%5B%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all).
## How to Use
The model can be used to fine-tune on a downstream genomic task, e.g. promoter identification.
```python
import torch
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('Peltarion/dnabert-minilm')
```
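DNABERT-style models expect the input sequence to be split into overlapping 6-mers before tokenization; the helper below illustrates that convention (it follows the upstream DNABERT preprocessing and is not code shipped with this repository):
```python
def seq_to_kmers(sequence: str, k: int = 6) -> str:
    """Turn a raw DNA string into space-separated overlapping k-mers."""
    return " ".join(sequence[i : i + k] for i in range(len(sequence) - k + 1))
print(seq_to_kmers("ATGCGTACGTTAG"))
# -> ATGCGT TGCGTA GCGTAC CGTACG GTACGT TACGTT ACGTTA CGTTAG
```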
More details on how to fine-tune the model, dataset and additional source codes are available on [github.com/joanaapa/Distillation-DNABERT-Promoter](https://github.com/joanaapa/Distillation-DNABERT-Promoter). |
ArJakusz/DialoGPT-small-starky | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: keras
tags:
- computer-vision
- generative
- variational-autoencoder
- vq-vae
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training Metrics
Model history needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
Archie/myProject | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-06-09T16:11:52Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/994592419705274369/RLplF55e_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MrBeast</div>
<div style="text-align: center; font-size: 14px;">@mrbeast</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MrBeast.
| Data | MrBeast |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 86 |
| Short tweets | 729 |
| Tweets kept | 2433 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/5cv62k60/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mrbeast's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bfqzlltq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bfqzlltq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mrbeast')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Arnold/wav2vec2-hausa-demo-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-06-09T17:06:01Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1612
- F1: 0.8618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
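For reference, these values map onto `TrainingArguments` roughly as sketched below (the output directory and any option not listed above are assumptions):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de-fr",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```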
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2874 | 1.0 | 715 | 0.1764 | 0.8343 |
| 0.1475 | 2.0 | 1430 | 0.1561 | 0.8508 |
| 0.0936 | 3.0 | 2145 | 0.1612 | 0.8618 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AshtonBenson/DialoGPT-small-quentin-coldwater | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-06-09T18:33:18Z | ---
language: en
thumbnail: http://www.huggingtweets.com/midudev/1654800505422/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1526668354609680384/r85fytOs_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🔴 EN DIRECTO twitch.tv/midudev</div>
<div style="text-align: center; font-size: 14px;">@midudev</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🔴 EN DIRECTO twitch.tv/midudev.
| Data | 🔴 EN DIRECTO twitch.tv/midudev |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 824 |
| Short tweets | 163 |
| Tweets kept | 2259 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11iwoc6b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @midudev's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/s48ktc1m) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/s48ktc1m/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/midudev')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Augustvember/WokkaBot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-06-09T20:06:56Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: CAP_coded_US_Congressional_bills
results: []
widget:
- text: "A bill to prohibit discrimination in employment because of race, color, religion, national origin, or ancestry"
example_title: "example 1"
- text: "A bill to require the promulgation of regulations to improve aviation safety in adverse weather conditions, and for other purposes."
example_title: "example 2"
---
This model predicts the issue category of US Congressional bills.
The model is trained on ~250k US Congressional bills from 1950-2015.
The issue coding scheme follows the Comparative Agenda Project: https://www.comparativeagendas.net/pages/master-codebook
The model is cased (case sensitive).
If you have any questions about the model or the training data, feel free to message me on Twitter - @sachary_
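For inference, the general pattern with the `text-classification` pipeline is sketched below; the model identifier is a placeholder, since this card does not state the repository name.
```python
from transformers import pipeline
# Replace the placeholder with the actual repository id of this model.
classifier = pipeline("text-classification", model="<user>/CAP_coded_US_Congressional_bills")
print(classifier(
    "A bill to require the promulgation of regulations to improve aviation safety "
    "in adverse weather conditions, and for other purposes."
))
```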
Train Loss: 0.1318;
Train Sparse Categorical Accuracy: 0.9268;
Validation Loss: 0.2439;
Validation Sparse Categorical Accuracy: 0.9161
The following hyperparameters were used during training:
optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
training_precision: float32
### Training hyperparameters
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Tokenizers 0.12.1
|
Augustvember/WokkaBot99 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
model-index:
- name: ner_marathi_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_marathi_bert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3606
- Overall Precision: 0.8939
- Overall Recall: 0.9030
- Overall F1: 0.8984
- Overall Accuracy: 0.9347
- Loc F1: 0.8823
- Org F1: 0.8555
- Per F1: 0.9435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Loc F1 | Org F1 | Per F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:------:|:------:|
| 0.2961 | 3.19 | 1000 | 0.3496 | 0.8720 | 0.8841 | 0.8780 | 0.9229 | 0.8599 | 0.8210 | 0.9343 |
| 0.0613 | 6.39 | 2000 | 0.3606 | 0.8939 | 0.9030 | 0.8984 | 0.9347 | 0.8823 | 0.8555 | 0.9435 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
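As a usage illustration (the repository id is a placeholder and the sentence is an arbitrary Marathi example), the model can be queried through the token-classification pipeline:
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="<user>/ner_marathi_bert",   # placeholder repository id
    aggregation_strategy="simple",     # merge word pieces into whole entities
)
print(ner("राहुल मुंबईत राहतो"))
```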
|
AvatarXD/DialoGPT-medium-Blitzo | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pm390 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pm390
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('max_grad_norm', 6),
('n_timesteps', 100000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Axon/resnet18-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- diffusion
license: mit
---
Latent Diffusion
**Paper**: [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)
**Abstract**:
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at this https URL.
## Usage
```python
import torch
import numpy as np
import PIL.Image
from diffusers import DiffusionPipeline
ldm = DiffusionPipeline.from_pretrained("fusing/latent-diffusion-text2im-large")
generator = torch.manual_seed(42)
prompt = "A painting of a squirrel eating a burger"
image = ldm([prompt], generator=generator, eta=0.3, guidance_scale=6.0, num_inference_steps=50)
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = image_processed * 255.
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])
# save image
image_pil.save("test.png")
```
## Samples
1. "A street sign that reads Huggingface."

2. "A painting of a squirrel eating a burger"
 |
Axon/resnet50-v1 | [
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Paragraph_Tokenized_mBERT_cased_fine_tuned
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0537
- Precision: 0.8585
- Recall: 0.7101
- F1: 0.7773
- Accuracy: 0.9893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0693 | 1.0 | 514 | 0.0416 | 0.9485 | 0.6492 | 0.7708 | 0.9884 |
| 0.0367 | 2.0 | 1028 | 0.0396 | 0.9391 | 0.6710 | 0.7827 | 0.9892 |
| 0.0283 | 3.0 | 1542 | 0.0385 | 0.9388 | 0.6889 | 0.7947 | 0.9899 |
| 0.0222 | 4.0 | 2056 | 0.0422 | 0.9456 | 0.6790 | 0.7904 | 0.9898 |
| 0.0182 | 5.0 | 2570 | 0.0457 | 0.9349 | 0.6925 | 0.7956 | 0.9901 |
| 0.013 | 6.0 | 3084 | 0.0484 | 0.8947 | 0.7062 | 0.7894 | 0.9899 |
| 0.0084 | 7.0 | 3598 | 0.0537 | 0.8585 | 0.7101 | 0.7773 | 0.9893 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Ayato/DialoGTP-large-Yuri | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# Omar DialoGPT Medium Model 10
Trained on Discord channels: half of Dragalia chat.
Ayham/albert_gpt2_Full_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.931
- name: F1
type: f1
value: 0.9313235272564213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1595
- Accuracy: 0.931
- F1: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.1873 | 0.924 | 0.9234 |
| 0.1992 | 2.0 | 250 | 0.1649 | 0.929 | 0.9293 |
| 0.1992 | 3.0 | 375 | 0.1595 | 0.931 | 0.9313 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
Ayham/bert_distilgpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-06-10T00:30:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-cased-finetuned-filtered-0609
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-filtered-0609
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2410
- Accuracy: 0.9748
- Precision: 0.9751
- Recall: 0.9748
- F1: 0.9749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2028 | 1.0 | 3180 | 0.2405 | 0.9535 | 0.9561 | 0.9535 | 0.9538 |
| 0.1632 | 2.0 | 6360 | 0.1686 | 0.9660 | 0.9664 | 0.9660 | 0.9661 |
| 0.1203 | 3.0 | 9540 | 0.1625 | 0.9648 | 0.9655 | 0.9648 | 0.9648 |
| 0.1233 | 4.0 | 12720 | 0.1510 | 0.9698 | 0.9702 | 0.9698 | 0.9699 |
| 0.0823 | 5.0 | 15900 | 0.1600 | 0.9730 | 0.9732 | 0.9730 | 0.9730 |
| 0.0453 | 6.0 | 19080 | 0.1953 | 0.9723 | 0.9724 | 0.9723 | 0.9723 |
| 0.031 | 7.0 | 22260 | 0.1754 | 0.9755 | 0.9755 | 0.9755 | 0.9755 |
| 0.0166 | 8.0 | 25440 | 0.2155 | 0.9739 | 0.9740 | 0.9739 | 0.9739 |
| 0.0036 | 9.0 | 28620 | 0.2519 | 0.9730 | 0.9733 | 0.9730 | 0.9730 |
| 0.0035 | 10.0 | 31800 | 0.2410 | 0.9748 | 0.9751 | 0.9748 | 0.9749 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
|
Ayham/roberta_bert_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vit_test_1_95
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9501661062240601
---
# vit_test_1_95
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
Ayham/roberta_gpt2_new_max64_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | Task:
Given a set of input keywords, generate a corresponding text output for a section in the legal domain.
Dataset:
We used the Contract Understanding Atticus Dataset (CUAD).
It is a corpus of 13,000+ labels in 510 commercial legal contracts.
They have been manually labeled under the supervision of experienced lawyers to identify 41 types of legal clauses (e.g., licenses, warranty, governing law, insurance, etc.).
Workflow:

You can contact me at [email protected] |
Ayham/roberta_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3806
- Margin Accuracy: 0.8239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.6482 | 0.16 | 20 | 0.6494 | 0.7263 |
| 0.5151 | 0.32 | 40 | 0.5088 | 0.7792 |
| 0.4822 | 0.48 | 60 | 0.4429 | 0.8045 |
| 0.4472 | 0.64 | 80 | 0.4265 | 0.8107 |
| 0.4352 | 0.8 | 100 | 0.4155 | 0.8132 |
| 0.4335 | 0.96 | 120 | 0.4128 | 0.8116 |
| 0.4113 | 1.12 | 140 | 0.4119 | 0.8142 |
| 0.4186 | 1.28 | 160 | 0.4075 | 0.8120 |
| 0.42 | 1.44 | 180 | 0.4072 | 0.8123 |
| 0.4175 | 1.6 | 200 | 0.4080 | 0.8130 |
| 0.4097 | 1.76 | 220 | 0.4031 | 0.8128 |
| 0.397 | 1.92 | 240 | 0.4004 | 0.8130 |
| 0.4115 | 2.08 | 260 | 0.3979 | 0.8136 |
| 0.4108 | 2.24 | 280 | 0.3940 | 0.8167 |
| 0.4125 | 2.4 | 300 | 0.3879 | 0.8218 |
| 0.4117 | 2.56 | 320 | 0.3848 | 0.8217 |
| 0.3967 | 2.72 | 340 | 0.3818 | 0.8231 |
| 0.3947 | 2.88 | 360 | 0.3813 | 0.8240 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
Ayou/chinese_mobile_bert | [
"pytorch",
"mobilebert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"MobileBertForMaskedLM"
],
"model_type": "mobilebert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="thenewcompany/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
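Since `evaluate_agent` comes from the notebook rather than a library, the sketch below approximates what it does: greedy rollouts of the learned Q-table, averaged over several episodes. It assumes the classic Gym API (`reset` returns the state, `step` returns four values) and is not the exact course implementation.
```python
import numpy as np

def greedy_evaluate(env, qtable, n_eval_episodes=100, max_steps=99):
    # Roll out the greedy policy induced by the Q-table and average the returns.
    episode_rewards = []
    for _ in range(n_eval_episodes):
        state = env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))
            state, reward, done, _ = env.step(action)
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```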
|
Azaghast/DistilBART-SCP-ParaSummarization | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"BartForConditionalGeneration"
],
"model_type": "bart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 142,
"min_length": 56,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language: zh
pipeline_tag: fill-mask
widget:
- text: "ๆ นๆฎๆฐ้ปๆฅ้๏ผไธๅคง[MASK]ๆฐๅๅ้ไฝๆถจ่ถ
1๏ผ
ใ"
- text: "็จๅ็ง้ๅพๆฏๆไธญๅฐ[MASK]ไผไธ่่ตใ"
tags:
- bert
license: apache-2.0
---
## Chinese DKPLM (Decomposable Knowledge-enhanced Pre-trained Language Model) for the financial domain
For Chinese natural language processing in specific domains, we provide **Chinese DKPLM (Decomposable Knowledge-enhanced Pre-trained Language Model)** for the financial domain, named **pai-dkplm-financial-base-zh**, from our AAAI 2021 paper **DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding**.
This repository is built on the EasyNLP framework ([https://github.com/alibaba/EasyNLP](https://github.com/alibaba/EasyNLP)), developed by the Alibaba PAI team.
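A fill-mask checkpoint like this one can be queried with the standard `transformers` pipeline. The snippet below is a minimal sketch; the repository path passed to `model=` is an assumption about where the checkpoint is hosted, and the example sentence is only illustrative.
```python
from transformers import pipeline

# Hub path is an assumption -- adjust it to the actual location of pai-dkplm-financial-base-zh.
fill_mask = pipeline("fill-mask", model="alibaba-pai/pai-dkplm-financial-base-zh")

# A financial-domain sentence with one masked token.
for prediction in fill_mask("银行加大对中小[MASK]企业的信贷支持。"):
    print(prediction["token_str"], prediction["score"])
```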
## Citation
If you find this resource useful, please cite the following papers in your work.
- For the EasyNLP framework:
```
@article{easynlp,
title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
publisher = {arXiv},
author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
url = {https://arxiv.org/abs/2205.00258},
year = {2022}
}
```
- For DKPLM:
```
@article{dkplm,
title = {DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding},
author = {Zhang, Taolin and Wang, Chengyu and Hu, Nan and Qiu, Minghui and Tang, Chengguang and He, Xiaofeng and Huang, Jun},
url = {https://arxiv.org/abs/2112.01047},
publisher = {arXiv},
year = {2021}
}
``` |
Azizun/Geotrend-10-epochs | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: MiniLM-L12-H384-uncased-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.875
- name: F1
type: f1
value: 0.9097345132743363
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-L12-H384-uncased-mrpc
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4319
- Accuracy: 0.875
- F1: 0.9097
- Combined Score: 0.8924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Azura/data | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Backedman/DialoGPT-small-Anika | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-2ndfinetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-2ndfinetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_best_finetuned_emotion_experiment_augmented_anger_fear) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3531
- Accuracy: 0.9054
- F1: 0.9034
- Precision: 0.9074
- Recall: 0.9054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4716 | 1.0 | 11 | 0.2851 | 0.8986 | 0.8945 | 0.9029 | 0.8986 |
| 0.2842 | 2.0 | 22 | 0.3041 | 0.8851 | 0.8796 | 0.8816 | 0.8851 |
| 0.167 | 3.0 | 33 | 0.2996 | 0.8986 | 0.8914 | 0.8997 | 0.8986 |
| 0.1527 | 4.0 | 44 | 0.2443 | 0.9189 | 0.9163 | 0.9222 | 0.9189 |
| 0.0926 | 5.0 | 55 | 0.2777 | 0.9054 | 0.9016 | 0.9059 | 0.9054 |
| 0.0897 | 6.0 | 66 | 0.3081 | 0.9122 | 0.9080 | 0.9147 | 0.9122 |
| 0.0438 | 7.0 | 77 | 0.3332 | 0.8986 | 0.8952 | 0.8993 | 0.8986 |
| 0.0433 | 8.0 | 88 | 0.3480 | 0.8851 | 0.8859 | 0.8896 | 0.8851 |
| 0.0398 | 9.0 | 99 | 0.3531 | 0.9054 | 0.9034 | 0.9074 | 0.9054 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Battlehooks/distilbert-base-uncased-finetuned-squad | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-06-10T12:19:45Z | ---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 380.12 +/- 81.26
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
---
# **A2C** Agent playing **Humanoid-v3**
This is a trained model of an **A2C** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env Humanoid-v3 -orga sb3 -f logs/
python enjoy.py --algo a2c --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo a2c --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env Humanoid-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('n_timesteps', 2000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
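The `normalize` entry above corresponds to Stable-Baselines3's `VecNormalize` wrapper with observation normalization only. A minimal sketch of the equivalent manual setup (assuming the MuJoCo dependencies for `Humanoid-v3` are installed):
```python
import gym
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Normalize observations but leave rewards untouched, mirroring normalize_kwargs above.
venv = DummyVecEnv([lambda: gym.make("Humanoid-v3")])
venv = VecNormalize(venv, norm_obs=True, norm_reward=False)
```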
|
BatuhanYilmaz/bert-finetuned-mrpc | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# House MD DialoGPT Model |
BatuhanYilmaz/bert-finetuned-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-06-10T12:24:12Z | ---
language: en
datasets:
- ccdv/pubmed-summarization
license: apache-2.0
---
## Introduction
[Google's LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) was introduced as an extension of the successful [T5 model](https://arxiv.org/pdf/1910.10683.pdf).
This is an unofficial *longt5-large-16384-pubmed-3k_steps* checkpoint, i.e., a large configuration of the LongT5 model with `transient-global` attention, fine-tuned on the [pubmed summarization dataset](https://huggingface.co/datasets/ccdv/pubmed-summarization) for 3,000 training steps. It may be worth continuing the fine-tuning, as we did not train the model until convergence.
## Results and Fine-tuning Details
The results achieved by the fine-tuned model on the evaluation set, using beam search with 3 beams and no specific calibration of generation parameters, are presented below together with the results from the original paper (the original scores are higher, very likely due to a higher number of training steps).
| Metric | Score | Score (original paper)
| --- | --- | --- |
| Rouge-1 | 47.44 | 49.98 |
| Rouge-2 | 22.68 | 24.69 |
| Rouge-L | 29.83 | x |
| Rouge-Lsum | 43.13 | 46.46 |
The training parameters follow the ones specified in the paper. We accumulated the batch size to 128 examples and used the `Adafactor` optimizer with a constant learning rate of `0.001`. The full training hyper-parameters and logs can be found via the following [W&B run](https://wandb.ai/stancld/LongT5/runs/1lwncl8a?workspace=user-stancld). The model was trained using the [Hugging Face trainer](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer_seq2seq.py).
The only task-specific adjustment I made for the training was dropping very short input articles (fewer than 16 words; a slight mistake, it should have been fewer than 16 tokens), as these sequences do not contribute to gradient creation in the *transient-global* attention, which resulted in training crashes when DDP was used.
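For reference, the setup described above roughly corresponds to the following `Seq2SeqTrainingArguments`. The per-device batch size and accumulation steps are assumptions chosen to reproduce the 128-example effective batch; only the optimizer, learning rate, schedule, and step count come from the description.
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="longt5-large-pubmed",
    per_device_train_batch_size=1,    # assumption
    gradient_accumulation_steps=128,  # assumption: yields 128-example batches
    learning_rate=1e-3,
    lr_scheduler_type="constant",
    optim="adafactor",
    max_steps=3000,
    predict_with_generate=True,
)
```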
## Usage
```python
LONG_ARTICLE = """"anxiety affects quality of life in those living
with parkinson 's disease ( pd ) more so than
overall cognitive status , motor deficits , apathy
, and depression [ 13 ] . although anxiety and
depression are often related and coexist in pd
patients , recent research suggests that anxiety
rather than depression is the most prominent and
prevalent mood disorder in pd [ 5 , 6 ] . yet ,
our current understanding of anxiety and its
impact on cognition in pd , as well as its neural
basis and best treatment practices , remains
meager and lags far behind that of depression .
overall , neuropsychiatric symptoms in pd have
been shown to be negatively associated with
cognitive performance . for example , higher
depression scores have been correlated with lower
scores on the mini - mental state exam ( mmse ) [
8 , 9 ] as well as tests of memory and executive
functions ( e.g. , attention ) [ 1014 ] . likewise
, apathy and anhedonia in pd patients have been
associated with executive dysfunction [ 10 , 1523
] . however , few studies have specifically
investigated the relationship between anxiety and
cognition in pd . one study showed a strong
negative relationship between anxiety ( both state
and trait ) and overall cognitive performance (
measured by the total of the repeatable battery
for the assessment of neuropsychological status
index ) within a sample of 27 pd patients .
furthermore , trait anxiety was negatively
associated with each of the cognitive domains
assessed by the rbans ( i.e. , immediate memory ,
visuospatial construction , language , attention ,
and delayed memory ) . two further studies have
examined whether anxiety differentially affects
cognition in patients with left - sided dominant
pd ( lpd ) versus right - sided dominant pd ( rpd
) ; however , their findings were inconsistent .
the first study found that working memory
performance was worse in lpd patients with anxiety
compared to rpd patients with anxiety , whereas
the second study reported that , in lpd , apathy
but not anxiety was associated with performance on
nonverbally mediated executive functions and
visuospatial tasks ( e.g. , tmt - b , wms - iii
spatial span ) , while in rpd , anxiety but not
apathy significantly correlated with performance
on verbally mediated tasks ( e.g. , clock reading
test and boston naming test ) . furthermore ,
anxiety was significantly correlated with
neuropsychological measures of attention and
executive and visuospatial functions . taken
together , it is evident that there are limited
and inconsistent findings describing the
relationship between anxiety and cognition in pd
and more specifically how anxiety might influence
particular domains of cognition such as attention
and memory and executive functioning . it is also
striking that , to date , no study has examined
the influence of anxiety on cognition in pd by
directly comparing groups of pd patients with and
without anxiety while excluding depression . given
that research on healthy young adults suggests
that anxiety reduces processing capacity and
impairs processing efficiency , especially in the
central executive and attentional systems of
working memory [ 26 , 27 ] , we hypothesized that
pd patients with anxiety would show impairments in
attentional set - shifting and working memory
compared to pd patients without anxiety .
furthermore , since previous work , albeit limited
, has focused on the influence of symptom
laterality on anxiety and cognition , we also
explored this relationship . seventeen pd patients
with anxiety and thirty - three pd patients
without anxiety were included in this study ( see
table 1 ) . the cross - sectional data from these
participants was taken from a patient database
that has been compiled over the past 8 years (
since 2008 ) at the parkinson 's disease research
clinic at the brain and mind centre , university
of sydney . inclusion criteria involved a
diagnosis of idiopathic pd according to the united
kingdom parkinson 's disease society brain bank
criteria and were confirmed by a neurologist (
sjgl ) . patients also had to have an adequate
proficiency in english and have completed a full
neuropsychological assessment . ten patients in
this study ( 5 pd with anxiety ; 5 pd without
anxiety ) were taking psychotropic drugs ( i.e. ,
benzodiazepine or selective serotonin reuptake
inhibitor ) . patients were also excluded if they
had other neurological disorders , psychiatric
disorders other than affective disorders ( such as
anxiety ) , or if they reported a score greater
than six on the depression subscale of the
hospital anxiety and depression scale ( hads ) .
thus , all participants who scored within a
depressed ( hads - d > 6 ) range were excluded
from this study , in attempt to examine a refined
sample of pd patients with and without anxiety in
order to determine the independent effect of
anxiety on cognition . this research was approved
by the human research ethics committee of the
university of sydney , and written informed
consent was obtained from all participants . self
- reported hads was used to assess anxiety in pd
and has been previously shown to be a useful
measure of clinical anxiety in pd . a cut - off
score of > 8 on the anxiety subscale of the hads (
hads - a ) was used to identify pd cases with
anxiety ( pda+ ) , while a cut - off score of < 6
on the hads - a was used to identify pd cases
without anxiety ( pda ) . this criterion was more
stringent than usual ( > 7 cut - off score ) , in
effort to create distinct patient groups . the
neurological evaluation rated participants
according to hoehn and yahr ( h&y ) stages and
assessed their motor symptoms using part iii of
the revised mds task force unified parkinson 's
disease rating scale ( updrs ) . in a similar way
this was determined by calculating a total left
and right score from rigidity items 3035 ,
voluntary movement items 3643 , and tremor items
5057 from the mds - updrs part iii ( see table 1 )
. processing speed was assessed using the trail
making test , part a ( tmt - a , z - score ) .
attentional set - shifting was measured using the
trail making test , part b ( tmt - b , z - score )
. working memory was assessed using the digit span
forward and backward subtest of the wechsler
memory scale - iii ( raw scores ) . language was
assessed with semantic and phonemic verbal fluency
via the controlled oral word associated test (
cowat animals and letters , z - score ) . the
ability to retain learned verbal memory was
assessed using the logical memory subtest from the
wechsler memory scale - iii ( lm - i z - score ,
lm - ii z - score , % lm retention z - score ) .
the mini - mental state examination ( mmse )
demographic , clinical , and neuropsychological
variables were compared between the two groups
with the independent t - test or mann whitney u
test , depending on whether the variable met
parametric assumptions . chi - square tests were
used to examine gender and symptom laterality
differences between groups . all analyses employed
an alpha level of p < 0.05 and were two - tailed .
spearman correlations were performed separately in
each group to examine associations between anxiety
and/or depression ratings and cognitive functions
. as expected , the pda+ group reported
significant greater levels of anxiety on the hads
- a ( u = 0 , p < 0.001 ) and higher total score
on the hads ( u = 1 , p < 0.001 ) compared to the
pda group ( table 1 ) . groups were matched in age
( t(48 ) = 1.31 , p = 0.20 ) , disease duration (
u = 259 , p = 0.66 ) , updrs - iii score ( u =
250.5 , p = 0.65 ) , h&y ( u = 245 , p = 0.43 ) ,
ledd ( u = 159.5 , p = 0.80 ) , and depression (
hads - d ) ( u = 190.5 , p = 0.06 ) . additionally
, all groups were matched in the distribution of
gender ( = 0.098 , p = 0.75 ) and side - affected
( = 0.765 , p = 0.38 ) . there were no group
differences for tmt - a performance ( u = 256 , p
= 0.62 ) ( table 2 ) ; however , the pda+ group
had worse performance on the trail making test
part b ( t(46 ) = 2.03 , p = 0.048 ) compared to
the pda group ( figure 1 ) . the pda+ group also
demonstrated significantly worse performance on
the digit span forward subtest ( t(48 ) = 2.22 , p
= 0.031 ) and backward subtest ( u = 190.5 , p =
0.016 ) compared to the pda group ( figures 2(a )
and 2(b ) ) . neither semantic verbal fluency (
t(47 ) = 0.70 , p = 0.49 ) nor phonemic verbal
fluency ( t(47 ) = 0.39 , p = 0.70 ) differed
between groups . logical memory i immediate recall
test ( u = 176 , p = 0.059 ) showed a trend that
the pda+ group had worse new verbal learning and
immediate recall abilities than the pda group .
however , logical memory ii test performance ( u =
219 , p = 0.204 ) and logical memory % retention (
u = 242.5 , p = 0.434 ) did not differ between
groups . there were also no differences between
groups in global cognition ( mmse ) ( u = 222.5 ,
p = 0.23 ) . participants were split into lpd and
rpd , and then further group differences were
examined between pda+ and pda. importantly , the
groups remained matched in age , disease duration
, updrs - iii , dde , h&y stage , and depression
but remained significantly different on self -
reported anxiety . lpda+ demonstrated worse
performance on the digit span forward test ( t(19
) = 2.29 , p = 0.033 ) compared to lpda , whereas
rpda+ demonstrated worse performance on the digit
span backward test ( u = 36.5 , p = 0.006 ) , lm -
i immediate recall ( u = 37.5 , p = 0.008 ) , and
lm - ii ( u = 45.0 , p = 0.021 ) but not lm %
retention ( u = 75.5 , p = 0.39 ) compared to
rpda. this study is the first to directly compare
cognition between pd patients with and without
anxiety . the findings confirmed our hypothesis
that anxiety negatively influences attentional set
- shifting and working memory in pd . more
specifically , we found that pd patients with
anxiety were more impaired on the trail making
test part b which assessed attentional set -
shifting , on both digit span tests which assessed
working memory and attention , and to a lesser
extent on the logical memory test which assessed
memory and new verbal learning compared to pd
patients without anxiety . taken together , these
findings suggest that anxiety in pd may reduce
processing capacity and impair processing
efficiency , especially in the central executive
and attentional systems of working memory in a
similar way as seen in young healthy adults [ 26 ,
27 ] . although the neurobiology of anxiety in pd
remains unknown , many researchers have postulated
that anxiety disorders are related to
neurochemical changes that occur during the early
, premotor stages of pd - related degeneration [
37 , 38 ] such as nigrostriatal dopamine depletion
, as well as cell loss within serotonergic and
noradrenergic brainstem nuclei ( i.e. , raphe
nuclei and locus coeruleus , resp . , which
provide massive inputs to corticolimbic regions )
. over time , chronic dysregulation of
adrenocortical and catecholamine functions can
lead to hippocampal damage as well as
dysfunctional prefrontal neural circuitries [ 39 ,
40 ] , which play a key role in memory and
attention . recent functional neuroimaging work
has suggested that enhanced hippocampal activation
during executive functioning and working memory
tasks may represent compensatory processes for
impaired frontostriatal functions in pd patients
compared to controls . therefore , chronic stress
from anxiety , for example , may disrupt
compensatory processes in pd patients and explain
the cognitive impairments specifically in working
memory and attention seen in pd patients with
anxiety . it has also been suggested that
hyperactivation within the putamen may reflect a
compensatory striatal mechanism to maintain normal
working memory performance in pd patients ;
however , losing this compensatory activation has
been shown to contribute to poor working memory
performance . anxiety in mild pd has been linked
to reduced putamen dopamine uptake which becomes
more extensive as the disease progresses . this
further supports the notion that anxiety may
disrupt compensatory striatal mechanisms as well ,
providing another possible explanation for the
cognitive impairments observed in pd patients with
anxiety in this study . noradrenergic and
serotonergic systems should also be considered
when trying to explain the mechanisms by which
anxiety may influence cognition in pd . although
these neurotransmitter systems are relatively
understudied in pd cognition , treating the
noradrenergic and serotonergic systems has shown
beneficial effects on cognition in pd . selective
serotonin reuptake inhibitor , citalopram , was
shown to improve response inhibition deficits in
pd , while noradrenaline reuptake blocker ,
atomoxetine , has been recently reported to have
promising effects on cognition in pd [ 45 , 46 ] .
overall , very few neuroimaging studies have been
conducted in pd in order to understand the neural
correlates of pd anxiety and its underlying neural
pathology . future research should focus on
relating anatomical changes and neurochemical
changes to neural activation in order to gain a
clearer understanding on how these pathologies
affect anxiety in pd . to further understand how
anxiety and cognitive dysfunction are related ,
future research should focus on using advanced
structural and function imaging techniques to
explain both cognitive and neural breakdowns that
are associated with anxiety in pd patients .
research has indicated that those with amnestic
mild cognitive impairment who have more
neuropsychiatric symptoms have a greater risk of
developing dementia compared to those with fewer
neuropsychiatric symptoms . future studies should
also examine whether treating neuropsychiatric
symptoms might impact the progression of cognitive
decline and improve cognitive impairments in pd
patients . previous studies have used pd symptom
laterality as a window to infer asymmetrical
dysfunction of neural circuits . for example , lpd
patients have greater inferred right hemisphere
pathology , whereas rpd patients have greater
inferred left hemisphere pathology . thus ,
cognitive domains predominantly subserved by the
left hemisphere ( e.g. , verbally mediated tasks
of executive function and verbal memory ) might be
hypothesized to be more affected in rpd than lpd ;
however , this remains controversial . it has also
been suggested that since anxiety is a common
feature of left hemisphere involvement [ 48 , 49 ]
, cognitive domains subserved by the left
hemisphere may also be more strongly related to
anxiety . results from this study showed selective
verbal memory deficits in rpd patients with
anxiety compared to rpd without anxiety , whereas
lpd patients with anxiety had greater attentional
/ working memory deficits compared to lpd without
anxiety . although these results align with
previous research , interpretations of these
findings should be made with caution due to the
small sample size in the lpd comparison
specifically . recent work has suggested that the
hads questionnaire may underestimate the burden of
anxiety related symptomology and therefore be a
less sensitive measure of anxiety in pd [ 30 , 50
] . in addition , our small sample size also
limited the statistical power for detecting
significant findings . based on these limitations
, our findings are likely conservative and
underrepresent the true impact anxiety has on
cognition in pd . additionally , the current study
employed a very brief neuropsychological
assessment including one or two tests for each
cognitive domain . future studies are encouraged
to collect a more complex and comprehensive
battery from a larger sample of pd participants in
order to better understand the role anxiety plays
on cognition in pd . another limitation of this
study was the absence of diagnostic interviews to
characterize participants ' psychiatric symptoms
and specify the type of anxiety disorders included
in this study . future studies should perform
diagnostic interviews with participants ( e.g. ,
using dsm - v criteria ) rather than relying on
self - reported measures to group participants ,
in order to better understand whether the type of
anxiety disorder ( e.g. , social anxiety , phobias
, panic disorders , and generalized anxiety )
influences cognitive performance differently in pd
. one advantage the hads questionnaire provided
over other anxiety scales was that it assessed
both anxiety and depression simultaneously and
allowed us to control for coexisting depression .
although there was a trend that the pda+ group
self - reported higher levels of depression than
the pda group , all participants included in the
study scored < 6 on the depression subscale of the
hads . controlling for depression while assessing
anxiety has been identified as a key shortcoming
in the majority of recent work . considering many
previous studies have investigated the influence
of depression on cognition in pd without
accounting for the presence of anxiety and the
inconsistent findings reported to date , we
recommend that future research should try to
disentangle the influence of anxiety versus
depression on cognitive impairments in pd .
considering the growing number of clinical trials
for treating depression , there are few if any for
the treatment of anxiety in pd . anxiety is a key
contributor to decreased quality of life in pd and
greatly requires better treatment options .
moreover , anxiety has been suggested to play a
key role in freezing of gait ( fog ) , which is
also related to attentional set - shifting [ 52 ,
53 ] . future research should examine the link
between anxiety , set - shifting , and fog , in
order to determine whether treating anxiety might
be a potential therapy for improving fog ."""
import torch
from transformers import AutoTokenizer, LongT5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps")
input_ids = tokenizer(LONG_ARTICLE, return_tensors="pt").input_ids.to("cuda")
model = LongT5ForConditionalGeneration.from_pretrained("Stancld/longt5-tglobal-large-16384-pubmed-3k_steps", return_dict_in_generate=True).to("cuda")
sequences = model.generate(input_ids).sequences
summary = tokenizer.batch_decode(sequences)
``` |
BatuhanYilmaz/bert-finetuned-nerxD | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-06-10T12:36:40Z | ---
library_name: keras
tags:
- image-classification
- computer-vision
- consistency-regularization
- cifar10
---
## Model description
### Consistency training with supervision
[Keras Example Link](https://keras.io/examples/vision/consistency_training/)
In this example, we have trained an image classification model enforcing a sense of consistency inside it by doing the following:
- Train a standard image classification model.
- Train an equal or larger model on a noisy version of the dataset (augmented using RandAugment).
- To do this, we will first obtain predictions of the previous model on the clean images of the dataset.
- We will then use these predictions and train the second model to match these predictions on the noisy variant of the same images. This is identical to the workflow of Knowledge Distillation, but since the student model is equal to or larger in size, this process is also referred to as Self-Training.
This overall training workflow finds its roots in works like FixMatch, Unsupervised Data Augmentation for Consistency Training, and Noisy Student Training. Since this training process encourages a model to yield consistent predictions for clean as well as noisy images, it's often referred to as consistency training or training with consistency regularization. Although the example focuses on using consistency training to enhance the robustness of models to common corruptions, this example can also serve as a template for performing weakly supervised learning.
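As a rough illustration of that second stage, the snippet below sketches one consistency-training step in TensorFlow/Keras: the frozen teacher predicts on clean images and the student is pulled towards those predictions on the noisy views. Names and the exact loss are illustrative assumptions, not the precise code of the referenced Keras example.
```python
import tensorflow as tf

kl = tf.keras.losses.KLDivergence()

def consistency_step(teacher, student, optimizer, clean_images, noisy_images):
    # Teacher predictions on the clean images (weights stay frozen); models are assumed to output logits.
    teacher_probs = tf.nn.softmax(teacher(clean_images, training=False), axis=-1)

    with tf.GradientTape() as tape:
        student_probs = tf.nn.softmax(student(noisy_images, training=True), axis=-1)
        # KL divergence pulls the student's predictions on noisy images
        # towards the teacher's predictions on their clean counterparts.
        loss = kl(teacher_probs, student_probs)

    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss
```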
Full Credits to <a href = "https://twitter.com/RisingSayak" target='_blank'> Sayak Paul </a> for this work.
This repo contains only the <b>Teacher Model</b> of this training example.
The <b>Student Model</b> repo can be found at this <a href = "" target='_blank'> Link </a>.
## Intended uses & limitations
More information needed
## Training and evaluation data
Trained and evaluated on [CIFAR-10](https://keras.io/api/datasets/cifar10/) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| name | optimizer | average_period | start_averaging | training_precision |
|----|---------|--------------|---------------|------------------|
|SWA|{'class_name': 'Adam', 'config': {'name': 'Adam', 'learning_rate': 1.0000001e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}}|10|0|float32|
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28 | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 18 | null | ---
library_name: keras
tags:
- image-classification
- computer-vision
- consistency-regularization
- cifar10
---
## Model description
### Consistency training with supervision
[Keras Example Link](https://keras.io/examples/vision/consistency_training/)
In this example, we have trained an image classification model enforcing a sense of consistency inside it by doing the following:
- Train a standard image classification model.
- Train an equal or larger model on a noisy version of the dataset (augmented using RandAugment).
- To do this, we will first obtain predictions of the previous model on the clean images of the dataset.
- We will then use these predictions and train the second model to match these predictions on the noisy variant of the same images. This is identical to the workflow of Knowledge Distillation, but since the student model is equal to or larger in size, this process is also referred to as Self-Training.
This overall training workflow finds its roots in works like FixMatch, Unsupervised Data Augmentation for Consistency Training, and Noisy Student Training. Since this training process encourages a model to yield consistent predictions for clean as well as noisy images, it's often referred to as consistency training or training with consistency regularization. Although the example focuses on using consistency training to enhance the robustness of models to common corruptions, this example can also serve as a template for performing weakly supervised learning.
Full Credits to <a href = "https://twitter.com/RisingSayak" target='_blank'> Sayak Paul </a> for this work.
This repo contains only the <b>Student Model</b> of this training example.
The <b>Teacher Model</b> repo can be found at this <a href = "" target='_blank'> Link </a>.
## Intended uses & limitations
More information needed
## Training and evaluation data
Trained and evaluated on [CIFAR-10](https://keras.io/api/datasets/cifar10/) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| name | optimizer | average_period | start_averaging | training_precision |
|----|---------|--------------|---------------|------------------|
|SWA|{'class_name': 'Adam', 'config': {'name': 'Adam', 'learning_rate': 3.9063e-06, 'decay': 0.5, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}}|10|0|float32|
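The optimizer row above corresponds to stochastic weight averaging (SWA) wrapped around Adam. A minimal sketch of how such an optimizer can be built, assuming TensorFlow Addons is available (the learning rate comes from the table; the decay setting is omitted here):
```python
import tensorflow as tf
import tensorflow_addons as tfa

# Adam wrapped in stochastic weight averaging, mirroring the table:
# start_averaging=0, average_period=10.
base_optimizer = tf.keras.optimizers.Adam(learning_rate=3.9063e-06)
optimizer = tfa.optimizers.SWA(base_optimizer, start_averaging=0, average_period=10)
```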
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
BatuhanYilmaz/dummy-model | [
"tf",
"camembert",
"fill-mask",
"transformers",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
tags:
- generated_from_trainer
datasets:
- ydshieh/coco_dataset_script
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of [./models/clip-roberta](https://huggingface.co/./models/clip-roberta) on the ydshieh/coco_dataset_script 2017 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
BatuhanYilmaz/marian-finetuned-kde4-en-to-fr | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-06-10T12:49:12Z | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "ๆฅๆฌใซ็ใใใ[MASK]ใ่จชใญใชใใใ"
---
# deberta-large-japanese-unidic
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 (Aozora Bunko) texts with BertJapaneseTokenizer. You can fine-tune `deberta-large-japanese-unidic` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic")
```
[fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required.
|
BatuhanYilmaz/mlm-finetuned-imdb | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"
---
# deberta-large-japanese-unidic-luw-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 (Aozora Bunko) texts for POS-tagging and dependency-parsing, derived from [deberta-large-japanese-unidic](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos")
s="ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-large-japanese-unidic-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
[fugashi](https://pypi.org/project/fugashi), [unidic-lite](https://pypi.org/project/unidic-lite) and [pytokenizations](https://pypi.org/project/pytokenizations) are required.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
Baybars/wav2vec2-xls-r-1b-turkish | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
model-index:
- name: camembert-base-finetuned-LineCause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-LineCause
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
- F1: 1.0
- Recall: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:------:|
| 0.0428 | 1.0 | 4409 | 0.0002 | 1.0 | 1.0 | 1.0 |
| 0.0009 | 2.0 | 8818 | 0.0001 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi3-colab | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6653 with parameters:
```
{'batch_size': 75, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`bpr_loss.BPRLossFunction`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Bharathdamu/wav2vec2-model-hindi-stt | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 817.50 +/- 327.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga meln1k -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga meln1k
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
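## Loading the checkpoint in Python
The checkpoint can also be loaded directly in Python, as in the minimal sketch below; the repository id and filename are assumptions based on RL Zoo naming conventions.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed repo id and filename, following RL Zoo naming conventions.
checkpoint = load_from_hub(
    repo_id="meln1k/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
print(model.policy)
```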
|
BigSalmon/BestMask2 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/smallmutuals/1654888348503/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433527116948180999/wejtDhFm_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cool Owl Guy</div>
<div style="text-align: center; font-size: 14px;">@smallmutuals</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Cool Owl Guy.
| Data | Cool Owl Guy |
| --- | --- |
| Tweets downloaded | 367 |
| Retweets | 45 |
| Short tweets | 25 |
| Tweets kept | 297 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/238iiiu5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @smallmutuals's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hl8vi9y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hl8vi9y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/smallmutuals')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/FormalBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/jana_aych_ess/1654888920998/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1169751139409117185/BU60y7P5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jana 'All Cops Are Bastards' H-S (they/them)</div>
<div style="text-align: center; font-size: 14px;">@jana_aych_ess</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jana 'All Cops Are Bastards' H-S (they/them).
| Data | Jana 'All Cops Are Bastards' H-S (they/them) |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 343 |
| Short tweets | 148 |
| Tweets kept | 2743 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3q5i1d01/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jana_aych_ess's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3uy7dmw6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3uy7dmw6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jana_aych_ess')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/FormalBerta3 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distilBERT_mLM_V5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distilBERT_mLM_V5
This model is a fine-tuned version of [FritzOS/TEdetection_distiBERT_mLM_V2](https://huggingface.co/FritzOS/TEdetection_distiBERT_mLM_V2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
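## How to use
A minimal fill-mask sketch, assuming the checkpoint is published as `FritzOS/TEdetection_distilBERT_mLM_V5` (the same namespace as the base checkpoint linked above):
```python
from transformers import pipeline

# Assumed Hub id, matching the FritzOS namespace of the base checkpoint.
fill_mask = pipeline("fill-mask", model="FritzOS/TEdetection_distilBERT_mLM_V5")
print(fill_mask("The payment was flagged as [MASK]."))
```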
|
BigSalmon/GPT2HardandEasy | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1446572046679302144/jF9HS_Yd_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ninja Sex Party</div>
<div style="text-align: center; font-size: 14px;">@ninjasexparty</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ninja Sex Party.
| Data | Ninja Sex Party |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 631 |
| Short tweets | 439 |
| Tweets kept | 2180 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ik0ji2l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ninjasexparty's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1jyhmzsa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1jyhmzsa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ninjasexparty')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/GPTNeo350MInformalToFormalLincoln4 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented
This model is a fine-tuned version of [DeepPavlov/distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5908
- Accuracy: 0.8653
- F1: 0.8656
- Precision: 0.8665
- Recall: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9172 | 1.0 | 69 | 0.5124 | 0.8246 | 0.8220 | 0.8271 | 0.8246 |
| 0.4709 | 2.0 | 138 | 0.4279 | 0.8528 | 0.8505 | 0.8588 | 0.8528 |
| 0.3194 | 3.0 | 207 | 0.3770 | 0.8737 | 0.8727 | 0.8740 | 0.8737 |
| 0.2459 | 4.0 | 276 | 0.3951 | 0.8685 | 0.8682 | 0.8692 | 0.8685 |
| 0.1824 | 5.0 | 345 | 0.4005 | 0.8831 | 0.8834 | 0.8841 | 0.8831 |
| 0.1515 | 6.0 | 414 | 0.4356 | 0.8800 | 0.8797 | 0.8801 | 0.8800 |
| 0.1274 | 7.0 | 483 | 0.4642 | 0.8727 | 0.8726 | 0.8731 | 0.8727 |
| 0.0833 | 8.0 | 552 | 0.5226 | 0.8633 | 0.8627 | 0.8631 | 0.8633 |
| 0.073 | 9.0 | 621 | 0.5327 | 0.8695 | 0.8686 | 0.8692 | 0.8695 |
| 0.0575 | 10.0 | 690 | 0.5908 | 0.8653 | 0.8656 | 0.8665 | 0.8653 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
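## How to use
A minimal inference sketch, assuming the checkpoint is published as `mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented`; Russian input is expected, and the emotion label set is defined by the training data:
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented",
)
print(clf("Сегодня отличный день!"))
```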
|
BigSalmon/GPTNeo350MInformalToFormalLincoln6 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_NER_V5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_NER_V5
This model is a fine-tuned version of [FritzOS/TEdetection_distilBERT_mLM_V5](https://huggingface.co/FritzOS/TEdetection_distilBERT_mLM_V5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0029
- Validation Loss: 0.0032
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0029 | 0.0032 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
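## How to use
A minimal token-classification sketch; the repository id is an assumption based on the model name and the author's other checkpoints, and the entity labels depend on the training data:
```python
from transformers import pipeline

# Assumed Hub id; adjust to the actual repository path of this checkpoint.
ner = pipeline(
    "token-classification",
    model="FritzOS/TEdetection_distiBERT_NER_V5",
    aggregation_strategy="simple",
)
print(ner("Send 2 ETH to this address before midnight."))
```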
|
BigSalmon/GoodMaskResults | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-2ndfinetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-2ndfinetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2085
- Accuracy: 0.9333
- F1: 0.9319
- Precision: 0.9336
- Recall: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4825 | 1.0 | 13 | 0.2988 | 0.8848 | 0.8827 | 0.9056 | 0.8848 |
| 0.2652 | 2.0 | 26 | 0.2435 | 0.9212 | 0.9216 | 0.9282 | 0.9212 |
| 0.168 | 3.0 | 39 | 0.2120 | 0.9515 | 0.9501 | 0.9524 | 0.9515 |
| 0.1593 | 4.0 | 52 | 0.1962 | 0.9333 | 0.9330 | 0.9366 | 0.9333 |
| 0.1294 | 5.0 | 65 | 0.1855 | 0.9333 | 0.9334 | 0.9355 | 0.9333 |
| 0.1065 | 6.0 | 78 | 0.1780 | 0.9394 | 0.9393 | 0.9399 | 0.9394 |
| 0.0908 | 7.0 | 91 | 0.1967 | 0.9394 | 0.9388 | 0.9388 | 0.9394 |
| 0.0432 | 8.0 | 104 | 0.2085 | 0.9333 | 0.9319 | 0.9336 | 0.9333 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
BigSalmon/InformalToFormalLincoln22 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language: en
---
# LFTW R1 Target
The R1 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! |
BigSalmon/MrLincoln2 | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/jedwill1999/1654902604867/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510152678919135250/lfEmlEGJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">a local</div>
<div style="text-align: center; font-size: 14px;">@jedwill1999</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from a local.
| Data | a local |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 1080 |
| Short tweets | 525 |
| Tweets kept | 1641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qsnsp6t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jedwill1999's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mjjc73pu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jedwill1999')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/NEO125InformalToFormalLincoln | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/froliki2108/1654905851117/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447692349493100549/1PV2c-PJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Froliki๐๐๐</div>
<div style="text-align: center; font-size: 14px;">@froliki2108</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Froliki๐๐๐.
| Data | Froliki๐๐๐ |
| --- | --- |
| Tweets downloaded | 2223 |
| Retweets | 1133 |
| Short tweets | 229 |
| Tweets kept | 861 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2tug3miv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @froliki2108's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3otsf5pj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/froliki2108')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/ParaphraseParentheses | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/tonebot_/1654906535396/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1447253318380793858/VVNhWBGI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tone bot</div>
<div style="text-align: center; font-size: 14px;">@tonebot_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tone bot.
| Data | tone bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 537 |
| Tweets kept | 2713 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ot29sc5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tonebot_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g614pb8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g614pb8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tonebot_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/PhraseBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: SCRATCH_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCRATCH_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5583
- Otaku Benchmark VN BLEU: 19.12
- Otaku Benchmark LN BLEU: 11.55
- Otaku Benchmark MANGA BLEU: 12.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.0252 | 0.02 | 2000 | 2.4140 |
| 2.8406 | 0.03 | 4000 | 2.2819 |
| 2.7505 | 0.05 | 6000 | 2.3018 |
| 2.6948 | 0.06 | 8000 | 2.1931 |
| 2.6408 | 0.08 | 10000 | 2.1724 |
| 2.6004 | 0.09 | 12000 | 2.1583 |
| 2.5685 | 0.11 | 14000 | 2.1203 |
| 2.5432 | 0.12 | 16000 | 2.1593 |
| 2.5153 | 0.14 | 18000 | 2.1009 |
| 2.4906 | 0.15 | 20000 | 2.0899 |
| 2.4709 | 0.17 | 22000 | 2.0512 |
| 2.4471 | 0.18 | 24000 | 2.0208 |
| 2.4295 | 0.2 | 26000 | 2.0773 |
| 2.4154 | 0.21 | 28000 | 2.0441 |
| 2.4008 | 0.23 | 30000 | 2.0235 |
| 2.3834 | 0.24 | 32000 | 2.0190 |
| 2.3709 | 0.26 | 34000 | 1.9831 |
| 2.3537 | 0.27 | 36000 | 1.9870 |
| 2.3486 | 0.29 | 38000 | 1.9692 |
| 2.3346 | 0.3 | 40000 | 1.9517 |
| 2.3195 | 0.32 | 42000 | 1.9800 |
| 2.3104 | 0.33 | 44000 | 1.9676 |
| 2.298 | 0.35 | 46000 | 1.9563 |
| 2.2905 | 0.36 | 48000 | 1.9217 |
| 2.2792 | 0.38 | 50000 | 1.9195 |
| 2.2714 | 0.39 | 52000 | 1.9109 |
| 2.2593 | 0.41 | 54000 | 1.9044 |
| 2.2582 | 0.42 | 56000 | 1.8876 |
| 2.2482 | 0.44 | 58000 | 1.8860 |
| 2.2394 | 0.45 | 60000 | 1.8887 |
| 2.2273 | 0.47 | 62000 | 1.8862 |
| 2.2255 | 0.48 | 64000 | 1.8705 |
| 2.2166 | 0.5 | 66000 | 1.8696 |
| 2.2075 | 0.51 | 68000 | 1.8657 |
| 2.1992 | 0.53 | 70000 | 1.8585 |
| 2.1969 | 0.54 | 72000 | 1.8526 |
| 2.1894 | 0.56 | 74000 | 1.8493 |
| 2.1817 | 0.57 | 76000 | 1.8480 |
| 2.1771 | 0.59 | 78000 | 1.8333 |
| 2.1683 | 0.6 | 80000 | 1.8342 |
| 2.1667 | 0.62 | 82000 | 1.8537 |
| 2.1546 | 0.63 | 84000 | 1.8261 |
| 2.1467 | 0.65 | 86000 | 1.8092 |
| 2.1421 | 0.66 | 88000 | 1.8137 |
| 2.1395 | 0.68 | 90000 | 1.8286 |
| 2.1313 | 0.69 | 92000 | 1.8042 |
| 2.1241 | 0.71 | 94000 | 1.7934 |
| 2.1214 | 0.72 | 96000 | 1.7940 |
| 2.12 | 0.74 | 98000 | 1.8064 |
| 2.1096 | 0.75 | 100000 | 1.7983 |
| 2.1035 | 0.77 | 102000 | 1.8089 |
| 2.0937 | 0.78 | 104000 | 1.7941 |
| 2.0893 | 0.8 | 106000 | 1.7791 |
| 2.0869 | 0.81 | 108000 | 1.7807 |
| 2.0845 | 0.83 | 110000 | 1.7852 |
| 2.0782 | 0.84 | 112000 | 1.7675 |
| 2.0755 | 0.86 | 114000 | 1.7756 |
| 2.0657 | 0.87 | 116000 | 1.7604 |
| 2.0614 | 0.89 | 118000 | 1.7447 |
| 2.0591 | 0.9 | 120000 | 1.7489 |
| 2.0586 | 0.92 | 122000 | 1.7550 |
| 2.0498 | 0.93 | 124000 | 1.7543 |
| 2.0455 | 0.95 | 126000 | 1.7510 |
| 2.04 | 0.96 | 128000 | 1.7439 |
| 2.0385 | 0.98 | 130000 | 1.7407 |
| 2.0267 | 0.99 | 132000 | 1.7467 |
| 2.0088 | 1.01 | 134000 | 1.7455 |
| 1.9826 | 1.02 | 136000 | 1.7210 |
| 1.9785 | 1.04 | 138000 | 1.7524 |
| 1.9777 | 1.05 | 140000 | 1.7272 |
| 1.9763 | 1.07 | 142000 | 1.7283 |
| 1.9736 | 1.08 | 144000 | 1.7210 |
| 1.9704 | 1.1 | 146000 | 1.7001 |
| 1.9625 | 1.11 | 148000 | 1.7112 |
| 1.9665 | 1.13 | 150000 | 1.7236 |
| 1.9592 | 1.14 | 152000 | 1.7169 |
| 1.9606 | 1.16 | 154000 | 1.6962 |
| 1.9571 | 1.17 | 156000 | 1.7064 |
| 1.9532 | 1.19 | 158000 | 1.6898 |
| 1.9465 | 1.2 | 160000 | 1.7004 |
| 1.9438 | 1.22 | 162000 | 1.7092 |
| 1.9435 | 1.23 | 164000 | 1.6927 |
| 1.9361 | 1.25 | 166000 | 1.6838 |
| 1.9369 | 1.26 | 168000 | 1.6784 |
| 1.9287 | 1.28 | 170000 | 1.6709 |
| 1.928 | 1.29 | 172000 | 1.6735 |
| 1.9227 | 1.31 | 174000 | 1.6689 |
| 1.9213 | 1.32 | 176000 | 1.6685 |
| 1.9152 | 1.34 | 178000 | 1.6635 |
| 1.9092 | 1.35 | 180000 | 1.6561 |
| 1.9059 | 1.37 | 182000 | 1.6673 |
| 1.9094 | 1.38 | 184000 | 1.6717 |
| 1.9006 | 1.4 | 186000 | 1.6593 |
| 1.8956 | 1.41 | 188000 | 1.6483 |
| 1.8972 | 1.43 | 190000 | 1.6635 |
| 1.8907 | 1.44 | 192000 | 1.6604 |
| 1.8885 | 1.46 | 194000 | 1.6465 |
| 1.8844 | 1.47 | 196000 | 1.6444 |
| 1.8799 | 1.49 | 198000 | 1.6307 |
| 1.8813 | 1.5 | 200000 | 1.6240 |
| 1.8693 | 1.52 | 202000 | 1.6102 |
| 1.8768 | 1.53 | 204000 | 1.6197 |
| 1.8678 | 1.55 | 206000 | 1.6275 |
| 1.8588 | 1.56 | 208000 | 1.6183 |
| 1.8585 | 1.58 | 210000 | 1.6197 |
| 1.8564 | 1.59 | 212000 | 1.6004 |
| 1.8493 | 1.61 | 214000 | 1.6078 |
| 1.85 | 1.62 | 216000 | 1.6001 |
| 1.8428 | 1.64 | 218000 | 1.6106 |
| 1.8428 | 1.65 | 220000 | 1.5866 |
| 1.8423 | 1.67 | 222000 | 1.5993 |
| 1.8352 | 1.68 | 224000 | 1.6052 |
| 1.8385 | 1.7 | 226000 | 1.5959 |
| 1.8307 | 1.71 | 228000 | 1.6024 |
| 1.8248 | 1.73 | 230000 | 1.5969 |
| 1.82 | 1.74 | 232000 | 1.5878 |
| 1.8254 | 1.76 | 234000 | 1.5934 |
| 1.8188 | 1.77 | 236000 | 1.5827 |
| 1.813 | 1.79 | 238000 | 1.5797 |
| 1.8128 | 1.8 | 240000 | 1.5758 |
| 1.8044 | 1.82 | 242000 | 1.5752 |
| 1.808 | 1.83 | 244000 | 1.5818 |
| 1.8025 | 1.85 | 246000 | 1.5772 |
| 1.7992 | 1.86 | 248000 | 1.5738 |
| 1.8021 | 1.88 | 250000 | 1.5752 |
| 1.7988 | 1.89 | 252000 | 1.5717 |
| 1.7967 | 1.91 | 254000 | 1.5690 |
| 1.7909 | 1.92 | 256000 | 1.5607 |
| 1.7942 | 1.94 | 258000 | 1.5618 |
| 1.7897 | 1.95 | 260000 | 1.5585 |
| 1.7871 | 1.97 | 262000 | 1.5576 |
| 1.7843 | 1.98 | 264000 | 1.5577 |
| 1.7888 | 2.0 | 266000 | 1.5583 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
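## How to use
A minimal translation sketch; the checkpoint path is a placeholder, since the card does not state a repository id:
```python
from transformers import pipeline

MODEL_PATH = "path-or-hub-id-of-this-checkpoint"  # placeholder

# Japanese-to-English translation with the fine-tuned Marian model.
translator = pipeline("translation", model=MODEL_PATH)
print(translator("国境の長いトンネルを抜けると雪国であった。", max_length=128)[0]["translation_text"])
```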
|
BlightZz/DialoGPT-medium-Kurisu | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19 | null | ---
tags:
- conversational
---
# Harry Potter DialoGPT Model |
Botslity/Bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-2nd-finetune-epru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-2nd-finetune-epru
This model is a fine-tuned version of [mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented](https://huggingface.co/mmillet/distilrubert-tiny-cased-conversational-v1_single_finetuned_on_cedr_augmented) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3546
- Accuracy: 0.9325
- F1: 0.9328
- Precision: 0.9359
- Recall: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0686 | 1.0 | 12 | 0.2931 | 0.9141 | 0.9142 | 0.9163 | 0.9141 |
| 0.0269 | 2.0 | 24 | 0.2690 | 0.9448 | 0.9444 | 0.9449 | 0.9448 |
| 0.0282 | 3.0 | 36 | 0.3140 | 0.9141 | 0.9140 | 0.9168 | 0.9141 |
| 0.0185 | 4.0 | 48 | 0.2977 | 0.9571 | 0.9570 | 0.9576 | 0.9571 |
| 0.0103 | 5.0 | 60 | 0.3368 | 0.9264 | 0.9265 | 0.9296 | 0.9264 |
| 0.0088 | 6.0 | 72 | 0.3067 | 0.9387 | 0.9385 | 0.9389 | 0.9387 |
| 0.0152 | 7.0 | 84 | 0.3660 | 0.9264 | 0.9263 | 0.9282 | 0.9264 |
| 0.0315 | 8.0 | 96 | 0.3793 | 0.9325 | 0.9328 | 0.9359 | 0.9325 |
| 0.0258 | 9.0 | 108 | 0.3546 | 0.9325 | 0.9328 | 0.9359 | 0.9325 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Branex/gpt-neo-2.7B | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: es
tags:
- sagemaker
- vit
- ImageClassification
- generated_from_trainer
license: apache-2.0
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: vit_base-224-in21k-ft-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: "Cifar10"
type: cifar10
metrics:
- name: Accuracy
type: accuracy
value: 0.97
---
# Model vit_base-224-in21k-ft-cifar10
## **A finetuned model for Image classification in Spanish**
This model was trained using Amazon SageMaker and the Hugging Face Deep Learning container.
The base model is **Vision Transformer (base-sized model)**, which is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. [Link to base model](https://huggingface.co/google/vit-base-patch16-224-in21k)
## Base model citation
### BibTeX entry and citation info
```bibtex
@misc{wu2020visual,
title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision},
author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda},
year={2020},
eprint={2006.03677},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Dataset
[Link to dataset description](http://www.cs.toronto.edu/~kriz/cifar.html)
The CIFAR-10 and CIFAR-100 are labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
Sizes of datasets:
- Train dataset: 50,000
- Test dataset: 10,000
## Intended uses & limitations
This model is intended for Image Classification.
## Hyperparameters
{
"epochs": "5",
"train_batch_size": "32",
"eval_batch_size": "8",
"fp16": "true",
"learning_rate": "1e-05",
}
## Test results
- Accuracy = 0.97
## Model in action
### Usage for Image Classification
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
# Load the fine-tuned classification head (ViTModel alone would discard the classifier)
model = ViTForImageClassification.from_pretrained('edumunozsala/vit_base-224-in21k-ft-cifar10')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# Pick the class with the highest logit
predicted_class = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```
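The snippet above returns hidden states from the ViT encoder rather than class predictions. A minimal classification sketch follows; it assumes the fine-tuned repository stores a `ViTForImageClassification` head together with the CIFAR-10 label mapping, which this card does not show explicitly.
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Assumption: the checkpoint includes the classification head and id2label mapping
feature_extractor = ViTFeatureExtractor.from_pretrained('edumunozsala/vit_base-224-in21k-ft-cifar10')
model = ViTForImageClassification.from_pretrained('edumunozsala/vit_base-224-in21k-ft-cifar10')

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```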
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala) |
|
CALM/backup | [
"lean_albert",
"transformers"
] | null | {
"architectures": [
"LeanAlbertForPretraining",
"LeanAlbertForTokenClassification",
"LeanAlbertForSequenceClassification"
],
"model_type": "lean_albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | A model obtained by incremental pretraining on a news corpus on top of bert-base-chinese; the tokenizer is the one from bert-base-chinese.
Model
When the model is exported, it produces the config.json and pytorch_model.bin parameter files.
Tokenizer
This is the process of converting plain text into encodings. Note that the tokenizer does not turn tokens into word vectors; it only segments the plain text, adds the [MASK], [SEP], and [CLS] markers, and converts the tokens into vocabulary indices. When the Tokenizer class is exported, it is split into three files:
vocab.txt: the vocabulary file, with one word or word piece per line
special_tokens_map.json: the definition of the special tokens
tokenizer_config.json: the configuration file, which mainly stores special settings
All of the model tokenizers are implemented in PreTrainedTokenizer, and the tokenization result mainly contains the following fields:
"input_ids": as the name suggests, the index of each token in the vocabulary
"token_type_ids": the encoding that distinguishes the two sentences in a pair
"attention_mask": specifies which tokens take part in the self-attention operation
"overflowing_tokens": the tokens that overflow when a maximum length is specified
"num_truncated_tokens": the number of overflowing tokens
"special_tokens_mask" (returned when return_special_tokens_mask=True): a list of 0s and 1s in which 1 marks the added special tokens and 0 marks the ordinary sequence tokens
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 240.31 +/- 12.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
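As a starting point, a minimal loading-and-evaluation sketch is shown below; the repo id and filename are placeholders (the card does not state them), but the `load_from_hub`/`PPO.load` pattern is the usual huggingface_sb3 workflow.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and filename -- replace with this model's actual values.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```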
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 73 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5324115893962171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7035
- Matthews Correlation: 0.5324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.785228097724678e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5005 | 0.4121 |
| 0.318 | 2.0 | 1070 | 0.5265 | 0.4977 |
| 0.1887 | 3.0 | 1605 | 0.7035 | 0.5324 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 574 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-ksponspeech
results: []
---
# wav2vec2-ksponspeech
This model is a fine-tuned version of [Wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- **WER (Word Error Rate)** for third-party test data: 0.373
**For improving WER:**
- Numeric / Character Unification
- Decoding words into the correct notation (from the pronunciation-based form)
- Uniform use of special characters (. / ?)
- Converting non-existent words to existing words
## Model description
A Korean wav2vec2 model trained on the Ksponspeech dataset.
This model was trained on two datasets (Train1 and Train2 below); a minimal inference sketch is shown after the dataset list:
- Train1 : https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-train (1 ~ 20000th data in Ksponspeech)
- Train2 : https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-train2 (20100 ~ 40100th data in Ksponspeech)
- Validation : https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-test (20000 ~ 20100th data in Ksponspeech)
- Third party test : https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-test (60000 ~ 20100th data in Ksponspeech)
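A minimal inference sketch for this checkpoint (it assumes the model is published as `Taeham/wav2vec2-ksponspeech`, matching the dataset namespace above, and that a processor/vocabulary is bundled with it; the audio file name is just a placeholder):
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Taeham/wav2vec2-ksponspeech"  # assumed repository id, see note above
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = torchaudio.load("sample.wav")  # expects 16 kHz mono audio
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```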
### Hardware Specification
- GPU : GEFORCE RTX 3080ti 12GB
- CPU : Intel i9-12900k
- RAM : 32GB
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Canadiancaleb/DialoGPT-small-walter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 0.78 +/- 0.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be the helper functions defined in the
# accompanying training notebook (they are not part of a packaged library API).
model = load_from_hub(repo_id="tjscollins/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Canadiancaleb/jessebot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained on the Hurricane Dorian 2019 event (the training, development, and test splits are all used for training) from the [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-based LMR mode, using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
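A minimal inference sketch (assuming this card corresponds to the `rsuwaileh/IDRISI-LMR-HD-TB` repository, i.e. the one variant not listed below, and that the checkpoint ships its own tokenizer):
```python
from transformers import pipeline

# Assumed repository id for this card; swap in the variant you actually want.
ner = pipeline(
    "token-classification",
    model="rsuwaileh/IDRISI-LMR-HD-TB",
    aggregation_strategy="simple",  # merge B-/I- word pieces into full location spans
)
print(ner("Hurricane Dorian is approaching Freeport, Grand Bahama."))
```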
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB-partition/)
- [rsuwaileh/IDRISI-LMR-HD-TL](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL)
- [rsuwaileh/IDRISI-LMR-HD-TL-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL-partition/)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
Canyonevo/DialoGPT-medium-KingHenry | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Capreolus/bert-base-msmarco | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"arxiv:2008.09093",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 238 | 2022-06-11T20:30:24Z | This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained on the Hurricane Dorian 2019 event (the training, development, and test splits are all used for training) from the [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-less LMR mode, using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB)
- [rsuwaileh/IDRISI-LMR-HD-TB-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB-partition/)
- [rsuwaileh/IDRISI-LMR-HD-TL-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL-partition)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
Capreolus/birch-bert-large-msmarco_mb | [
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"transformers"
] | null | {
"architectures": [
"BertForNextSentencePrediction"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
The model is trained on the Hurricane Dorian 2019 event (only the training split is used) from the [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the Type-based LMR mode, using the random version of the data.
You can download this data in BILOU format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/hurricane_dorian_2019).
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-HD-TB](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TB)
- [rsuwaileh/IDRISI-LMR-HD-TL](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL)
- [rsuwaileh/IDRISI-LMR-HD-TL-partition](https://huggingface.co/rsuwaileh/IDRISI-LMR-HD-TL-partition/)
* Larger models are available at [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/).
* Models trained on the entire IDRISI-R dataset:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
To cite this model:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
Capreolus/electra-base-msmarco | [
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"transformers"
] | text-classification | {
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 110 | null | ---
tags:
- FrozenLake-v1-4x4-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-slippery
results:
- metrics:
- type: mean_reward
value: 0.75 +/- 0.43
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-4x4
type: FrozenLake-v1-4x4-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are assumed to be the helper functions defined in the
# accompanying training notebook (they are not part of a packaged library API).
model = load_from_hub(repo_id="tjscollins/q-FrozenLake-v1-4x4-slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
CarlosPR/mt5-spanish-memmories-analysis | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: music-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# music-generation
This model is a version of [distilgpt2](https://huggingface.co/distilgpt2) trained from scratch on a dataset where the text represents musical notes. The [dataset](https://www.kaggle.com/datasets/soumikrakshit/classical-music-midi) consists of one stream of notes per MIDI file (the stream with the most notes), where all of the melodies were transposed to either C major or A minor. The BPM of each song is ignored; the duration of each note is based on its quarter length.
Each element in the melody is represented by a series of letters and numbers with the following structure (a small encoding sketch follows the symbol list below).
* For a note: ns[pitch of the note as a string]s[duration]
  * Examples: nsC4s0p25, nsF7s1p0
* For a rest: rs[duration]
* Examples: rs0p5, rs1q6
* For a chord: cs[number of notes in chord]s[pitches of chords separated by "s"]s[duration]
* Examples: cs2sE7sF7s1q3, cs2sG3sGw3s0p25
In the strings, the following special symbols are replaced as follows:
* . = p
* / = q
* # =
* - = t
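The following is a hypothetical encoder for the token scheme described above, written only to illustrate the format; it is not the original preprocessing script, and it only handles the `.` and `/` substitutions from the table (sharps and flats would follow the same idea).
```python
# Hypothetical helpers illustrating the token format above (not the original preprocessing code).
def encode_duration(quarter_length):
    # 0.25 -> "0p25", "1/3" -> "1q3"  (. -> p, / -> q, as in the substitution table)
    return str(quarter_length).replace(".", "p").replace("/", "q")

def encode_note(pitch, quarter_length):
    return f"ns{pitch}s{encode_duration(quarter_length)}"

def encode_rest(quarter_length):
    return f"rs{encode_duration(quarter_length)}"

def encode_chord(pitches, quarter_length):
    return f"cs{len(pitches)}s" + "s".join(pitches) + f"s{encode_duration(quarter_length)}"

print(encode_note("C4", 0.25))            # nsC4s0p25
print(encode_rest(0.5))                   # rs0p5
print(encode_chord(["E7", "F7"], "1/3"))  # cs2sE7sF7s1q3
```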
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Cdial/hausa-asr | [
"wav2vec2",
"automatic-speech-recognition",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-06-11T21:33:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 26.8232
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1717
- Bleu: 26.8232
- Meteor: 0.172
- Gen Len: 12.1288
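No inference snippet is included in this card; a minimal sketch using the base checkpoint named above is shown below (the fine-tuned repository id is not stated here, so substitute it for the base model id if it is published).
```python
from transformers import pipeline

# The base checkpoint is used as a stand-in; the fine-tuned repo id is not given in this card.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ar")
print(translator("The meeting will take place tomorrow morning.", max_length=64))
```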
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.7364 | 0.25 | 100 | 0.1731 | 27.2753 | 0.1729 | 12.0887 |
| 0.2175 | 0.5 | 200 | 0.1731 | 27.2055 | 0.1722 | 11.5675 |
| 0.2193 | 0.75 | 300 | 0.1722 | 27.3277 | 0.1798 | 12.1325 |
| 0.2321 | 1.0 | 400 | 0.1750 | 27.5152 | 0.1762 | 11.925 |
| 0.1915 | 1.25 | 500 | 0.1690 | 27.5043 | 0.1751 | 11.9038 |
| 0.1794 | 1.5 | 600 | 0.1719 | 26.8607 | 0.1713 | 11.8138 |
| 0.1741 | 1.75 | 700 | 0.1725 | 26.974 | 0.1724 | 11.8462 |
| 0.1732 | 2.0 | 800 | 0.1717 | 26.8232 | 0.172 | 12.1288 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dccuchile/albert-base-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 34 | null | ---
library_name: stable-baselines3
tags:
- Sokoban-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -19.90 +/- 0.30
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Sokoban-v0
type: Sokoban-v0
---
# **PPO** Agent playing **Sokoban-v0**
This is a trained model of a **PPO** agent playing **Sokoban-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dccuchile/albert-base-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- un_multi
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-4000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: un_multi
type: un_multi
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 51.7715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-4000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the un_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1850
- Bleu: 51.7715
- Meteor: 0.5164
- Gen Len: 25.5612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.6999 | 0.25 | 100 | 0.1959 | 50.1492 | 0.508 | 25.2788 |
| 0.1994 | 0.5 | 200 | 0.1931 | 51.003 | 0.513 | 25.4038 |
| 0.1863 | 0.75 | 300 | 0.1864 | 51.3268 | 0.5145 | 25.1675 |
| 0.1826 | 1.0 | 400 | 0.1841 | 51.2507 | 0.513 | 25.2388 |
| 0.1494 | 1.25 | 500 | 0.1840 | 51.4291 | 0.5159 | 25.4225 |
| 0.1483 | 1.5 | 600 | 0.1839 | 51.2645 | 0.5126 | 25.395 |
| 0.1547 | 1.75 | 700 | 0.1837 | 51.7589 | 0.5157 | 25.48 |
| 0.1487 | 2.0 | 800 | 0.1845 | 51.896 | 0.5177 | 25.3988 |
| 0.1235 | 2.25 | 900 | 0.1852 | 52.0583 | 0.5177 | 25.5212 |
| 0.1164 | 2.5 | 1000 | 0.1850 | 51.7715 | 0.5164 | 25.5612 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dccuchile/albert-large-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-06-11T22:22:14Z | ---
tags:
- conversational
---
# A Peter DialoGPT Model |
dccuchile/albert-tiny-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
license: mit
datasets:
- MRBrainS18
language:
- en
metrics:
-
tags:
- MedicalNet
- medical images
- medical
- 3D
- Med3D
thumbnail: "https://github.com/Tencent/MedicalNet/blob/master/images/logo.png?raw=true"
---
# MedicalNet
This repository contains a Pytorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625).
Many studies have shown that the performance of deep learning is significantly affected by the volume of training data. The MedicalNet project aggregated datasets with diverse modalities, target organs, and pathologies to build relatively large datasets. Based on these datasets, a series of 3D-ResNet pre-trained models and corresponding transfer-learning training code are provided.
### License
MedicalNet is released under the MIT License (refer to the LICENSE file for details).
### Citing MedicalNet
If you use this code or pre-trained models, please cite the following:
```
@article{chen2019med3d,
title={Med3D: Transfer Learning for 3D Medical Image Analysis},
author={Chen, Sihong and Ma, Kai and Zheng, Yefeng},
journal={arXiv preprint arXiv:1904.00625},
year={2019}
}
```
### Update(2019/07/30)
We uploaded 4 pre-trained models based on more datasets (23 datasets).
```
Model name : parameters settings
resnet_10_23dataset.pth: --model resnet --model_depth 10 --resnet_shortcut B
resnet_18_23dataset.pth: --model resnet --model_depth 18 --resnet_shortcut A
resnet_34_23dataset.pth: --model resnet --model_depth 34 --resnet_shortcut A
resnet_50_23dataset.pth: --model resnet --model_depth 50 --resnet_shortcut B
```
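A minimal sketch for inspecting one of these checkpoints is shown below; it assumes the `.pth` file has been downloaded locally and that the weights are wrapped in a `state_dict` entry, as in the upstream repository (the network definition itself comes from the MedicalNet GitHub code, matching the `--model resnet --model_depth ...` settings above).
```python
import torch

# Assumes resnet_10_23dataset.pth has been downloaded next to this script.
checkpoint = torch.load("resnet_10_23dataset.pth", map_location="cpu")

# Upstream checkpoints usually wrap the tensors in a "state_dict" entry (assumption);
# these weights are then loaded into the ResNet generator from the MedicalNet repo.
state_dict = checkpoint.get("state_dict", checkpoint)
print(len(state_dict), "tensors; first keys:", list(state_dict.keys())[:3])
```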
Hugging Face repository contribution by:
[Rafael Zimmer](https://www.github.com/rzimmerdev) |
dccuchile/albert-tiny-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | Access to model Abhijnan/AxomiyaBERTa is restricted and you are not in the authorized list. Visit https://huggingface.co/Abhijnan/AxomiyaBERTa to ask for access. |
dccuchile/albert-tiny-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -140.18 +/- 41.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dccuchile/albert-xlarge-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/laserboat999/1654991516445/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1500274766195793921/bA4siut7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">donald boat</div>
<div style="text-align: center; font-size: 14px;">@laserboat999</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from donald boat.
| Data | donald boat |
| --- | --- |
| Tweets downloaded | 3233 |
| Retweets | 75 |
| Short tweets | 516 |
| Tweets kept | 2642 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38v40fpf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @laserboat999's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pk1xum9h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pk1xum9h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/laserboat999')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dccuchile/albert-xlarge-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/cancer_blood69/1654992058711/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1273429972229804032/_kkJmwqw_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">cancer_blood69 (reanimated decaying corpse)</div>
<div style="text-align: center; font-size: 14px;">@cancer_blood69</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from cancer_blood69 (reanimated decaying corpse).
| Data | cancer_blood69 (reanimated decaying corpse) |
| --- | --- |
| Tweets downloaded | 3237 |
| Retweets | 215 |
| Short tweets | 381 |
| Tweets kept | 2641 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3cav70ew/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cancer_blood69's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/sp5449e2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/sp5449e2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cancer_blood69')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: mit
datasets:
- MRBrainS18
language:
- en
metrics:
-
tags:
- MedicalNet
- medical images
- medical
- 3D
- Med3D
thumbnail: "https://github.com/Tencent/MedicalNet/blob/master/images/logo.png?raw=true"
---
# MedicalNet
This repository contains a Pytorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625).
Many studies have shown that the performance of deep learning is significantly affected by the volume of training data. The MedicalNet project aggregated datasets with diverse modalities, target organs, and pathologies to build relatively large datasets. Based on these datasets, a series of 3D-ResNet pre-trained models and corresponding transfer-learning training code are provided.
### License
MedicalNet is released under the MIT License (refer to the LICENSE file for details).
### Citing MedicalNet
If you use this code or pre-trained models, please cite the following:
```
@article{chen2019med3d,
title={Med3D: Transfer Learning for 3D Medical Image Analysis},
author={Chen, Sihong and Ma, Kai and Zheng, Yefeng},
journal={arXiv preprint arXiv:1904.00625},
year={2019}
}
```
### Update(2019/07/30)
We uploaded 4 pre-trained models based on more datasets (23 datasets).
```
Model name : parameters settings
resnet_10_23dataset.pth: --model resnet --model_depth 10 --resnet_shortcut B
resnet_18_23dataset.pth: --model resnet --model_depth 18 --resnet_shortcut A
resnet_34_23dataset.pth: --model resnet --model_depth 34 --resnet_shortcut A
resnet_50_23dataset.pth: --model resnet --model_depth 50 --resnet_shortcut B
```
Hugging Face repository contribution by:
[Rafael Zimmer](https://www.github.com/rzimmerdev) |
dccuchile/albert-xxlarge-spanish-finetuned-mldoc | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | 2022-06-12T00:34:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- un_multi
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-2000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: un_multi
type: un_multi
args: ar-en
metrics:
- name: Bleu
type: bleu
value: 53.0137
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-evaluated-en-to-ar-2000instances-un_multi-leaningRate2e-05-batchSize8-11-action-1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the un_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1873
- Bleu: 53.0137
- Meteor: 0.5005
- Gen Len: 25.845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.6585 | 0.5 | 100 | 0.2085 | 52.5874 | 0.4969 | 25.485 |
| 0.1802 | 1.0 | 200 | 0.1788 | 52.9434 | 0.4982 | 25.1725 |
| 0.1501 | 1.5 | 300 | 0.1683 | 53.6994 | 0.5033 | 25.625 |
| 0.1454 | 2.0 | 400 | 0.1706 | 53.3946 | 0.5005 | 25.6675 |
| 0.1193 | 2.5 | 500 | 0.1774 | 53.2011 | 0.4982 | 25.58 |
| 0.1194 | 3.0 | 600 | 0.1741 | 53.8651 | 0.5026 | 25.5775 |
| 0.1002 | 3.5 | 700 | 0.1878 | 53.1332 | 0.5005 | 25.8975 |
| 0.0979 | 4.0 | 800 | 0.1881 | 52.5989 | 0.4974 | 25.485 |
| 0.0807 | 4.5 | 900 | 0.1873 | 53.0137 | 0.5005 | 25.845 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dccuchile/albert-xxlarge-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | 2022-06-12T00:34:54Z | ---
license: mit
datasets:
- MRBrainS18
language:
- en
metrics:
-
tags:
- MedicalNet
- medical images
- medical
- 3D
- Med3D
thumbnail: "https://github.com/Tencent/MedicalNet/blob/master/images/logo.png?raw=true"
---
# MedicalNet
This repository contains a Pytorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625).
Many studies have shown that the performance of deep learning is significantly affected by the volume of training data. The MedicalNet project aggregated datasets with diverse modalities, target organs, and pathologies to build relatively large datasets. Based on these datasets, a series of 3D-ResNet pre-trained models and corresponding transfer-learning training code are provided.
### License
MedicalNet is released under the MIT License (refer to the LICENSE file for details).
### Citing MedicalNet
If you use this code or pre-trained models, please cite the following:
```
@article{chen2019med3d,
title={Med3D: Transfer Learning for 3D Medical Image Analysis},
author={Chen, Sihong and Ma, Kai and Zheng, Yefeng},
journal={arXiv preprint arXiv:1904.00625},
year={2019}
}
```
### Update(2019/07/30)
We uploaded 4 pre-trained models based on more datasets (23 datasets).
```
Model name : parameters settings
resnet_10_23dataset.pth: --model resnet --model_depth 10 --resnet_shortcut B
resnet_18_23dataset.pth: --model resnet --model_depth 18 --resnet_shortcut A
resnet_34_23dataset.pth: --model resnet --model_depth 34 --resnet_shortcut A
resnet_50_23dataset.pth: --model resnet --model_depth 50 --resnet_shortcut B
```
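As an illustration, the snippet below sketches how one of these checkpoints might be loaded into a 3D ResNet backbone in plain PyTorch. The `generate_model` call is a hypothetical stand-in for the ResNet builder shipped with the MedicalNet repository, and the assumption that weights sit under a `state_dict` key with `module.` prefixes (a common artifact of `DataParallel` training) should be checked against the actual file.

```python
import torch

def load_medicalnet_weights(model, checkpoint_path="resnet_50_23dataset.pth"):
    """Load MedicalNet pre-trained weights into a 3D ResNet backbone.

    Assumes the checkpoint is a dict with a 'state_dict' entry whose keys may
    carry a 'module.' prefix left over from torch.nn.DataParallel training.
    """
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    state_dict = checkpoint.get("state_dict", checkpoint)
    # Strip a possible 'module.' prefix so keys match a non-DataParallel model.
    state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
    # strict=False leaves task-specific heads (e.g. a segmentation decoder) randomly initialized.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    return missing, unexpected

# model = generate_model(model_depth=50, resnet_shortcut="B")  # hypothetical builder matching the settings above
# load_medicalnet_weights(model)
```

Keeping `strict=False` mirrors the usual transfer-learning workflow: the pre-trained encoder weights are reused while any new task-specific layers are trained from scratch.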
Hugging Face repository contribution by:
[Rafael Zimmer](https://www.github.com/rzimmerdev) |
dccuchile/albert-xxlarge-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: mit
datasets:
- MRBrainS18
language:
- en
metrics:
-
tags:
- MedicalNet
- medical images
- medical
- 3D
- Med3D
thumbnail: "https://github.com/Tencent/MedicalNet/blob/master/images/logo.png?raw=true"
---
# MedicalNet
This repository contains a PyTorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625).
Many studies have shown that deep learning performance is significantly affected by the volume of training data. The MedicalNet project aggregated datasets with diverse modalities, target organs, and pathologies to build a relatively large dataset. Based on this dataset, a series of 3D-ResNet pre-trained models and the corresponding transfer-learning training code are provided.
### License
MedicalNet is released under the MIT License (refer to the LICENSE file for details).
### Citing MedicalNet
If you use this code or pre-trained models, please cite the following:
```
@article{chen2019med3d,
title={Med3D: Transfer Learning for 3D Medical Image Analysis},
author={Chen, Sihong and Ma, Kai and Zheng, Yefeng},
journal={arXiv preprint arXiv:1904.00625},
year={2019}
}
```
### Update (2019/07/30)
We uploaded 4 pre-trained models trained on a larger collection of 23 datasets.
```
Model name : parameters settings
resnet_10_23dataset.pth: --model resnet --model_depth 10 --resnet_shortcut B
resnet_18_23dataset.pth: --model resnet --model_depth 18 --resnet_shortcut A
resnet_34_23dataset.pth: --model resnet --model_depth 34 --resnet_shortcut A
resnet_50_23dataset.pth: --model resnet --model_depth 50 --resnet_shortcut B
```
Hugging Face repository contribution by:
[Rafael Zimmer](https://www.github.com/rzimmerdev) |
dccuchile/albert-xxlarge-spanish-finetuned-xnli | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68 | null | ---
license: mit
datasets:
- MRBrainS18
language:
- en
metrics:
-
tags:
- MedicalNet
- medical images
- medical
- 3D
- Med3D
thumbnail: "https://github.com/Tencent/MedicalNet/blob/master/images/logo.png?raw=true"
---
# MedicalNet
This repository contains a PyTorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625).
Many studies have shown that deep learning performance is significantly affected by the volume of training data. The MedicalNet project aggregated datasets with diverse modalities, target organs, and pathologies to build a relatively large dataset. Based on this dataset, a series of 3D-ResNet pre-trained models and the corresponding transfer-learning training code are provided.
### License
MedicalNet is released under the MIT License (refer to the LICENSE file for details).
### Citing MedicalNet
If you use this code or pre-trained models, please cite the following:
```
@article{chen2019med3d,
title={Med3D: Transfer Learning for 3D Medical Image Analysis},
author={Chen, Sihong and Ma, Kai and Zheng, Yefeng},
journal={arXiv preprint arXiv:1904.00625},
year={2019}
}
```
### Update (2019/07/30)
We uploaded 4 pre-trained models trained on a larger collection of 23 datasets.
```
Model name : parameters settings
resnet_10_23dataset.pth: --model resnet --model_depth 10 --resnet_shortcut B
resnet_18_23dataset.pth: --model resnet --model_depth 18 --resnet_shortcut A
resnet_34_23dataset.pth: --model resnet --model_depth 34 --resnet_shortcut A
resnet_50_23dataset.pth: --model resnet --model_depth 50 --resnet_shortcut B
```
Hugging Face repository contribution by:
[Rafael Zimmer](https://www.github.com/rzimmerdev) |
dccuchile/albert-base-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 586 | 2022-06-12T00:52:56Z | ---
license: mit
datasets:
- MRBrainS18
language:
- en
metrics:
-
tags:
- MedicalNet
- medical images
- medical
- 3D
- Med3D
thumbnail: "https://github.com/Tencent/MedicalNet/blob/master/images/logo.png?raw=true"
---
# MedicalNet
This repository contains a PyTorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625).
Many studies have shown that deep learning performance is significantly affected by the volume of training data. The MedicalNet project aggregated datasets with diverse modalities, target organs, and pathologies to build a relatively large dataset. Based on this dataset, a series of 3D-ResNet pre-trained models and the corresponding transfer-learning training code are provided.
### License
MedicalNet is released under the MIT License (refer to the LICENSE file for details).
### Citing MedicalNet
If you use this code or pre-trained models, please cite the following:
```
@article{chen2019med3d,
title={Med3D: Transfer Learning for 3D Medical Image Analysis},
author={Chen, Sihong and Ma, Kai and Zheng, Yefeng},
journal={arXiv preprint arXiv:1904.00625},
year={2019}
}
```
### Update (2019/07/30)
We uploaded 4 pre-trained models trained on a larger collection of 23 datasets.
```
Model name : parameters settings
resnet_10_23dataset.pth: --model resnet --model_depth 10 --resnet_shortcut B
resnet_18_23dataset.pth: --model resnet --model_depth 18 --resnet_shortcut A
resnet_34_23dataset.pth: --model resnet --model_depth 34 --resnet_shortcut A
resnet_50_23dataset.pth: --model resnet --model_depth 50 --resnet_shortcut B
```
Hugging Face repository contribution by:
[Rafael Zimmer](https://www.github.com/rzimmerdev) |
dccuchile/albert-tiny-spanish | [
"pytorch",
"tf",
"albert",
"pretraining",
"es",
"dataset:large_spanish_corpus",
"transformers",
"spanish",
"OpenCENIA"
] | null | {
"architectures": [
"AlbertForPreTraining"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 393 | 2022-06-13T23:23:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MIX2_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MIX2_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4929
- Otaku Benchmark VN BLEU: 20.21
- Otaku Benchmark LN BLEU: 13.29
- Otaku Benchmark MANGA BLEU: 19.07
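For context, a minimal inference sketch for this fine-tuned Marian checkpoint using the standard `transformers` seq2seq API is shown below; the repository id is a placeholder for wherever the weights are actually hosted.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repository id; substitute the location hosting this checkpoint.
model_id = "your-username/MIX2_ja-en_helsinki"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate a Japanese sentence to English.
inputs = tokenizer("猫が椅子の上で寝ている。", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```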
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.8467 | 0.01 | 2000 | 2.3237 |
| 2.6439 | 0.02 | 4000 | 2.2542 |
| 2.547 | 0.03 | 6000 | 2.1956 |
| 2.4852 | 0.04 | 8000 | 2.1088 |
| 2.4408 | 0.05 | 10000 | 2.0909 |
| 2.404 | 0.06 | 12000 | 2.1029 |
| 2.3634 | 0.07 | 14000 | 2.0636 |
| 2.3491 | 0.08 | 16000 | 2.0312 |
| 2.3203 | 0.09 | 18000 | 2.0187 |
| 2.3002 | 0.1 | 20000 | 1.9999 |
| 2.2791 | 0.11 | 22000 | 1.9823 |
| 2.2607 | 0.11 | 24000 | 1.9588 |
| 2.2475 | 0.12 | 26000 | 1.9728 |
| 2.2308 | 0.13 | 28000 | 1.9330 |
| 2.2237 | 0.14 | 30000 | 1.9657 |
| 2.208 | 0.15 | 32000 | 1.9560 |
| 2.2019 | 0.16 | 34000 | 1.9704 |
| 2.1864 | 0.17 | 36000 | 1.9513 |
| 2.1764 | 0.18 | 38000 | 1.9534 |
| 2.163 | 0.19 | 40000 | 1.9140 |
| 2.1534 | 0.2 | 42000 | 1.9241 |
| 2.146 | 0.21 | 44000 | 1.9162 |
| 2.1403 | 0.22 | 46000 | 1.9030 |
| 2.1309 | 0.23 | 48000 | 1.8741 |
| 2.1174 | 0.24 | 50000 | 1.8834 |
| 2.1157 | 0.25 | 52000 | 1.8666 |
| 2.1116 | 0.26 | 54000 | 1.8870 |
| 2.1062 | 0.27 | 56000 | 1.8837 |
| 2.0994 | 0.28 | 58000 | 1.8638 |
| 2.0924 | 0.29 | 60000 | 1.8766 |
| 2.0874 | 0.3 | 62000 | 1.8712 |
| 2.0805 | 0.31 | 64000 | 1.8792 |
| 2.0746 | 0.32 | 66000 | 1.8586 |
| 2.0684 | 0.32 | 68000 | 1.8819 |
| 2.0678 | 0.33 | 70000 | 1.8529 |
| 2.061 | 0.34 | 72000 | 1.8219 |
| 2.0532 | 0.35 | 74000 | 1.8383 |
| 2.0536 | 0.36 | 76000 | 1.8273 |
| 2.0432 | 0.37 | 78000 | 1.8304 |
| 2.0386 | 0.38 | 80000 | 1.8208 |
| 2.0361 | 0.39 | 82000 | 1.8103 |
| 2.0353 | 0.4 | 84000 | 1.8193 |
| 2.0266 | 0.41 | 86000 | 1.8369 |
| 2.0277 | 0.42 | 88000 | 1.8266 |
| 2.0221 | 0.43 | 90000 | 1.8372 |
| 2.0181 | 0.44 | 92000 | 1.8436 |
| 2.0182 | 0.45 | 94000 | 1.8505 |
| 2.0088 | 0.46 | 96000 | 1.8127 |
| 2.005 | 0.47 | 98000 | 1.8325 |
| 2.0003 | 0.48 | 100000 | 1.8407 |
| 2.0031 | 0.49 | 102000 | 1.8140 |
| 1.9954 | 0.5 | 104000 | 1.8177 |
| 1.9894 | 0.51 | 106000 | 1.8072 |
| 1.9901 | 0.52 | 108000 | 1.7971 |
| 1.9864 | 0.53 | 110000 | 1.8007 |
| 1.9848 | 0.53 | 112000 | 1.7961 |
| 1.9774 | 0.54 | 114000 | 1.7933 |
| 1.9802 | 0.55 | 116000 | 1.8031 |
| 1.9698 | 0.56 | 118000 | 1.8137 |
| 1.973 | 0.57 | 120000 | 1.7930 |
| 1.9696 | 0.58 | 122000 | 1.7838 |
| 1.9641 | 0.59 | 124000 | 1.7730 |
| 1.9609 | 0.6 | 126000 | 1.7800 |
| 1.9605 | 0.61 | 128000 | 1.7680 |
| 1.9516 | 0.62 | 130000 | 1.7895 |
| 1.9529 | 0.63 | 132000 | 1.7825 |
| 1.9503 | 0.64 | 134000 | 1.7792 |
| 1.9528 | 0.65 | 136000 | 1.8031 |
| 1.9439 | 0.66 | 138000 | 1.7652 |
| 1.9453 | 0.67 | 140000 | 1.7713 |
| 1.9404 | 0.68 | 142000 | 1.7585 |
| 1.9399 | 0.69 | 144000 | 1.7454 |
| 1.9325 | 0.7 | 146000 | 1.7605 |
| 1.9327 | 0.71 | 148000 | 1.7608 |
| 1.9301 | 0.72 | 150000 | 1.7743 |
| 1.928 | 0.73 | 152000 | 1.7532 |
| 1.9286 | 0.74 | 154000 | 1.7682 |
| 1.9194 | 0.74 | 156000 | 1.7582 |
| 1.9247 | 0.75 | 158000 | 1.7601 |
| 1.9183 | 0.76 | 160000 | 1.7600 |
| 1.9138 | 0.77 | 162000 | 1.7555 |
| 1.9148 | 0.78 | 164000 | 1.7447 |
| 1.913 | 0.79 | 166000 | 1.7512 |
| 1.9084 | 0.8 | 168000 | 1.7408 |
| 1.9109 | 0.81 | 170000 | 1.7463 |
| 1.905 | 0.82 | 172000 | 1.7543 |
| 1.9067 | 0.83 | 174000 | 1.7662 |
| 1.9005 | 0.84 | 176000 | 1.7428 |
| 1.8997 | 0.85 | 178000 | 1.7500 |
| 1.8963 | 0.86 | 180000 | 1.7297 |
| 1.8938 | 0.87 | 182000 | 1.7356 |
| 1.8923 | 0.88 | 184000 | 1.7602 |
| 1.8896 | 0.89 | 186000 | 1.7426 |
| 1.8866 | 0.9 | 188000 | 1.7323 |
| 1.887 | 0.91 | 190000 | 1.7587 |
| 1.8855 | 0.92 | 192000 | 1.7591 |
| 1.8842 | 0.93 | 194000 | 1.7570 |
| 1.8808 | 0.94 | 196000 | 1.7311 |
| 1.8836 | 0.95 | 198000 | 1.7449 |
| 1.8761 | 0.96 | 200000 | 1.7534 |
| 1.8721 | 0.96 | 202000 | 1.7623 |
| 1.8765 | 0.97 | 204000 | 1.7462 |
| 1.8747 | 0.98 | 206000 | 1.7452 |
| 1.8667 | 0.99 | 208000 | 1.7303 |
| 1.8618 | 1.0 | 210000 | 1.7468 |
| 1.8475 | 1.01 | 212000 | 1.7443 |
| 1.8435 | 1.02 | 214000 | 1.7622 |
| 1.8452 | 1.03 | 216000 | 1.7153 |
| 1.84 | 1.04 | 218000 | 1.6976 |
| 1.8432 | 1.05 | 220000 | 1.7013 |
| 1.842 | 1.06 | 222000 | 1.7073 |
| 1.8428 | 1.07 | 224000 | 1.6991 |
| 1.841 | 1.08 | 226000 | 1.7477 |
| 1.8321 | 1.09 | 228000 | 1.7438 |
| 1.838 | 1.1 | 230000 | 1.7352 |
| 1.8339 | 1.11 | 232000 | 1.7242 |
| 1.836 | 1.12 | 234000 | 1.7221 |
| 1.8329 | 1.13 | 236000 | 1.7402 |
| 1.8337 | 1.14 | 238000 | 1.7083 |
| 1.8267 | 1.15 | 240000 | 1.7200 |
| 1.8335 | 1.16 | 242000 | 1.7092 |
| 1.8306 | 1.17 | 244000 | 1.7340 |
| 1.8279 | 1.17 | 246000 | 1.6983 |
| 1.8261 | 1.18 | 248000 | 1.6928 |
| 1.8295 | 1.19 | 250000 | 1.7135 |
| 1.8227 | 1.2 | 252000 | 1.7156 |
| 1.822 | 1.21 | 254000 | 1.7018 |
| 1.8216 | 1.22 | 256000 | 1.7157 |
| 1.8205 | 1.23 | 258000 | 1.7047 |
| 1.8163 | 1.24 | 260000 | 1.6988 |
| 1.8187 | 1.25 | 262000 | 1.7077 |
| 1.8188 | 1.26 | 264000 | 1.6859 |
| 1.8138 | 1.27 | 266000 | 1.6831 |
| 1.8173 | 1.28 | 268000 | 1.6887 |
| 1.813 | 1.29 | 270000 | 1.6967 |
| 1.8114 | 1.3 | 272000 | 1.7085 |
| 1.8057 | 1.31 | 274000 | 1.6885 |
| 1.8094 | 1.32 | 276000 | 1.7198 |
| 1.8079 | 1.33 | 278000 | 1.7036 |
| 1.8056 | 1.34 | 280000 | 1.7106 |
| 1.8044 | 1.35 | 282000 | 1.6704 |
| 1.8047 | 1.36 | 284000 | 1.6811 |
| 1.7978 | 1.37 | 286000 | 1.6848 |
| 1.7997 | 1.38 | 288000 | 1.6698 |
| 1.7997 | 1.38 | 290000 | 1.6820 |
| 1.7945 | 1.39 | 292000 | 1.6963 |
| 1.7958 | 1.4 | 294000 | 1.6922 |
| 1.7923 | 1.41 | 296000 | 1.6577 |
| 1.7975 | 1.42 | 298000 | 1.6621 |
| 1.7914 | 1.43 | 300000 | 1.6804 |
| 1.7944 | 1.44 | 302000 | 1.6953 |
| 1.7927 | 1.45 | 304000 | 1.6846 |
| 1.789 | 1.46 | 306000 | 1.6889 |
| 1.7851 | 1.47 | 308000 | 1.6652 |
| 1.7902 | 1.48 | 310000 | 1.6823 |
| 1.7873 | 1.49 | 312000 | 1.6603 |
| 1.7868 | 1.5 | 314000 | 1.6766 |
| 1.7856 | 1.51 | 316000 | 1.6717 |
| 1.7807 | 1.52 | 318000 | 1.6466 |
| 1.7767 | 1.53 | 320000 | 1.6639 |
| 1.7782 | 1.54 | 322000 | 1.6678 |
| 1.7762 | 1.55 | 324000 | 1.6853 |
| 1.7746 | 1.56 | 326000 | 1.6785 |
| 1.7746 | 1.57 | 328000 | 1.6777 |
| 1.7716 | 1.58 | 330000 | 1.6784 |
| 1.7699 | 1.59 | 332000 | 1.6648 |
| 1.7739 | 1.59 | 334000 | 1.6725 |
| 1.7703 | 1.6 | 336000 | 1.6915 |
| 1.7707 | 1.61 | 338000 | 1.6858 |
| 1.7619 | 1.62 | 340000 | 1.6624 |
| 1.7652 | 1.63 | 342000 | 1.6797 |
| 1.7626 | 1.64 | 344000 | 1.6728 |
| 1.7647 | 1.65 | 346000 | 1.6580 |
| 1.7616 | 1.66 | 348000 | 1.6679 |
| 1.7616 | 1.67 | 350000 | 1.6470 |
| 1.7611 | 1.68 | 352000 | 1.6489 |
| 1.759 | 1.69 | 354000 | 1.6603 |
| 1.7604 | 1.7 | 356000 | 1.6532 |
| 1.7599 | 1.71 | 358000 | 1.6477 |
| 1.7529 | 1.72 | 360000 | 1.6322 |
| 1.7596 | 1.73 | 362000 | 1.6447 |
| 1.7508 | 1.74 | 364000 | 1.6509 |
| 1.7533 | 1.75 | 366000 | 1.6465 |
| 1.755 | 1.76 | 368000 | 1.6485 |
| 1.7473 | 1.77 | 370000 | 1.6493 |
| 1.7435 | 1.78 | 372000 | 1.6542 |
| 1.7483 | 1.79 | 374000 | 1.6573 |
| 1.7475 | 1.8 | 376000 | 1.6626 |
| 1.7439 | 1.8 | 378000 | 1.6366 |
| 1.7417 | 1.81 | 380000 | 1.6312 |
| 1.7387 | 1.82 | 382000 | 1.6424 |
| 1.7415 | 1.83 | 384000 | 1.6468 |
| 1.7409 | 1.84 | 386000 | 1.6528 |
| 1.7362 | 1.85 | 388000 | 1.6394 |
| 1.7372 | 1.86 | 390000 | 1.6581 |
| 1.7347 | 1.87 | 392000 | 1.6546 |
| 1.7368 | 1.88 | 394000 | 1.6468 |
| 1.7302 | 1.89 | 396000 | 1.6450 |
| 1.7317 | 1.9 | 398000 | 1.6368 |
| 1.7306 | 1.91 | 400000 | 1.6399 |
| 1.7304 | 1.92 | 402000 | 1.6180 |
| 1.726 | 1.93 | 404000 | 1.6212 |
| 1.7271 | 1.94 | 406000 | 1.6302 |
| 1.7312 | 1.95 | 408000 | 1.6264 |
| 1.7249 | 1.96 | 410000 | 1.6584 |
| 1.7226 | 1.97 | 412000 | 1.6514 |
| 1.7214 | 1.98 | 414000 | 1.6516 |
| 1.7228 | 1.99 | 416000 | 1.6346 |
| 1.7205 | 2.0 | 418000 | 1.6370 |
| 1.7041 | 2.01 | 420000 | 1.6021 |
| 1.691 | 2.02 | 422000 | 1.6385 |
| 1.6896 | 2.02 | 424000 | 1.6280 |
| 1.6882 | 2.03 | 426000 | 1.6295 |
| 1.6889 | 2.04 | 428000 | 1.6445 |
| 1.6904 | 2.05 | 430000 | 1.6558 |
| 1.6933 | 2.06 | 432000 | 1.6164 |
| 1.6916 | 2.07 | 434000 | 1.6011 |
| 1.6873 | 2.08 | 436000 | 1.6199 |
| 1.6903 | 2.09 | 438000 | 1.6300 |
| 1.6859 | 2.1 | 440000 | 1.6104 |
| 1.6901 | 2.11 | 442000 | 1.6248 |
| 1.6884 | 2.12 | 444000 | 1.6251 |
| 1.6859 | 2.13 | 446000 | 1.6145 |
| 1.6906 | 2.14 | 448000 | 1.6181 |
| 1.6859 | 2.15 | 450000 | 1.6264 |
| 1.6814 | 2.16 | 452000 | 1.6069 |
| 1.6853 | 2.17 | 454000 | 1.6089 |
| 1.6881 | 2.18 | 456000 | 1.6102 |
| 1.6869 | 2.19 | 458000 | 1.6327 |
| 1.6827 | 2.2 | 460000 | 1.6069 |
| 1.6813 | 2.21 | 462000 | 1.6278 |
| 1.6806 | 2.22 | 464000 | 1.6176 |
| 1.6763 | 2.23 | 466000 | 1.6180 |
| 1.68 | 2.23 | 468000 | 1.6226 |
| 1.6816 | 2.24 | 470000 | 1.6071 |
| 1.6845 | 2.25 | 472000 | 1.6178 |
| 1.6764 | 2.26 | 474000 | 1.6073 |
| 1.682 | 2.27 | 476000 | 1.5966 |
| 1.6727 | 2.28 | 478000 | 1.5979 |
| 1.6718 | 2.29 | 480000 | 1.6109 |
| 1.6764 | 2.3 | 482000 | 1.6034 |
| 1.671 | 2.31 | 484000 | 1.6001 |
| 1.6691 | 2.32 | 486000 | 1.6148 |
| 1.6706 | 2.33 | 488000 | 1.6003 |
| 1.6705 | 2.34 | 490000 | 1.6021 |
| 1.6699 | 2.35 | 492000 | 1.5940 |
| 1.6708 | 2.36 | 494000 | 1.6077 |
| 1.6715 | 2.37 | 496000 | 1.6188 |
| 1.6672 | 2.38 | 498000 | 1.5903 |
| 1.6638 | 2.39 | 500000 | 1.6042 |
| 1.6634 | 2.4 | 502000 | 1.5967 |
| 1.6669 | 2.41 | 504000 | 1.5904 |
| 1.6643 | 2.42 | 506000 | 1.6071 |
| 1.6606 | 2.43 | 508000 | 1.6065 |
| 1.6573 | 2.44 | 510000 | 1.6010 |
| 1.6603 | 2.44 | 512000 | 1.5801 |
| 1.6568 | 2.45 | 514000 | 1.5961 |
| 1.6564 | 2.46 | 516000 | 1.6020 |
| 1.6596 | 2.47 | 518000 | 1.5952 |
| 1.6567 | 2.48 | 520000 | 1.5760 |
| 1.6536 | 2.49 | 522000 | 1.5697 |
| 1.6564 | 2.5 | 524000 | 1.5664 |
| 1.652 | 2.51 | 526000 | 1.5616 |
| 1.653 | 2.52 | 528000 | 1.5738 |
| 1.6525 | 2.53 | 530000 | 1.5754 |
| 1.65 | 2.54 | 532000 | 1.5749 |
| 1.6519 | 2.55 | 534000 | 1.5788 |
| 1.6515 | 2.56 | 536000 | 1.5953 |
| 1.6492 | 2.57 | 538000 | 1.5836 |
| 1.6473 | 2.58 | 540000 | 1.5896 |
| 1.6452 | 2.59 | 542000 | 1.5858 |
| 1.6464 | 2.6 | 544000 | 1.5760 |
| 1.6445 | 2.61 | 546000 | 1.5683 |
| 1.6457 | 2.62 | 548000 | 1.5823 |
| 1.6417 | 2.63 | 550000 | 1.5780 |
| 1.6407 | 2.64 | 552000 | 1.5715 |
| 1.6368 | 2.65 | 554000 | 1.5618 |
| 1.6357 | 2.65 | 556000 | 1.5725 |
| 1.6446 | 2.66 | 558000 | 1.5744 |
| 1.634 | 2.67 | 560000 | 1.5360 |
| 1.6351 | 2.68 | 562000 | 1.5599 |
| 1.6362 | 2.69 | 564000 | 1.5607 |
| 1.637 | 2.7 | 566000 | 1.5561 |
| 1.6324 | 2.71 | 568000 | 1.5591 |
| 1.6325 | 2.72 | 570000 | 1.5527 |
| 1.6323 | 2.73 | 572000 | 1.5537 |
| 1.629 | 2.74 | 574000 | 1.5673 |
| 1.627 | 2.75 | 576000 | 1.5509 |
| 1.6279 | 2.76 | 578000 | 1.5507 |
| 1.6291 | 2.77 | 580000 | 1.5304 |
| 1.625 | 2.78 | 582000 | 1.5540 |
| 1.6246 | 2.79 | 584000 | 1.5530 |
| 1.6228 | 2.8 | 586000 | 1.5570 |
| 1.6241 | 2.81 | 588000 | 1.5586 |
| 1.6224 | 2.82 | 590000 | 1.5480 |
| 1.6264 | 2.83 | 592000 | 1.5624 |
| 1.6214 | 2.84 | 594000 | 1.5565 |
| 1.6187 | 2.85 | 596000 | 1.5397 |
| 1.6191 | 2.86 | 598000 | 1.5520 |
| 1.6192 | 2.87 | 600000 | 1.5494 |
| 1.6182 | 2.87 | 602000 | 1.5608 |
| 1.6164 | 2.88 | 604000 | 1.5428 |
| 1.6107 | 2.89 | 606000 | 1.5525 |
| 1.614 | 2.9 | 608000 | 1.5277 |
| 1.6158 | 2.91 | 610000 | 1.5502 |
| 1.6082 | 2.92 | 612000 | 1.5452 |
| 1.6089 | 2.93 | 614000 | 1.5400 |
| 1.6112 | 2.94 | 616000 | 1.5322 |
| 1.6069 | 2.95 | 618000 | 1.5394 |
| 1.6111 | 2.96 | 620000 | 1.5537 |
| 1.6038 | 2.97 | 622000 | 1.5486 |
| 1.6073 | 2.98 | 624000 | 1.5551 |
| 1.6046 | 2.99 | 626000 | 1.5386 |
| 1.6051 | 3.0 | 628000 | 1.5369 |
| 1.5672 | 3.01 | 630000 | 1.5361 |
| 1.5694 | 3.02 | 632000 | 1.5390 |
| 1.5692 | 3.03 | 634000 | 1.5386 |
| 1.5651 | 3.04 | 636000 | 1.5456 |
| 1.5724 | 3.05 | 638000 | 1.5419 |
| 1.5708 | 3.06 | 640000 | 1.5363 |
| 1.5665 | 3.07 | 642000 | 1.5446 |
| 1.5706 | 3.08 | 644000 | 1.5331 |
| 1.5679 | 3.08 | 646000 | 1.5449 |
| 1.5678 | 3.09 | 648000 | 1.5436 |
| 1.5676 | 3.1 | 650000 | 1.5309 |
| 1.5657 | 3.11 | 652000 | 1.5334 |
| 1.5697 | 3.12 | 654000 | 1.5303 |
| 1.5617 | 3.13 | 656000 | 1.5380 |
| 1.5675 | 3.14 | 658000 | 1.5404 |
| 1.5612 | 3.15 | 660000 | 1.5258 |
| 1.5639 | 3.16 | 662000 | 1.5329 |
| 1.567 | 3.17 | 664000 | 1.5418 |
| 1.5619 | 3.18 | 666000 | 1.5314 |
| 1.5637 | 3.19 | 668000 | 1.5201 |
| 1.5608 | 3.2 | 670000 | 1.5181 |
| 1.5641 | 3.21 | 672000 | 1.5290 |
| 1.5626 | 3.22 | 674000 | 1.5180 |
| 1.5605 | 3.23 | 676000 | 1.5156 |
| 1.5566 | 3.24 | 678000 | 1.5266 |
| 1.5587 | 3.25 | 680000 | 1.5286 |
| 1.5602 | 3.26 | 682000 | 1.5265 |
| 1.5535 | 3.27 | 684000 | 1.5354 |
| 1.5589 | 3.28 | 686000 | 1.5265 |
| 1.5569 | 3.29 | 688000 | 1.5346 |
| 1.559 | 3.29 | 690000 | 1.5306 |
| 1.5507 | 3.3 | 692000 | 1.5359 |
| 1.5547 | 3.31 | 694000 | 1.5264 |
| 1.5498 | 3.32 | 696000 | 1.5264 |
| 1.5559 | 3.33 | 698000 | 1.5273 |
| 1.553 | 3.34 | 700000 | 1.5137 |
| 1.5503 | 3.35 | 702000 | 1.5143 |
| 1.5498 | 3.36 | 704000 | 1.5263 |
| 1.5516 | 3.37 | 706000 | 1.5096 |
| 1.5461 | 3.38 | 708000 | 1.5112 |
| 1.5489 | 3.39 | 710000 | 1.5094 |
| 1.5451 | 3.4 | 712000 | 1.5079 |
| 1.544 | 3.41 | 714000 | 1.5058 |
| 1.5446 | 3.42 | 716000 | 1.5005 |
| 1.5417 | 3.43 | 718000 | 1.4972 |
| 1.5469 | 3.44 | 720000 | 1.5043 |
| 1.5407 | 3.45 | 722000 | 1.5041 |
| 1.5484 | 3.46 | 724000 | 1.5104 |
| 1.5409 | 3.47 | 726000 | 1.5087 |
| 1.5431 | 3.48 | 728000 | 1.5114 |
| 1.5393 | 3.49 | 730000 | 1.5102 |
| 1.5364 | 3.5 | 732000 | 1.5143 |
| 1.5403 | 3.5 | 734000 | 1.5202 |
| 1.5386 | 3.51 | 736000 | 1.5143 |
| 1.5381 | 3.52 | 738000 | 1.5198 |
| 1.5341 | 3.53 | 740000 | 1.5136 |
| 1.5344 | 3.54 | 742000 | 1.5172 |
| 1.5347 | 3.55 | 744000 | 1.5149 |
| 1.5292 | 3.56 | 746000 | 1.5141 |
| 1.5344 | 3.57 | 748000 | 1.5066 |
| 1.5307 | 3.58 | 750000 | 1.5087 |
| 1.5324 | 3.59 | 752000 | 1.5113 |
| 1.5273 | 3.6 | 754000 | 1.5101 |
| 1.5273 | 3.61 | 756000 | 1.4975 |
| 1.5282 | 3.62 | 758000 | 1.5053 |
| 1.5252 | 3.63 | 760000 | 1.4998 |
| 1.525 | 3.64 | 762000 | 1.5020 |
| 1.5297 | 3.65 | 764000 | 1.5075 |
| 1.5215 | 3.66 | 766000 | 1.4980 |
| 1.5237 | 3.67 | 768000 | 1.5066 |
| 1.5248 | 3.68 | 770000 | 1.5093 |
| 1.5231 | 3.69 | 772000 | 1.5090 |
| 1.5224 | 3.7 | 774000 | 1.5093 |
| 1.526 | 3.71 | 776000 | 1.5015 |
| 1.5215 | 3.71 | 778000 | 1.5045 |
| 1.5231 | 3.72 | 780000 | 1.4971 |
| 1.5205 | 3.73 | 782000 | 1.4987 |
| 1.5171 | 3.74 | 784000 | 1.5001 |
| 1.5134 | 3.75 | 786000 | 1.4951 |
| 1.5155 | 3.76 | 788000 | 1.4975 |
| 1.5154 | 3.77 | 790000 | 1.4928 |
| 1.5167 | 3.78 | 792000 | 1.4983 |
| 1.5146 | 3.79 | 794000 | 1.4938 |
| 1.5138 | 3.8 | 796000 | 1.4985 |
| 1.5137 | 3.81 | 798000 | 1.5021 |
| 1.5111 | 3.82 | 800000 | 1.5020 |
| 1.5134 | 3.83 | 802000 | 1.4998 |
| 1.5086 | 3.84 | 804000 | 1.5001 |
| 1.5081 | 3.85 | 806000 | 1.5031 |
| 1.5097 | 3.86 | 808000 | 1.5008 |
| 1.5128 | 3.87 | 810000 | 1.4990 |
| 1.5093 | 3.88 | 812000 | 1.4994 |
| 1.5109 | 3.89 | 814000 | 1.5021 |
| 1.5049 | 3.9 | 816000 | 1.5012 |
| 1.5042 | 3.91 | 818000 | 1.5013 |
| 1.5053 | 3.92 | 820000 | 1.4946 |
| 1.5066 | 3.93 | 822000 | 1.4984 |
| 1.5074 | 3.93 | 824000 | 1.4963 |
| 1.5046 | 3.94 | 826000 | 1.4972 |
| 1.5043 | 3.95 | 828000 | 1.4970 |
| 1.5064 | 3.96 | 830000 | 1.4940 |
| 1.4999 | 3.97 | 832000 | 1.4940 |
| 1.5022 | 3.98 | 834000 | 1.4934 |
| 1.5054 | 3.99 | 836000 | 1.4929 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|