modelId (string, lengths 4–81) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, lengths 51–438k) |
---|---|---|---|---|---|---|
Batsy24/DialoGPT-small-Twilight_EdBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
x0:
- 5.8
- 6.0
- 5.5
x1:
- 2.8
- 2.2
- 4.2
x2:
- 5.1
- 4.0
- 1.4
x3:
- 2.4
- 1.0
- 0.2
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------|
| bootstrap | True |
| ccp_alpha | 0.0 |
| class_weight | |
| criterion | gini |
| max_depth | |
| max_features | sqrt |
| max_leaf_nodes | |
| max_samples | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| n_estimators | 100 |
| n_jobs | |
| oob_score | False |
| random_state | |
| verbose | 0 |
| warm_start | False |
</details>
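The empty cells in the table presumably correspond to scikit-learn's `None` defaults. As an illustrative sketch only (not the original training code), the table maps directly onto the `RandomForestClassifier` constructor shown in the model plot below, and the widget values from the card metadata (x0–x3) can serve as example inputs:
```python
# A minimal sketch, assuming the estimator and hyperparameters listed above;
# this is not the original training script.
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    bootstrap=True,
    ccp_alpha=0.0,
    class_weight=None,
    criterion="gini",
    max_depth=None,
    max_features="sqrt",
    max_leaf_nodes=None,
    max_samples=None,
    min_impurity_decrease=0.0,
    min_samples_leaf=1,
    min_samples_split=2,
    min_weight_fraction_leaf=0.0,
    n_estimators=100,
    n_jobs=None,
    oob_score=False,
    random_state=None,
    verbose=0,
    warm_start=False,
)

# Example rows built from the widget data in the card metadata (x0..x3).
X_widget = [
    [5.8, 2.8, 5.1, 2.4],
    [6.0, 2.2, 4.0, 1.0],
    [5.5, 4.2, 1.4, 0.2],
]
# clf.predict(X_widget) would work once the estimator has been fitted on
# labelled training data, which this card does not provide.
```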
### Model Plot
The model plot is below.
`RandomForestClassifier()`
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
[More Information Needed]
```
</details>
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
``` |
Baybars/wav2vec2-xls-r-1b-turkish | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | <div class="overflow-hidden">
<span class="absolute" style="top: -260px;left: 0;color: red;">boo</span>
</div>
# Test
<style>
img {
display: inline;
}
a {
color: red !important;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
| [](#deployment-with-nvidia-riva) |
|
BeIR/query-gen-msmarco-t5-large-v1 | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 1,225 | 2022-12-12T11:51:31Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cpu
- Datasets 2.7.1
- Tokenizers 0.12.1
|
Benicio/t5-small-finetuned-en-to-ro | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_custom_architecture_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_custom_architecture_2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 16.4316 | 0.19 | 500 | 9.0685 |
| 8.2958 | 0.39 | 1000 | 7.6483 |
| 7.4324 | 0.58 | 1500 | 7.1707 |
| 7.0054 | 0.77 | 2000 | 6.8592 |
| 6.8522 | 0.97 | 2500 | 6.7710 |
| 6.7538 | 1.16 | 3000 | 6.5845 |
| 6.634 | 1.36 | 3500 | 6.4525 |
| 6.5784 | 1.55 | 4000 | 6.3129 |
| 6.5135 | 1.74 | 4500 | 6.3312 |
| 6.4552 | 1.94 | 5000 | 6.2546 |
| 6.4685 | 2.13 | 5500 | 6.2857 |
| 6.4356 | 2.32 | 6000 | 6.2285 |
| 6.3566 | 2.52 | 6500 | 6.2295 |
| 6.394 | 2.71 | 7000 | 6.1790 |
| 6.3412 | 2.9 | 7500 | 6.1880 |
| 6.3115 | 3.1 | 8000 | 6.2130 |
| 6.3163 | 3.29 | 8500 | 6.1831 |
| 6.2978 | 3.49 | 9000 | 6.1945 |
| 6.3082 | 3.68 | 9500 | 6.1485 |
| 6.2729 | 3.87 | 10000 | 6.1752 |
| 6.307 | 4.07 | 10500 | 6.1331 |
| 6.2494 | 4.26 | 11000 | 6.1082 |
| 6.2523 | 4.45 | 11500 | 6.2110 |
| 6.2455 | 4.65 | 12000 | 6.1326 |
| 6.2399 | 4.84 | 12500 | 6.1779 |
| 6.2297 | 5.03 | 13000 | 6.1587 |
| 6.2374 | 5.23 | 13500 | 6.1458 |
| 6.2265 | 5.42 | 14000 | 6.1370 |
| 6.2222 | 5.62 | 14500 | 6.1511 |
| 6.2209 | 5.81 | 15000 | 6.1320 |
| 6.2146 | 6.0 | 15500 | 6.1124 |
| 6.214 | 6.2 | 16000 | 6.1439 |
| 6.1907 | 6.39 | 16500 | 6.0981 |
| 6.2119 | 6.58 | 17000 | 6.1465 |
| 6.1858 | 6.78 | 17500 | 6.1594 |
| 6.1552 | 6.97 | 18000 | 6.0742 |
| 6.1926 | 7.16 | 18500 | 6.1176 |
| 6.1813 | 7.36 | 19000 | 6.0107 |
| 6.1812 | 7.55 | 19500 | 6.0852 |
| 6.1852 | 7.75 | 20000 | 6.0845 |
| 6.1945 | 7.94 | 20500 | 6.1260 |
| 6.1542 | 8.13 | 21000 | 6.1032 |
| 6.1685 | 8.33 | 21500 | 6.0650 |
| 6.1619 | 8.52 | 22000 | 6.1028 |
| 6.1279 | 8.71 | 22500 | 6.1269 |
| 6.1575 | 8.91 | 23000 | 6.0793 |
| 6.1401 | 9.1 | 23500 | 6.1479 |
| 6.159 | 9.3 | 24000 | 6.0319 |
| 6.1227 | 9.49 | 24500 | 6.0677 |
| 6.1201 | 9.68 | 25000 | 6.0527 |
| 6.1473 | 9.88 | 25500 | 6.1305 |
| 6.1539 | 10.07 | 26000 | 6.1079 |
| 6.091 | 10.26 | 26500 | 6.1219 |
| 6.1015 | 10.46 | 27000 | 6.1317 |
| 6.1048 | 10.65 | 27500 | 6.1149 |
| 6.0955 | 10.84 | 28000 | 6.1216 |
| 6.129 | 11.04 | 28500 | 6.0427 |
| 6.1007 | 11.23 | 29000 | 6.1289 |
| 6.1266 | 11.43 | 29500 | 6.0564 |
| 6.1203 | 11.62 | 30000 | 6.1143 |
| 6.1038 | 11.81 | 30500 | 6.0957 |
| 6.0989 | 12.01 | 31000 | 6.0707 |
| 6.0571 | 12.2 | 31500 | 6.0013 |
| 6.1017 | 12.39 | 32000 | 6.1356 |
| 6.0649 | 12.59 | 32500 | 6.0981 |
| 6.0704 | 12.78 | 33000 | 6.0588 |
| 6.088 | 12.97 | 33500 | 6.0796 |
| 6.1112 | 13.17 | 34000 | 6.0809 |
| 6.0888 | 13.36 | 34500 | 6.0776 |
| 6.0482 | 13.56 | 35000 | 6.0710 |
| 6.0588 | 13.75 | 35500 | 6.0877 |
| 6.0517 | 13.94 | 36000 | 6.0650 |
| 6.0832 | 14.14 | 36500 | 5.9890 |
| 6.0655 | 14.33 | 37000 | 6.0445 |
| 6.0705 | 14.52 | 37500 | 6.0037 |
| 6.0789 | 14.72 | 38000 | 6.0777 |
| 6.0645 | 14.91 | 38500 | 6.0475 |
| 6.0347 | 15.1 | 39000 | 6.1148 |
| 6.0478 | 15.3 | 39500 | 6.0639 |
| 6.0638 | 15.49 | 40000 | 6.0373 |
| 6.0377 | 15.69 | 40500 | 6.0116 |
| 6.0402 | 15.88 | 41000 | 6.0483 |
| 6.0382 | 16.07 | 41500 | 6.1025 |
| 6.039 | 16.27 | 42000 | 6.0488 |
| 6.0232 | 16.46 | 42500 | 6.0219 |
| 5.9946 | 16.65 | 43000 | 6.0541 |
| 6.063 | 16.85 | 43500 | 6.0436 |
| 6.0141 | 17.04 | 44000 | 6.0609 |
| 6.0196 | 17.23 | 44500 | 6.0551 |
| 6.0331 | 17.43 | 45000 | 6.0576 |
| 6.0174 | 17.62 | 45500 | 6.0498 |
| 6.0366 | 17.82 | 46000 | 6.0782 |
| 6.0299 | 18.01 | 46500 | 6.0196 |
| 6.0009 | 18.2 | 47000 | 6.0262 |
| 5.9758 | 18.4 | 47500 | 6.0824 |
| 6.0285 | 18.59 | 48000 | 6.0799 |
| 6.025 | 18.78 | 48500 | 5.9511 |
| 5.9806 | 18.98 | 49000 | 6.0086 |
| 5.9915 | 19.17 | 49500 | 6.0089 |
| 5.9957 | 19.36 | 50000 | 6.0330 |
| 6.0311 | 19.56 | 50500 | 6.0083 |
| 5.995 | 19.75 | 51000 | 6.0394 |
| 6.0034 | 19.95 | 51500 | 5.9854 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
BigSalmon/GPTHeHe | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-all-io
metrics:
- accuracy
model-index:
- name: 1.3b-all-2-epoch-v1-after-book
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-all-io
type: AlekseyKorshuk/dalio-all-io
metrics:
- name: Accuracy
type: accuracy
value: 0.06395348837209303
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1.3b-all-2-epoch-v1-after-book
This model is a fine-tuned version of [/models/1.3b-dalio-principles-book](https://huggingface.co//models/1.3b-dalio-principles-book) on the AlekseyKorshuk/dalio-all-io dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9482
- Accuracy: 0.0640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.17 | 0.07 | 1 | 2.0547 | 0.0621 |
| 2.1814 | 0.13 | 2 | 2.0547 | 0.0621 |
| 2.0963 | 0.2 | 3 | 2.0234 | 0.0625 |
| 2.1383 | 0.27 | 4 | 2.0195 | 0.0625 |
| 2.1625 | 0.33 | 5 | 2.0195 | 0.0625 |
| 2.1808 | 0.4 | 6 | 2.0156 | 0.0624 |
| 2.1587 | 0.47 | 7 | 2.0176 | 0.0626 |
| 2.0847 | 0.53 | 8 | 2.0137 | 0.0627 |
| 2.0336 | 0.6 | 9 | 2.0137 | 0.0627 |
| 2.1777 | 0.67 | 10 | 2.0059 | 0.0629 |
| 2.2034 | 0.73 | 11 | 2.0 | 0.0630 |
| 2.1665 | 0.8 | 12 | 1.9941 | 0.0628 |
| 2.0352 | 0.87 | 13 | 1.9883 | 0.0629 |
| 2.1263 | 0.93 | 14 | 1.9834 | 0.0628 |
| 2.1282 | 1.0 | 15 | 1.9785 | 0.0632 |
| 1.7159 | 1.07 | 16 | 1.9766 | 0.0633 |
| 1.8346 | 1.13 | 17 | 1.9775 | 0.0635 |
| 1.7183 | 1.2 | 18 | 1.9824 | 0.0634 |
| 1.6086 | 1.27 | 19 | 1.9883 | 0.0635 |
| 1.6497 | 1.33 | 20 | 1.9893 | 0.0634 |
| 1.6267 | 1.4 | 21 | 1.9854 | 0.0637 |
| 1.5962 | 1.47 | 22 | 1.9766 | 0.0637 |
| 1.5168 | 1.53 | 23 | 1.9697 | 0.0637 |
| 1.6213 | 1.6 | 24 | 1.9619 | 0.0637 |
| 1.4789 | 1.67 | 25 | 1.9580 | 0.0638 |
| 1.6796 | 1.73 | 26 | 1.9551 | 0.0638 |
| 1.5964 | 1.8 | 27 | 1.9531 | 0.0638 |
| 1.787 | 1.87 | 28 | 1.9512 | 0.0639 |
| 1.6536 | 1.93 | 29 | 1.9492 | 0.0640 |
| 1.7178 | 2.0 | 30 | 1.9482 | 0.0640 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BigSalmon/GPTNeo350MInformalToFormalLincoln | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Olusegun/autotrain-data-disease_tokens
co2_eq_emissions:
emissions: 1.569698418187329
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2095367455
- CO2 Emissions (in grams): 1.5697
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Olusegun/autotrain-disease_tokens-2095367455
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Olusegun/autotrain-disease_tokens-2095367455", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Olusegun/autotrain-disease_tokens-2095367455", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
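# Not part of the original card: a sketch of turning the raw logits into
# per-token labels for this entity-extraction model.
predicted_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
labels = [model.config.id2label[int(i)] for i in predicted_ids]
print(list(zip(tokens, labels)))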
``` |
BigSalmon/InformalToFormalLincoln21 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0972 | 1.0 | 291 | 1.7066 |
| 1.6391 | 2.0 | 582 | 1.4318 |
| 1.4844 | 3.0 | 873 | 1.3734 |
| 1.3997 | 4.0 | 1164 | 1.3806 |
| 1.3398 | 5.0 | 1455 | 1.1957 |
| 1.2846 | 6.0 | 1746 | 1.2837 |
| 1.2379 | 7.0 | 2037 | 1.2665 |
| 1.1969 | 8.0 | 2328 | 1.2154 |
| 1.1651 | 9.0 | 2619 | 1.1756 |
| 1.1415 | 10.0 | 2910 | 1.2114 |
| 1.1296 | 11.0 | 3201 | 1.2138 |
| 1.1047 | 12.0 | 3492 | 1.1655 |
| 1.0802 | 13.0 | 3783 | 1.2566 |
| 1.0775 | 14.0 | 4074 | 1.1650 |
| 1.0645 | 15.0 | 4365 | 1.1294 |
| 1.062 | 16.0 | 4656 | 1.2480 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
BigSalmon/NEO125InformalToFormalLincoln | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-11-15T20:34:03Z | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
[*Click here to download the latest Double Exposure embedding for SD 2.x in higher resolution*](https://huggingface.co/joachimsallstrom/Double-Exposure-Embedding)!
**Double Exposure Diffusion**
This is version 2 of the <i>Double Exposure Diffusion</i> model, trained specifically on images of people and a few animals.
The model file (Double_Exposure_v2.ckpt) can be downloaded on the **Files** page. You trigger double exposure style images using the token: **_dublex style_** or just **_dublex_**.
**Example 1:**

#### Example prompts and settings
<i>Galaxy man (image 1):</i><br>
**dublex man galaxy**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3273014177, Size: 512x512_
<i>Emma Stone (image 2):</i><br>
**dublex style Emma Stone, galaxy**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 250257155, Size: 512x512_
<i>Frodo (image 6):</i><br>
**dublex style young Elijah Wood as (Frodo), portrait, dark nature**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3717002975, Size: 512x512_
<br>
**Example 2:**

#### Example prompts and settings
<i>Scarlett Johansson (image 1):</i><br>
**dublex Scarlett Johansson, (haunted house), black background**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3059560186, Size: 512x512_
<i>Frozen Elsa (image 3):</i><br>
**dublex style Elsa, ice castle**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2867934627, Size: 512x512_
<i>Wolf (image 4):</i><br>
**dublex style wolf closeup, moon**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 312924946, Size: 512x512_
<br>
<p>
This model was trained using Shivam's DreamBooth model on Google Colab @ 2000 steps.
</p>
The previous version 1 of Double Exposure Diffusion is also available in the **Files** section.
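For illustration only (this is not part of the original card): assuming `Double_Exposure_v2.ckpt` has been downloaded locally and a recent version of `diffusers` is installed, the "galaxy man" settings from Example 1 map roughly onto the following sketch, where "Euler a" corresponds to `EulerAncestralDiscreteScheduler`, CFG scale to `guidance_scale`, and the seed to a `torch.Generator`.
```python
# A rough sketch under the assumptions above, not instructions from the author.
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_single_file("Double_Exposure_v2.ckpt")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # "Euler a"
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

image = pipe(
    "dublex man galaxy",      # trigger token + subject, as in Example 1
    num_inference_steps=20,   # Steps: 20
    guidance_scale=7,         # CFG scale: 7
    height=512,
    width=512,                # Size: 512x512
    generator=torch.Generator().manual_seed(3273014177),
).images[0]
image.save("dublex_galaxy_man.png")
```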
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Bman/DialoGPT-medium-harrypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-tommasory
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8308823529411765
- name: F1
type: f1
value: 0.8733944954128441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-tommasory
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7098
- Accuracy: 0.8309
- F1: 0.8734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5196 | 1.09 | 500 | 0.5289 | 0.8260 | 0.8739 |
| 0.3407 | 2.18 | 1000 | 0.7098 | 0.8309 | 0.8734 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
BossLee/t5-gec | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 6 | null | depressed man sitting at a bar drinking whisky and smoking a cigarette |
Botslity/Bot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt2-witcherbooks-clm
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt2-witcherbooks-clm
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
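For illustration only (not from the original card): the optimizer dictionary above is the serialized form of the TensorFlow `AdamWeightDecay` optimizer shipped with `transformers`, which could be reconstructed roughly as follows; the final `compile` step is an assumption, since the card omits the rest of the training setup.
```python
# A sketch only: rebuilds the optimizer described above (TensorFlow required).
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)
# Assumed usage with the TF model, e.g.:
# model.compile(optimizer=optimizer)
```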
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Branex/gpt-neo-2.7B | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### Player on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by Laughify
This is the Stable Diffusion model fine-tuned on the Player concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: ****
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
|
Brayan/CNN_Brain_Tumor | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
This is a first attempt at following the directions from the Hugging Face course. It was run on Colab and a private server.
## Intended uses & limitations
This model is fine-tuned for extractive question answering.
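As an illustration (not from the original card), an extractive-QA fine-tune like this is typically queried through the `question-answering` pipeline; the model path below is a hypothetical placeholder for wherever this checkpoint is saved or published.
```python
# Hypothetical usage sketch for an extractive QA fine-tune of bert-base-cased.
from transformers import pipeline

qa = pipeline("question-answering", model="./bert-finetuned-squad")  # hypothetical path

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```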
## Training and evaluation data
SQuAD
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Brona/poc_de | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245878206545592
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
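For illustration only: the values listed above correspond one-to-one to fields of `transformers.TrainingArguments`; the sketch below is an assumption-laden reconstruction (the output directory and any setting not listed in the card are guesses), not the original training script.
```python
# A sketch mapping the listed hyperparameters onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```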
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8516 | 1.0 | 250 | 0.3235 | 0.9055 | 0.9024 |
| 0.2547 | 2.0 | 500 | 0.2259 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 62 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# neonHorror
This is a Stable Diffusion model for horror illustrations with a little bit of neon light.
Some recommendations: the magic word for your prompts is neonHorror. Sometimes you would use prompts like:
request, in neonHorror style
or
an illustration of request, in neonHorror style
or
neonHorror, request
PS: you can replace 'request' with a person, character, etc.
If you enjoy my work, please consider supporting me:
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/qFz4YCE.png width=30% height=30%>
<img src=https://imgur.com/H3zsCIP.png width=30% height=30%>
<img src=https://imgur.com/KcgTQEE.png width=30% height=30%>
<img src=https://imgur.com/5p6sUQk.png width=30% height=30%>
<img src=https://imgur.com/U1rpAQq.png width=30% height=30%>
<img src=https://imgur.com/lfHCbiV.png width=30% height=30%>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 71 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-multilingual-cased-finetuned-squad-squadv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-squad-squadv
This model is a fine-tuned version of [monakth/bert-base-multilingual-cased-finetuned-squad](https://huggingface.co/monakth/bert-base-multilingual-cased-finetuned-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: farsi_lastname_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# farsi_lastname_classifier
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0436
- Pearson: 0.9325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 12 | 0.2989 | 0.6985 |
| No log | 2.0 | 24 | 0.1378 | 0.7269 |
| No log | 3.0 | 36 | 0.0459 | 0.9122 |
| No log | 4.0 | 48 | 0.0454 | 0.9304 |
| No log | 5.0 | 60 | 0.0564 | 0.9168 |
| No log | 6.0 | 72 | 0.0434 | 0.9315 |
| No log | 7.0 | 84 | 0.0452 | 0.9254 |
| No log | 8.0 | 96 | 0.0381 | 0.9320 |
| No log | 9.0 | 108 | 0.0441 | 0.9327 |
| No log | 10.0 | 120 | 0.0436 | 0.9325 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
CAMeL-Lab/bert-base-arabic-camelbert-msa | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,967 | null | ---
tags:
- conversational
---
# RedBot made from DialoGPT-medium |
CLTL/icf-levels-adm | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: farsi_lastname_classifier_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# farsi_lastname_classifier_1
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0482
- Pearson: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 12 | 0.2705 | 0.7018 |
| No log | 2.0 | 24 | 0.0993 | 0.7986 |
| No log | 3.0 | 36 | 0.0804 | 0.8347 |
| No log | 4.0 | 48 | 0.0433 | 0.9246 |
| No log | 5.0 | 60 | 0.0559 | 0.9176 |
| No log | 6.0 | 72 | 0.0465 | 0.9334 |
| No log | 7.0 | 84 | 0.0503 | 0.9154 |
| No log | 8.0 | 96 | 0.0438 | 0.9222 |
| No log | 9.0 | 108 | 0.0468 | 0.9260 |
| No log | 10.0 | 120 | 0.0482 | 0.9232 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
CalvinHuang/mt5-small-finetuned-amazon-en-es | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| summarization | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
license: mit
---
### qingqingdezhaopian on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by liuwei33
This is the Stable Diffusion model fine-tuned on the qingqingdezhaopian concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **15.png**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
15.png

|
Cameron/BERT-mdgender-convai-binary | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: creativeml-openrail-m
---
portrait of a beautiful woman in the style of PatrickNagel







 |
Capreolus/electra-base-msmarco | [
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"transformers"
]
| text-classification | {
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 110 | null | ---
license: creativeml-openrail-m
---
This model tries to mimic the stylized 3D look, but with a realistic twist on texture and overall material rendition.
Use "tdst style" (without quotes) to activate the model
As usual, if you want a better likeness with your subject you can either use brackets like in: [3dst style:10] or give more emphasis to the subject like in: (subject:1.3) |
dccuchile/albert-base-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mT5_multilingual_XLSum-finetuned-liputan6-coba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-liputan6-coba
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2713
- Rouge1: 0.3371
- Rouge2: 0.2029
- Rougel: 0.2927
- Rougelsum: 0.309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.4304 | 1.0 | 4474 | 1.2713 | 0.3371 | 0.2029 | 0.2927 | 0.309 |
| 1.4286 | 2.0 | 8948 | 1.2713 | 0.3371 | 0.2029 | 0.2927 | 0.309 |
| 1.429 | 3.0 | 13422 | 1.2713 | 0.3371 | 0.2029 | 0.2927 | 0.309 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
dccuchile/albert-large-spanish-finetuned-pawsx | [
"pytorch",
"albert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8422468886646486
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2661
- F1: 0.8422
## Model description
More information needed
## Intended uses & limitations
More information needed
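For reference, a minimal named-entity-recognition sketch is shown below; the model id and example sentence are placeholders (point the pipeline at the actual repository id or a local checkpoint directory):
```python
from transformers import pipeline

# Placeholder model id: replace with the actual repository id or local path of this fine-tuned checkpoint.
ner = pipeline("token-classification", model="xlm-roberta-base-finetuned-panx-fr", aggregation_strategy="simple")
print(ner("Jean Dupont travaille pour l'ONU à Lyon."))
```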
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5955 | 1.0 | 191 | 0.3344 | 0.7932 |
| 0.2556 | 2.0 | 382 | 0.2923 | 0.8252 |
| 0.1741 | 3.0 | 573 | 0.2661 | 0.8422 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dccuchile/albert-tiny-spanish-finetuned-ner | [
"pytorch",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7518
## Model description
More information needed
## Intended uses & limitations
More information needed
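For reference, a minimal question-answering sketch is shown below; the model id and the example question/context are placeholders (point the pipeline at the actual repository id or a local checkpoint directory):
```python
from transformers import pipeline

# Placeholder model id: replace with the actual repository id or local path of this fine-tuned checkpoint.
qa = pipeline("question-answering", model="distilbert-base-uncased-finetuned-squad")
print(qa(question="Where was the treaty signed?", context="The treaty was signed in Paris in 1951."))
```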
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5791 | 1.0 | 554 | 2.2242 |
| 2.0656 | 2.0 | 1108 | 1.8537 |
| 1.6831 | 3.0 | 1662 | 1.7848 |
| 1.4963 | 4.0 | 2216 | 1.7518 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8340
## Model description
More information needed
## Intended uses & limitations
More information needed
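For reference, a minimal masked-language-modelling sketch is shown below; the model id and example sentence are placeholders (point the pipeline at the actual repository id or a local checkpoint directory):
```python
from transformers import pipeline

# Placeholder model id: replace with the actual repository id or local path of this fine-tuned checkpoint.
fill_mask = pipeline("fill-mask", model="distilroberta-base-finetuned-wikitext2")
print(fill_mask("The Eiffel Tower is located in <mask>."))  # RoBERTa-style models use the <mask> token
```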
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0843 | 1.0 | 2406 | 1.9226 |
| 1.9913 | 2.0 | 4812 | 1.8820 |
| 1.9597 | 3.0 | 7218 | 1.8214 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-qa-mlqa | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 115.65 +/- 116.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholders: replace repo_id and filename with the actual values for this repository.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Yujun1of1/concrete-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Yujun1of1/concrete-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2256
- Validation Loss: 2.6946
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
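For reference, a minimal masked-language-modelling sketch is shown below; the repo id mirrors the card title but is still an assumption, and `framework="tf"` is used because the model was trained with Keras:
```python
from transformers import pipeline

# Assumed repo id (taken from the card title); replace with the actual repository id or local path if different.
fill_mask = pipeline("fill-mask", model="Yujun1of1/concrete-finetuned-imdb", framework="tf")
print(fill_mask("This movie was absolutely [MASK]."))  # DistilBERT-style models use the [MASK] token
```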
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2256 | 2.6946 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 39 | null | Access to model ThiennNguyen/vi-sbert-QA is restricted and you are not in the authorized list. Visit https://huggingface.co/ThiennNguyen/vi-sbert-QA to ask for access. |
dccuchile/bert-base-spanish-wwm-uncased-finetuned-ner | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Mohan515/t5-small-finetuned-medical
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mohan515/t5-small-finetuned-medical
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8018
- Validation Loss: 0.5835
- Train Rouge1: 43.3783
- Train Rouge2: 35.1091
- Train Rougel: 41.6332
- Train Rougelsum: 42.5743
- Train Gen Len: 17.4718
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
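For reference, a minimal summarization sketch is shown below; the repo id mirrors the card title but is still an assumption, and the input text and generation settings are placeholders:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Assumed repo id (taken from the card title); replace with the actual repository id or local path if different.
model_name = "Mohan515/t5-small-finetuned-medical"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "summarize: Patient presents with persistent cough and mild fever for three days ..."  # placeholder input
inputs = tokenizer(text, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_length=32)  # assumed generation settings
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```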
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 0.8018 | 0.5835 | 43.3783 | 35.1091 | 41.6332 | 42.5743 | 17.4718 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
|
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pawsx | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-text-classification-template
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-text-classification-template
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6637
- F1: 0.5
- Roc Auc: 0.6667
- Accuracy: 0.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
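For reference, a minimal inference sketch is shown below; the model path is an assumption, and the sigmoid/threshold step assumes a multi-label setup (suggested by the F1/ROC-AUC metrics above, though the card does not state it explicitly):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder path: replace with the actual repository id or local path of this fine-tuned checkpoint.
model_name = "distilbert-base-uncased-text-classification-template"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Example text to classify", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)   # one independent probability per label (multi-label assumption)
preds = (probs > 0.5).int()     # the 0.5 decision threshold is an assumption
print(probs, preds)
```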
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---:|:-------:|:--------:|
| No log | 1.0 | 6 | 0.6637 | 0.5 | 0.6667 | 0.3333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
dccuchile/distilbert-base-spanish-uncased-finetuned-mldoc | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
language: zh
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- bilingual
- zh
- Chinese
- en
- English
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
---
# AltDiffusion
| 名称 Name | 任务 Task | 语言 Language(s) | 模型 Model | Github |
|:----------:| :----: |:-------------------:| :----: |:------:|
| AltDiffusion | 多模态 Multimodal | 中英文 Chinese&English | Stable Diffusion | [FlagAI](https://github.com/FlagAI-Open/FlagAI) |
## Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run AltDiffusion:
[](https://huggingface.co/spaces/BAAI/bilingual_stable_diffusion)
# 模型信息 Model Information
我们使用 [AltCLIP](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md),基于 [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) 训练了双语Diffusion模型,训练数据来自 [WuDao数据集](https://data.baai.ac.cn/details/WuDaoCorporaText) 和 [LAION](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus) 。
我们的版本在中英文对齐方面表现非常出色,是目前市面上开源的最强版本,保留了原版stable diffusion的大部分能力,并且在某些例子上比有着比原版模型更出色的能力。
AltDiffusion 模型由名为 AltCLIP 的双语 CLIP 模型支持,该模型也可在本项目中访问。您可以阅读 [此教程](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) 了解更多信息。
AltDiffusion支持线上演示,点击 [这里](https://huggingface.co/spaces/BAAI/FlagStudio) 在线试玩!
We used [AltCLIP](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md), and trained a bilingual Diffusion model based on [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion), with training data from [WuDao dataset](https://data.baai.ac.cn/details/WuDaoCorporaText) and [LAION](https://huggingface.co/datasets/laion/laion2B-en).
Our model performs well in aligning Chinese and English, and is the strongest open source version on the market today, retaining most of the stable diffusion capabilities of the original, and in some cases even better than the original model.
AltDiffusion model is backed by a bilingual CLIP model named AltCLIP, which is also accessible in FlagAI. You can read [this tutorial](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) for more information.
AltDiffusion now supports online demo, try out it by clicking [here](https://huggingface.co/spaces/BAAI/FlagStudio)!
## 引用
关于AltCLIP,我们已经推出了相关报告,有更多细节可以查阅,如对您的工作有帮助,欢迎引用。
If you find this work helpful, please consider to cite
```
@article{https://doi.org/10.48550/arxiv.2211.06679,
doi = {10.48550/ARXIV.2211.06679},
url = {https://arxiv.org/abs/2211.06679},
author = {Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
title = {AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
# 模型权重 Model Weights
第一次运行AltDiffusion模型时会自动从 [这里](https://model.baai.ac.cn/model-detail/100076) 下载如下权重,
The following weights are automatically downloaded from [here](https://model.baai.ac.cn/model-detail/100076) when the AltDiffusion model is run for the first time:
| 模型名称 Model name | 大小 Size | 描述 Description |
|------------------------------|---------|-------------------------------------------------------|
| StableDiffusionSafetyChecker | 1.13G | 图片的安全检查器;Safety checker for image |
| AltDiffusion | 8.0G | 我们的双语AltDiffusion模型; Our bilingual AltDiffusion model |
| AltCLIP | 3.22G | 我们的双语AltCLIP模型;Our bilingual AltCLIP model |
# 示例 Example
## 🧨Diffusers Example
**AltDiffusion** 已被添加到 🧨Diffusers!
我们的[代码示例](https://colab.research.google.com/drive/1tBJGvocO4TBKBI22oKHtqyPDgcjqDSgF#scrollTo=QkNJDy4sVRBu)已放到colab上,欢迎使用。
您可以在 [此处](https://huggingface.co/docs/diffusers/main/en/api/pipelines/alt_diffusion) 查看文档页面。
以下示例将使用fast DPM 调度程序生成图像, 在V100 上耗时大约为 2 秒。
You can run our diffusers example through [here](https://colab.research.google.com/drive/1tBJGvocO4TBKBI22oKHtqyPDgcjqDSgF#scrollTo=QkNJDy4sVRBu) in colab.
You can see the documentation page [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/alt_diffusion).
The following example will use the fast DPM scheduler to generate an image in ca. 2 seconds on a V100.
First you should install diffusers main branch and some dependencies:
```
pip install git+https://github.com/huggingface/diffusers.git torch transformers accelerate sentencepiece
```
then you can run the following example:
```python
from diffusers import AltDiffusionPipeline, DPMSolverMultistepScheduler
import torch
pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion", torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图"
# or in English:
# prompt = "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap"
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("./alt.png")
```

## Transformers Example
```python
import os
import torch
import transformers
from transformers import BertPreTrainedModel
from transformers.models.clip.modeling_clip import CLIPPreTrainedModel
from transformers.models.xlm_roberta.tokenization_xlm_roberta import XLMRobertaTokenizer
from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
from diffusers import StableDiffusionPipeline
from transformers import BertPreTrainedModel,BertModel,BertConfig
import torch.nn as nn
import torch
from transformers.models.xlm_roberta.configuration_xlm_roberta import XLMRobertaConfig
from transformers import XLMRobertaModel
from transformers.activations import ACT2FN
from typing import Optional
class RobertaSeriesConfig(XLMRobertaConfig):
def __init__(self, pad_token_id=1, bos_token_id=0, eos_token_id=2,project_dim=768,pooler_fn='cls',learn_encoder=False, **kwargs):
super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
self.project_dim = project_dim
self.pooler_fn = pooler_fn
# self.learn_encoder = learn_encoder
class RobertaSeriesModelWithTransformation(BertPreTrainedModel):
_keys_to_ignore_on_load_unexpected = [r"pooler"]
_keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
base_model_prefix = 'roberta'
config_class= XLMRobertaConfig
def __init__(self, config):
super().__init__(config)
self.roberta = XLMRobertaModel(config)
self.transformation = nn.Linear(config.hidden_size, config.project_dim)
self.post_init()
def get_text_embeds(self,bert_embeds,clip_embeds):
return self.merge_head(torch.cat((bert_embeds,clip_embeds)))
def set_tokenizer(self, tokenizer):
self.tokenizer = tokenizer
def forward(self, input_ids: Optional[torch.Tensor] = None) :
attention_mask = (input_ids != self.tokenizer.pad_token_id).to(torch.int64)
outputs = self.base_model(
input_ids=input_ids,
attention_mask=attention_mask,
)
projection_state = self.transformation(outputs.last_hidden_state)
return (projection_state,)
model_path_encoder = "BAAI/RobertaSeriesModelWithTransformation"
model_path_diffusion = "BAAI/AltDiffusion"
device = "cuda"
seed = 12345
tokenizer = XLMRobertaTokenizer.from_pretrained(model_path_encoder, use_auth_token=True)
tokenizer.model_max_length = 77
text_encoder = RobertaSeriesModelWithTransformation.from_pretrained(model_path_encoder, use_auth_token=True)
text_encoder.set_tokenizer(tokenizer)
print("text encode loaded")
pipe = StableDiffusionPipeline.from_pretrained(model_path_diffusion,
tokenizer=tokenizer,
text_encoder=text_encoder,
use_auth_token=True,
)
print("diffusion pipeline loaded")
pipe = pipe.to(device)
prompt = "Thirty years old lee evans as a sad 19th century postman. detailed, soft focus, candle light, interesting lights, realistic, oil canvas, character concept art by munkácsy mihály, csók istván, john everett millais, henry meynell rheam, and da vinci"
with torch.no_grad():
image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("3.png")
```
您可以在`predict_generate_images`函数里通过改变参数来调整设置,具体信息如下:
More parameters of `predict_generate_images` that you can adjust are listed below:
| 参数名 Parameter | 类型 Type | 描述 Description |
|--------------------------------|------------|-------------------------------------------------------|
| prompt | str | 提示文本; The prompt text |
| out_path | str | 输出路径; The output path to save images |
| n_samples | int | 输出图片数量; Number of images to be generated |
| skip_grid | bool | 如果为True, 会将所有图片拼接在一起,输出一张新的图片; If set to true, image gridding step will be skipped |
| ddim_step | int | DDIM模型的步数; Number of steps in ddim model |
| plms | bool | 如果为True, 则会使用plms模型; If set to true, PLMS Sampler instead of DDIM Sampler will be applied |
| scale | float | 这个值决定了文本在多大程度上影响生成的图片,值越大影响力越强; This value determines how strongly the prompt influences the generated images; larger values mean stronger influence |
| H | int | 图片的高度; Height of image |
| W | int | 图片的宽度; Width of image |
| C | int | 图片的channel数; Number of channels of the generated images |
| seed | int | 随机种子; Random seed number |
注意:模型推理要求一张至少10G以上的GPU。
Note that model inference requires a GPU with at least 10 GB of memory.
# 更多生成结果 More Results
## 中英文对齐能力 Chinese and English alignment ability
### prompt:dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap
### 英文生成结果/Generated results from English prompts

### prompt:黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图
### 中文生成结果/Generated results from Chinese prompts

## 中文表现能力/The performance for Chinese prompts
## prompt:带墨镜的男孩肖像,充满细节,8K高清

## prompt:带墨镜的中国男孩肖像,充满细节,8K高清

## 长图生成能力/The ability to generate long images
### prompt: 一只带着帽子的小狗
### 原版 stable diffusion:

### Ours:

注: 此处长图生成技术由右脑科技(RightBrain AI)提供。
Note: The long image generation technology here is provided by Right Brain Technology.
# 许可/License
该模型通过 [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) 获得许可。作者对您生成的输出不主张任何权利,您可以自由使用它们并对它们的使用负责,不得违反本许可中的规定。该许可证禁止您分享任何违反任何法律、对他人造成伤害、传播任何可能造成伤害的个人信息、传播错误信息和针对弱势群体的任何内容。您可以出于商业目的修改和使用模型,但必须包含相同使用限制的副本。有关限制的完整列表,请[阅读许可证](https://huggingface.co/spaces/CompVis/stable-diffusion-license) 。
The model is licensed with a [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license). The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. You can modify and use the model for commercial purposes, but a copy of the same use restrictions must be included. For the full list of restrictions please [read the license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) . |
dccuchile/distilbert-base-spanish-uncased-finetuned-ner | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-15112022-cert3
co2_eq_emissions:
emissions: 0.08471612463898623
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2101567677
- CO2 Emissions (in grams): 0.0847
## Validation Metrics
- Loss: 0.002
- Accuracy: 1.000
- Precision: 0.990
- Recall: 0.992
- F1: 0.991
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-15112022-cert3-2101567677
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-15112022-cert3-2101567677", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-15112022-cert3-2101567677", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Certified-Zoomer/DialoGPT-small-rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: unknown
inference: false
---
# Samsamart model
**Hi**
I decided to make my own model, in the style of... you know which artist. I wanted the faces to look a little more like real ones, but at the same time keep the style.
___
`instance_prompt`: samsamart style
Euler a: 20-40
CFG: 4-8
*--- do not use "ugly" as negative prompt! ---*


|
Chaewon/mnmt_decoder_en | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
---
### Building layouts on Stable Diffusion via Dreambooth
#### model by ThrinathMphasis
This is the Stable Diffusion model fine-tuned on the Building layouts concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks building layout**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
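For reference, a minimal `diffusers` sketch using the instance prompt above is shown below; the repository id is a placeholder for wherever this checkpoint is stored, and the sampler settings are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id: replace with the actual location of this Dreambooth checkpoint.
pipe = StableDiffusionPipeline.from_pretrained("ThrinathMphasis/building-layouts", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks building layout", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("building_layout.png")
```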
Here are the images used for training this concept:





|
Chaewon/mnmt_decoder_en_gpt2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: agpl-3.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# sentence-t5-base-nlpl-code-x-glue
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It has been trained with the [code_x_glue_tc_text_to_code](https://huggingface.co/datasets/code_x_glue_tc_text_to_code) dataset.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6250 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Chakita/KNUBert | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1637
- F1: 0.8581
## Model description
More information needed
## Intended uses & limitations
More information needed
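For reference, a minimal named-entity-recognition sketch is shown below; the model id and example sentence are placeholders (point the pipeline at the actual repository id or a local checkpoint directory):
```python
from transformers import pipeline

# Placeholder model id: replace with the actual repository id or local path of this fine-tuned checkpoint.
ner = pipeline("token-classification", model="xlm-roberta-base-finetuned-panx-de-fr", aggregation_strategy="simple")
print(ner("Angela Merkel a rencontré Emmanuel Macron à Berlin."))
```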
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.29 | 1.0 | 715 | 0.1885 | 0.8231 |
| 0.1443 | 2.0 | 1430 | 0.1607 | 0.8479 |
| 0.0937 | 3.0 | 2145 | 0.1637 | 0.8581 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 1.16.1
- Tokenizers 0.13.2
|
Chakita/Kalbert | [
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
---
*Text classification model SloBERTa-Trendi-Topics 1.0*
The SloBERTa-Trendi-Topics model is a text classification model for categorizing news texts with one of 13 topic labels. It was trained on a set of approx. 36,000 Slovene texts from various Slovene news sources included in the Trendi Monitor Corpus of Slovene (http://hdl.handle.net/11356/1590), such as "rtvslo.si", "sta.si", "delo.si", "dnevnik.si", "vecer.com", "24ur.com", "siol.net", "gorenjskiglas.si", etc.
The texts were semi-automatically categorized into 13 categories based on the sections under which they were published (i.e. URLs). The set of labels was developed in accordance with related categorization schemas used in other corpora and comprises the following topics: "črna kronika" (crime and accidents), "gospodarstvo, posel, finance" (economy, business, finance), "izobraževanje" (education), "okolje" (environment), "prosti čas" (free time), "šport" (sport), "umetnost, kultura" (art, culture), "vreme" (weather), "zabava" (entertainment), "zdravje" (health), "znanost in tehnologija" (science and technology), "politika" (politics), and "družba" (society). The categorization process is explained in more detail in Kosem et al. (2022): https://nl.ijs.si/jtdh22/pdf/JTDH2022_Kosem-et-al_Spremljevalni-korpus-Trendi.pdf
The model was trained on the labeled texts using the SloBERTa 2.0 contextual embeddings model (https://huggingface.co/EMBEDDIA/sloberta, also available at CLARIN.SI: http://hdl.handle.net/11356/1397) and validated on a development set of 1,293 texts using the simpletransformers library and the following hyperparameters:
- Train batch size: 8
- Learning rate: 1e-5
- Max. sequence length: 512
- Number of epochs: 2
The model achieves a macro-F1-score of 0.94 on a test set of 1,295 texts (best for "črna kronika", "politika", "šport", and "vreme" at 0.98, worst for "prosti čas" at 0.83).
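For reference, a minimal prediction sketch with the simpletransformers library mentioned above is shown below; the checkpoint path is a placeholder for wherever the model files are stored, and the "camembert" model type assumes SloBERTa's CamemBERT-style architecture:
```python
from simpletransformers.classification import ClassificationModel

# Placeholder path: point this at the directory containing the downloaded model files.
model = ClassificationModel(
    "camembert",                        # assumed model type, matching SloBERTa's architecture
    "path/to/sloberta-trendi-topics",
    args={"max_seq_length": 512},       # mirrors the maximum sequence length listed above
    use_cuda=False,
)
predictions, raw_outputs = model.predict(["Besedilo novice v slovenščini ..."])  # placeholder input text
print(predictions)
```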
|
Chakita/gpt2_mwp | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Champion/test_upload_vox2_wavlm_epoch8 | [
"sidekit",
"audio"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v2-shallow-bart-5k-1e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-shallow-bart-5k-1e-3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2228
- Wer: 219.0067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.4271 | 0.2 | 1000 | 5.5063 | 287.8938 |
| 4.9304 | 1.04 | 2000 | 5.4201 | 198.6618 |
| 4.6411 | 1.24 | 3000 | 5.2925 | 332.8481 |
| 4.3797 | 2.09 | 4000 | 5.2913 | 155.6744 |
| 4.2848 | 2.29 | 5000 | 5.2228 | 219.0067 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.11.0
- Datasets 2.6.1
- Tokenizers 0.13.2
|
CharlieChen/feedback-bigbird | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: main-sentiment-model-chats-2-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# main-sentiment-model-chats-2-labels
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3718
- Accuracy: 0.8567
- F1: 0.8459
## Model description
More information needed
## Intended uses & limitations
More information needed
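For reference, a minimal inference sketch is shown below; the model path is a placeholder (use the actual repository id or the local training output directory):
```python
from transformers import pipeline

# Placeholder path: replace with the actual repository id or local path of this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="main-sentiment-model-chats-2-labels")
print(classifier("Thanks, that solved my problem!"))
```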
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.14.0.dev20221113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Charlotte77/model_test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-15112022-cert4
co2_eq_emissions:
emissions: 18.550679640609356
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2103567748
- CO2 Emissions (in grams): 18.5507
## Validation Metrics
- Loss: 0.002
- Accuracy: 1.000
- Precision: 0.990
- Recall: 0.992
- F1: 0.991
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-15112022-cert4-2103567748
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-15112022-cert4-2103567748", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-15112022-cert4-2103567748", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
ChauhanVipul/BERT | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T09:54:50Z | Brandschutzunterweisung
Sinn und Zielsetzung einer gesetzlich geforderten Brandschutzunterweisung in Unternehmen und Einrichtungen sind zuallererst, die Mitarbeiter durch fachlich kompetentes Training in die Lage zu versetzen, im Brandfall mit Besonnenheit die notwendigen Schritte durchzuführen.
[https://www.bs-bh.de/services/brandschutzunterweisung/](https://www.bs-bh.de/services/brandschutzunterweisung/) |
Cheatham/xlm-roberta-base-finetuned | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-jd_Nov15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-jd_Nov15
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 469 | 0.0547 |
| 1.8699 | 2.0 | 938 | 0.0090 |
| 0.0888 | 3.0 | 1407 | 0.0061 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Cheatham/xlm-roberta-large-finetuned-d12_2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole01
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 65.80 +/- 17.22
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Cheatham/xlm-roberta-large-finetuned4 | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
license: mit
---
### JRPG Monster art style via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by wooshim
This is the Stable Diffusion model fine-tuned on the dtv_pkmn_monster_style concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)` in your prompts.
Please use **"feralplmr"** in your prompt to trigger the style.
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
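If you want a quick local test with `diffusers`, a minimal sketch could look like the one below. The repo ID is a placeholder (replace it with this model's actual Hub path) and the prompt is only illustrative; the sampler and step count follow the suggestion further down.
```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Placeholder repo ID; point this at the model's actual Hub repository.
model_id = "<user>/dtv_pkmn_monster_style"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
# Euler ancestral sampler and roughly 40 steps, as suggested below.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = "a winged dragon monster, intricate scales, detailed artwork in feralplmr style"
image = pipe(prompt, num_inference_steps=40).images[0]
image.save("monster.png")
```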
Sample pictures of this concept:
Low steps (~40) and the Euler_a sampler are highly suggested.
Here is a prompt to try:
((wings)), dragon, bowser, creature, ((monster)), ((dinosaur)) intricate large dragon , ((rathalos)), detailed artwork in ((feralplmr artsyle)), feral, fullbody, monster character, scales, reptile, dragon, claws, wings, ((detailed))

|
CheonggyeMountain-Sherpa/kogpt-trinity-poem | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-11-15T11:09:43Z | Indian wedding djs in Los Angeles
Get special deals on the best [Indian wedding DJs in Los Angeles](http://dhamakapros.com/)! Check prices and availability, because a great DJ will ensure that everyone has a great time at the wedding! |
Chinat/test-classifier | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-15112022-cert6
co2_eq_emissions:
emissions: 0.0843114319344479
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2103867793
- CO2 Emissions (in grams): 0.0843
## Validation Metrics
- Loss: 0.002
- Accuracy: 1.000
- Precision: 0.989
- Recall: 0.992
- F1: 0.990
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-15112022-cert6-2103867793
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-15112022-cert6-2103867793", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-15112022-cert6-2103867793", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Ching/negation_detector | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: mlflow-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlflow-test
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
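As a rough illustration, these hyperparameters map onto a `Trainer` setup along the lines of the sketch below; the tokenization step and column names are assumptions, not taken from the actual training script.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-cased", num_labels=2)

# Tokenize the imdb dataset (assumed preprocessing).
dataset = load_dataset("imdb")
encoded = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="mlflow-test",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=1,
    seed=42,
    lr_scheduler_type="linear",
)
Trainer(model=model, args=args, train_dataset=encoded["train"],
        eval_dataset=encoded["test"], tokenizer=tokenizer).train()
```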
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.7.1
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Chiuchiyin/DialoGPT-small-Donald | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-11-15T11:58:09Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 75 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 75,
"warmup_steps": 8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Chiuchiyin/Donald | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 11.50 +/- 7.03
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
ChoboAvenger/DialoGPT-small-joshua | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T12:07:39Z | ---
language: pt
tags:
- legal
license: cc-by-sa-4.0
---
# LegalBERT Tokenizer
The **LegalBERT** tokenizer is a word-level byte-pair encoding tokenizer with a
vocabulary size of 52k tokens (containing the most common words in legal documents), based on the [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) tokenizer. The tokenizer was trained on data provided by the **BRAZILIAN SUPREME FEDERAL TRIBUNAL**, under the terms of use: [LREC 2020](https://ailab.unb.br/victor/lrec2020).
The tokenizer uses the `BertTokenizer` implementation from [transformers](https://github.com/huggingface/transformers).
**NOTE**: The results of this project do not imply in any way the position of the BRAZILIAN SUPREME FEDERAL TRIBUNAL, all being the sole and exclusive responsibility of the author.
## Tokenizer usage
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dominguesm/legal-bert-tokenizer")
# Example sentence taken from the comparison below.
example = "De ordem, a Secretaria Judiciária do Supremo Tribunal Federal INTIMA a parte abaixo identificada, ou quem as suas vezes fizer, do inteiro teor do(a) despacho/decisão presente nos autos (art. 270 do Código de Processo Cívil e art 5º da Lei 11.419/2006)."
tokens = tokenizer.tokenize(example)
```
### Comparison of results
**Original Text**: ```De ordem, a Secretaria Judiciária do Supremo Tribunal Federal INTIMA a parte abaixo identificada, ou quem as suas vezes fizer, do inteiro teor do(a) despacho/decisão presente nos autos (art. 270 do Código de Processo Cívil e art 5º da Lei 11.419/2006).```
| Tokenizer | Tokens | Num. Tokens |
| --------- | ------ | ----------- |
| BERTimbau | ```['De', 'ordem', ',', 'a', 'Secretaria', 'Judic', '##iária', 'do', 'Supremo', 'Tribunal', 'Federal', 'IN', '##TI', '##MA', 'a', 'parte', 'abaixo', 'identificada', ',', 'ou', 'quem', 'as', 'suas', 'vezes', 'fiz', '##er', ',', 'do', 'inteiro', 'teor', 'do', '(', 'a', ')', 'despa', '##cho', '/', 'decisão', 'presente', 'nos', 'auto', '##s', '(', 'art', '.', '27', '##0', 'do', 'Código', 'de', 'Processo', 'Cí', '##vil', 'e', 'art', '[UNK]', 'da', 'Lei', '11', '.', '41', '##9', '/', '2006', ')', '.']``` | 66 |
| LegalBERT | ```['De', 'ordem', ',', 'a', 'Secretaria', 'Judiciária', 'do', 'Supremo', 'Tribunal', 'Federal', 'INTIMA', 'a', 'parte', 'abaixo', 'identificada', ',', 'ou', 'quem', 'as', 'suas', 'vezes', 'fizer', ',', 'do', 'inteiro', 'teor', 'do', '(', 'a', ')', 'despacho', '/', 'decisão', 'presente', 'nos', 'autos', '(', 'art', '.', '270', 'do', 'Código', 'de', 'Processo', 'Cív', '##il', 'e', 'art', '5º', 'da', 'Lei', '11', '.', '419', '/', '2006', ')', '.']``` | 58 |
## Citation
If you use this tokenizer, please cite:
```
@misc {maicon_domingues_2022,
author = { {Maicon Domingues} },
title = { legal-bert-tokenizer (Revision d8e9d4a) },
year = 2022,
url = { https://huggingface.co/dominguesm/legal-bert-tokenizer },
doi = { 10.57967/hf/0110 },
publisher = { Hugging Face }
}
```
## Contacts:
* <a href="mailto:[email protected]">[email protected]</a>
* [NLP.ROCKS](http://nlp.rocks)
|
ChrisP/xlm-roberta-base-finetuned-marc-en | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T15:23:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-german-cased-finetuned-jl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-german-cased-finetuned-jl
This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.1 | 1000 | 1.5731 |
| No log | 0.19 | 2000 | 1.4019 |
| No log | 0.29 | 3000 | 1.3042 |
| No log | 0.39 | 4000 | 1.2398 |
| No log | 0.48 | 5000 | 1.1949 |
| No log | 0.58 | 6000 | 1.1584 |
| No log | 0.68 | 7000 | 1.1296 |
| No log | 0.77 | 8000 | 1.1055 |
| No log | 0.87 | 9000 | 1.0842 |
| No log | 0.97 | 10000 | 1.0680 |
| No log | 1.06 | 11000 | 1.0521 |
| No log | 1.16 | 12000 | 1.0388 |
| No log | 1.26 | 13000 | 1.0248 |
| No log | 1.35 | 14000 | 1.0154 |
| No log | 1.45 | 15000 | 1.0051 |
| No log | 1.55 | 16000 | 0.9981 |
| No log | 1.64 | 17000 | 0.9891 |
| No log | 1.74 | 18000 | 0.9827 |
| No log | 1.84 | 19000 | 0.9765 |
| No log | 1.93 | 20000 | 0.9714 |
| 1.2477 | 2.03 | 21000 | 0.9672 |
| 1.2477 | 2.13 | 22000 | 0.9613 |
| 1.2477 | 2.22 | 23000 | 0.9582 |
| 1.2477 | 2.32 | 24000 | 0.9548 |
| 1.2477 | 2.42 | 25000 | 0.9508 |
| 1.2477 | 2.51 | 26000 | 0.9491 |
| 1.2477 | 2.61 | 27000 | 0.9466 |
| 1.2477 | 2.71 | 28000 | 0.9458 |
| 1.2477 | 2.8 | 29000 | 0.9446 |
| 1.2477 | 2.9 | 30000 | 0.9431 |
| 1.2477 | 3.0 | 31000 | 0.9427 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.9.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
ChrisVCB/DialoGPT-medium-ej | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.68561872909699
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4175
- F1: 0.6856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1397 | 1.0 | 50 | 0.5561 | 0.5147 |
| 0.5148 | 2.0 | 100 | 0.4851 | 0.6312 |
| 0.3772 | 3.0 | 150 | 0.4175 | 0.6856 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ChristopherA08/IndoELECTRA | [
"pytorch",
"electra",
"pretraining",
"id",
"dataset:oscar",
"transformers"
]
| null | {
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2022-11-15T12:18:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-NST-TPU-test2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-NST-TPU-test2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9943
- Wer: 100.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| 5.535 | 1.0 | 2 | 4.9943 | 100.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Chun/DialoGPT-medium-dailydialog | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-11-15T12:39:17Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1309.17 +/- 78.04
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo ID and filename below are placeholders, not this model's actual Hub paths):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo ID and filename; replace with this model's actual Hub repo.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Chun/w-en2zh-mtm | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfun
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cu111
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Chun/w-en2zh-otm | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MBartForConditionalGeneration"
],
"model_type": "mbart",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: openrail
---
## Dreambooth model for a high-tech, detailed concept art style
This is a model trained on a mix of real images of fighter aircraft, warships, and spacecraft, and techy, detailed concept art from Aaron Beck, Paul Chadeisson and Rasmus Poulsen. High-tech, industrial sci-fi with a grungy aesthetic.
Use prompt: 'combotechsf'
## Example images









 |
Chungu424/DATA | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
datasets:
- AmazonScience/massive
language:
- ru
library_name: transformers
pipeline_tag: text-classification
train-eval-index:
- config: ru-RU
task: text-classification
task_id: multi_class_classification
splits:
eval_split: test
col_mapping:
utt: text
intent: target
--- |
Chungu424/qazwsx | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T13:04:55Z | ---
license: openrail
---
## Dreambooth model for a retro, pulp science fiction art style
This is a model trained on classic/retro science fiction illustrations using works from Chris Foss, John Harris, Syd Mead, Robert McCall and Philippe Bouchet. Mostly trained on space scenes with a few landscapes so it tends to produce spaceships unless otherwise prompted.
Use prompt: 'manchu'
## Example images







 |
Ciruzzo/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuned-sentiment-model-5000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9126
- name: F1
type: f1
value: 0.9149640007783615
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-sentiment-model-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6140
- Accuracy: 0.9126
- F1: 0.9150
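A quick way to try the model is the `text-classification` pipeline; the repo ID below is a placeholder for this model's actual Hub path, and the example review is purely illustrative.
```python
from transformers import pipeline

# Placeholder repo ID; replace with this model's actual Hub path.
classifier = pipeline("text-classification", model="<user>/finetuned-sentiment-model-5000-samples")
print(classifier("This movie was a complete waste of time."))
```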
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu116
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Ciruzzo/DialoGPT-small-hattypotter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T13:35:20Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: malayalam-news
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# malayalam-news
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.9255
- Validation Loss: 10.9247
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -999, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.9636 | 10.9321 | 0 |
| 10.9425 | 10.9296 | 1 |
| 10.9255 | 10.9247 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
CleveGreen/JobClassifier | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results1
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2845
- Accuracy: 0.933
- F1: 0.8024
- Precision: 0.8625
- Recall: 0.7632
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2661 | 1.0 | 2500 | 0.2845 | 0.933 | 0.8024 | 0.8625 | 0.7632 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
CleveGreen/JobClassifier_v2 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | 2022-11-15T13:56:35Z | ---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/tuwonga/zukki_style/resolve/main/zukki_style_prev.jpg"
tags:
- stable-diffusion
- text-to-image
---
### zukki_style
This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from the **Ma vie de courgette** stop-motion animation movie. Use the token **_zukki_style_** in your prompts to apply the style.
_Download the ckpt file from the "files and versions" tab into the Stable Diffusion models folder of your web-ui of choice._
_This is an experimental model: it mainly renders characters rather than scenes/landscapes, but I have found the img2img output more interesting than txt2img. You can see the results in the second and third pictures (original/img2img/img2img). It is best to check the "restore faces" option and play around with the denoising strength._
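For reference, a minimal img2img sketch with `diffusers` might look like the following. It assumes the checkpoint is also available in diffusers format (the card only mentions a ckpt file, so you may need to convert it first); the strength and guidance values are only starting points to experiment with.
```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumes diffusers-format weights are available in this repo.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("tuwonga/zukki_style", torch_dtype=torch.float16).to("cuda")

init_image = Image.open("portrait.jpg").convert("RGB").resize((512, 512))
prompt = "portrait of a person in zukki_style"
# Roughly 30 steps as in the examples; tune strength (denoising) between about 0.4 and 0.7.
image = pipe(prompt=prompt, image=init_image, strength=0.55,
             guidance_scale=7.5, num_inference_steps=30).images[0]
image.save("zukki_img2img.png")
```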
--
**Characters rendered with this model:**

_prompt and settings used: **[person] in zukki_style** | **Steps: 30, Sampler: Euler, CFG scale: 7.5**_
--
**Characters rendered with img2img:**

_prompt and settings used: **[person] in zukki_style** | **Steps: 30 - you can play around with settings**_
--
**Characters rendered with img2img:**

_prompt and settings used: **[person] in zukki_style** | **Steps: 30 - you can play around with settings**_
--
This model was trained with Dreambooth training by TheLastBen, using 32 images at 6400 steps with 25% of text encoder.
--
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
CleveGreen/JobClassifier_v2_gpt | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"GPT2ForSequenceClassification"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | 2022-11-15T13:56:42Z | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-15112022-cert7
co2_eq_emissions:
emissions: 0.08177433184040792
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2105067828
- CO2 Emissions (in grams): 0.0818
## Validation Metrics
- Loss: 0.002
- Accuracy: 0.999
- Precision: 0.991
- Recall: 0.992
- F1: 0.991
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-15112022-cert7-2105067828
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-15112022-cert7-2105067828", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-15112022-cert7-2105067828", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Cloudy/DialoGPT-CJ-large | [
"pytorch",
"conversational"
]
| conversational | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
language: is
license: apache-2.0
widget:
- text: "Yonum vjer að pað pví fremur fái góðar viðtökur, par sem svo lítur út, sem aldrei muni verða svo heiðskýrt á pessum vetri að „Noi'ðurljósið“ sjáist, eu paðan væntum vér allir skemmtunar."
---
# Details of ByT5 - Base 🧠
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
# Details of byt5-is-ocr-post-processing-old-texts
This model generates a revised version of a given Icelandic OCRed text. The model was trained with [simpleT5](https://github.com/Shivanandroy/simpleT5) on 900.000 lines (\~7.000.000 tokens) of which only 50.000 (\~400.000 tokens) were from real OCRed texts. The rest were extracted from [The Icelandic Gigaword Corpus](https://clarin.is/en/resources/gigaword/) and augmented with artificial errors. It can be assumed that increasing the amount of OCRed data can significantly improve the model.
For inference, it is recommended to feed the model one line (not necessarily whole sentences, though) at a time.
# Usage
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
MODEL = 'atlijas/byt5-is-ocr-post-processing-old-texts'
correct_ocr = pipeline('text2text-generation', model=MODEL, tokenizer=MODEL, num_return_sequences=1)
dataset = load_dataset('/path/to/', data_files='my_ocred_file.txt')
lines = dataset['train']
file_length = len(lines)
for corrected in correct_ocr(KeyDataset(lines, 'text'), max_length=150, batch_size=32):
print(corrected[0]['generated_text'])
```
# Evaluation results
The test set for this model consists of various Icelandic texts from the 19th and early 20th century. On it, the model achieves a chrF error rate reduction of 39.3%, with the original text's score being 94.6, and the processed one's 96.7. The model achieves a proportional BLEU improvement of 51.6%, with the original text's BLEU score being 97.2 and the processed one's 98.6.
# Acknowledgments
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture. |
CoShin/XLM-roberta-large_ko_en_nil_sts | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T14:15:22Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: train
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8422468886646486
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2661
- F1: 0.8422
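Once pushed to the Hub, the checkpoint can be tried with the token-classification pipeline; the repo ID below is a placeholder for this model's actual Hub path, and the example sentence is purely illustrative.
```python
from transformers import pipeline

# Placeholder repo ID; replace with this model's actual Hub path.
ner = pipeline("token-classification", model="<user>/xlm-roberta-base-finetuned-panx-fr",
               aggregation_strategy="simple")
print(ner("Emmanuel Macron a prononcé un discours à Paris."))
```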
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5955 | 1.0 | 191 | 0.3344 | 0.7932 |
| 0.2556 | 2.0 | 382 | 0.2923 | 0.8252 |
| 0.1741 | 3.0 | 573 | 0.2661 | 0.8422 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 1.16.1
- Tokenizers 0.13.2
|
CodeDanCode/CartmenBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | 2022-11-15T14:28:49Z | ---
tags:
- spacy
- token-classification
language:
- es
license: gpl-3.0
model-index:
- name: es_cantemist_ner_trf
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8487622923
- name: NER Recall
type: recall
value: 0.8416274378
- name: NER F Score
type: f_score
value: 0.8451798075
widget:
- text: "JUICIO DIAGNÓSTICO Encefalitis límbica y polineuropatía sensitiva paraneoplásicas secundarias a carcinoma microcítico de pulmón cTxN2 M0 (enfermedad limitada) ."
---
Basic spaCy BioNER pipeline, with a RoBERTa-based model, [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), and the CANTEMIST dataset, annotated with tumour morphology entities. For further information, check the [official website](https://temu.bsc.es/cantemist/). Visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
| Feature | Description |
| --- | --- |
| **Name** | `es_cantemist_ner_trf` |
| **Version** | `3.4.0` |
| **spaCy** | `>=3.4.0,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | https://huggingface.co/datasets/PlanTL-GOB-ES/cantemist-ner |
| **License** | `[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)` |
| **Author** | [The Text Mining Unit from Barcelona Supercomputing Center.](https://huggingface.co/PlanTL-GOB-ES) |
| **Copyright** | Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022) |
| **Funding** | This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL |
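A minimal usage sketch, assuming the packaged pipeline wheel has been installed locally; the example sentence is the widget text above:
```python
import spacy

# Requires the packaged es_cantemist_ner_trf pipeline to be installed first.
nlp = spacy.load("es_cantemist_ner_trf")
doc = nlp("JUICIO DIAGNÓSTICO Encefalitis límbica y polineuropatía sensitiva "
          "paraneoplásicas secundarias a carcinoma microcítico de pulmón cTxN2 M0 (enfermedad limitada).")
for ent in doc.ents:
    print(ent.text, ent.label_)  # tumour morphology entities (MORFOLOGIA_NEOPLASIA)
```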
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `MORFOLOGIA_NEOPLASIA` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 84.52 |
| `ENTS_P` | 84.88 |
| `ENTS_R` | 84.16 |
| `TRANSFORMER_LOSS` | 25646.78 |
| `NER_LOSS` | 9622.84 | |
CodeDanCode/SP-KyleBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2022-11-15T14:40:59Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 24.40 +/- 8.85
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
CodeNinja1126/bert-p-encoder | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 128 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 128,
"warmup_steps": 13,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Venkatakrishnan-Ramesh/Text_gen | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T15:06:54Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-emotion-37-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-emotion-37-labels
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1765
- Accuracy: 0.7185
- F1: 0.7178
## Model description
More information needed
## Intended uses & limitations
More information needed
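As a starting point, here is a minimal inference sketch using the `transformers` pipeline API. The repository id below is an assumption — replace it with the actual location of this checkpoint — and the example input is illustrative only.
```python
from transformers import pipeline

# Hypothetical repository id — point this at wherever the fine-tuned checkpoint is stored.
classifier = pipeline("text-classification", model="xlm-roberta-base-finetuned-emotion-37-labels")

print(classifier("I can't believe how well this turned out!"))
```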
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.4256 | 1.0 | 433 | 1.7594 | 0.4384 | 0.4079 |
| 1.5536 | 2.0 | 866 | 1.3105 | 0.5784 | 0.5631 |
| 1.1753 | 3.0 | 1299 | 1.1767 | 0.6163 | 0.6057 |
| 0.9378 | 4.0 | 1732 | 1.0613 | 0.6565 | 0.6542 |
| 0.7606 | 5.0 | 2165 | 1.0284 | 0.6808 | 0.6776 |
| 0.6167 | 6.0 | 2598 | 1.0128 | 0.6892 | 0.6888 |
| 0.5009 | 7.0 | 3031 | 1.0250 | 0.6973 | 0.6946 |
| 0.4083 | 8.0 | 3464 | 1.0506 | 0.7014 | 0.6996 |
| 0.328 | 9.0 | 3897 | 1.0658 | 0.7075 | 0.7079 |
| 0.2704 | 10.0 | 4330 | 1.0874 | 0.7106 | 0.7094 |
| 0.2203 | 11.0 | 4763 | 1.1587 | 0.7031 | 0.7010 |
| 0.1813 | 12.0 | 5196 | 1.1559 | 0.7141 | 0.7130 |
| 0.1552 | 13.0 | 5629 | 1.1483 | 0.7173 | 0.7164 |
| 0.1325 | 14.0 | 6062 | 1.1697 | 0.7173 | 0.7170 |
| 0.1239 | 15.0 | 6495 | 1.1765 | 0.7185 | 0.7178 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
CoffeeAddict93/gpt2-medium-modest-proposal | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 155 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 155,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Connorvr/BrightBot-small | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-11-15T16:28:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6021
## Model description
More information needed
## Intended uses & limitations
More information needed
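As a starting point, here is a minimal question-answering sketch using the `transformers` pipeline API. The repository id below is an assumption — replace it with the actual location of this checkpoint — and the question/context pair is illustrative only.
```python
from transformers import pipeline

# Hypothetical repository id — replace with the actual location of this checkpoint.
qa = pipeline("question-answering", model="distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="The model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```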
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4714 | 1.0 | 2225 | 2.1447 |
| 1.5082 | 2.0 | 4450 | 2.1082 |
| 0.6158 | 3.0 | 6675 | 2.6021 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Contrastive-Tension/BERT-Large-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-11-15T16:51:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-20percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-20percent
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6513
- Precision: 0.5252
- Recall: 0.6562
- F1: 0.5834
- Accuracy: 0.8044
## Model description
More information needed
## Intended uses & limitations
More information needed
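As a starting point, here is a minimal token-classification sketch using the `transformers` pipeline API. The repository id below is an assumption — replace it with the actual location of this checkpoint — and the example sentence is illustrative only.
```python
from transformers import pipeline

# Hypothetical repository id — replace with the actual location of this checkpoint.
ner = pipeline("token-classification", model="bert-finetuned-ner-20percent", aggregation_strategy="simple")

print(ner("Barack Obama visited Berlin in 2013."))
```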
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.9155 | 0.3511 | 0.4264 | 0.3851 | 0.7353 |
| No log | 2.0 | 30 | 0.7116 | 0.4845 | 0.6321 | 0.5485 | 0.7898 |
| No log | 3.0 | 45 | 0.6513 | 0.5252 | 0.6562 | 0.5834 | 0.8044 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Cool/Demo | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### iman_maleki_morteza_koutzian on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by apurik-parv
This is the Stable Diffusion model fine-tuned on the artwork of Iman Maleki and Morteza Koutzian, two Iranian painters; the concept was taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **imamk**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb).
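A minimal `diffusers` inference sketch. The repository id below is an assumption — replace it with the repo this fine-tuned checkpoint was pushed to — and the prompt is illustrative only.
```python
from diffusers import StableDiffusionPipeline
import torch

# Hypothetical repository id — replace with the actual location of this fine-tuned checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "apurik-parv/iman_maleki_morteza_koutzian", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait of a woman, imamk style").images[0]
image.save("imamk_sample.png")
```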
Sample images generated with the `imamk` concept (gallery omitted).
|
DCU-NLP/electra-base-irish-cased-discriminator-v1 | [
"pytorch",
"electra",
"pretraining",
"ga",
"transformers",
"irish",
"license:apache-2.0"
]
| null | {
"architectures": [
"ElectraForPreTraining"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_custom_architecture_100_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_custom_architecture_100_epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3615
## Model description
More information needed
## Intended uses & limitations
More information needed
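As a starting point, here is a minimal fill-mask sketch using the `transformers` pipeline API. The repository id below is an assumption — replace it with the actual location of this checkpoint — and the example text is illustrative only.
```python
from transformers import pipeline

# Hypothetical repository id — replace with the actual location of this checkpoint.
fill_mask = pipeline("fill-mask", model="BERiT_2000_custom_architecture_100_epochs")

# The mask token is taken from the tokenizer, so this works for both <mask>- and [MASK]-style vocabularies.
print(fill_mask(f"Example sentence with a {fill_mask.tokenizer.mask_token} token."))
```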
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 16.6444 | 0.19 | 500 | 8.8209 |
| 8.1904 | 0.39 | 1000 | 7.5197 |
| 7.3572 | 0.58 | 1500 | 7.1037 |
| 7.0042 | 0.77 | 2000 | 6.8817 |
| 6.8626 | 0.97 | 2500 | 6.7353 |
| 6.7582 | 1.16 | 3000 | 6.5802 |
| 6.6345 | 1.36 | 3500 | 6.4232 |
| 6.5817 | 1.55 | 4000 | 6.3375 |
| 6.517 | 1.74 | 4500 | 6.3439 |
| 6.4531 | 1.94 | 5000 | 6.2870 |
| 6.469 | 2.13 | 5500 | 6.3208 |
| 6.4503 | 2.32 | 6000 | 6.2136 |
| 6.3679 | 2.52 | 6500 | 6.2260 |
| 6.4032 | 2.71 | 7000 | 6.2015 |
| 6.357 | 2.9 | 7500 | 6.2363 |
| 6.3349 | 3.1 | 8000 | 6.2101 |
| 6.342 | 3.29 | 8500 | 6.2031 |
| 6.3047 | 3.49 | 9000 | 6.1945 |
| 6.3204 | 3.68 | 9500 | 6.1681 |
| 6.2935 | 3.87 | 10000 | 6.1999 |
| 6.3319 | 4.07 | 10500 | 6.1613 |
| 6.2528 | 4.26 | 11000 | 6.1354 |
| 6.2683 | 4.45 | 11500 | 6.2427 |
| 6.2572 | 4.65 | 12000 | 6.1477 |
| 6.2509 | 4.84 | 12500 | 6.1770 |
| 6.2402 | 5.03 | 13000 | 6.1779 |
| 6.2412 | 5.23 | 13500 | 6.1516 |
| 6.2291 | 5.42 | 14000 | 6.1498 |
| 6.2203 | 5.62 | 14500 | 6.1804 |
| 6.2341 | 5.81 | 15000 | 6.1501 |
| 6.2242 | 6.0 | 15500 | 6.1239 |
| 6.2163 | 6.2 | 16000 | 6.1567 |
| 6.2079 | 6.39 | 16500 | 6.1188 |
| 6.2176 | 6.58 | 17000 | 6.1620 |
| 6.1926 | 6.78 | 17500 | 6.1635 |
| 6.1743 | 6.97 | 18000 | 6.0749 |
| 6.1978 | 7.16 | 18500 | 6.1316 |
| 6.1868 | 7.36 | 19000 | 6.0297 |
| 6.19 | 7.55 | 19500 | 6.1126 |
| 6.2005 | 7.75 | 20000 | 6.0985 |
| 6.2056 | 7.94 | 20500 | 6.1100 |
| 6.1628 | 8.13 | 21000 | 6.1321 |
| 6.169 | 8.33 | 21500 | 6.0842 |
| 6.1636 | 8.52 | 22000 | 6.1205 |
| 6.1278 | 8.71 | 22500 | 6.1270 |
| 6.1656 | 8.91 | 23000 | 6.1049 |
| 6.1526 | 9.1 | 23500 | 6.1462 |
| 6.1624 | 9.3 | 24000 | 6.0534 |
| 6.1353 | 9.49 | 24500 | 6.0862 |
| 6.1264 | 9.68 | 25000 | 6.0844 |
| 6.1648 | 9.88 | 25500 | 6.1206 |
| 6.1574 | 10.07 | 26000 | 6.0942 |
| 6.0971 | 10.26 | 26500 | 6.1151 |
| 6.119 | 10.46 | 27000 | 6.1148 |
| 6.1217 | 10.65 | 27500 | 6.1076 |
| 6.1054 | 10.84 | 28000 | 6.1457 |
| 6.1402 | 11.04 | 28500 | 6.0442 |
| 6.1124 | 11.23 | 29000 | 6.1404 |
| 6.1457 | 11.43 | 29500 | 6.0622 |
| 6.1248 | 11.62 | 30000 | 6.1377 |
| 6.1204 | 11.81 | 30500 | 6.1056 |
| 6.1097 | 12.01 | 31000 | 6.0780 |
| 6.0713 | 12.2 | 31500 | 6.0061 |
| 6.1119 | 12.39 | 32000 | 6.1671 |
| 6.0744 | 12.59 | 32500 | 6.1235 |
| 6.082 | 12.78 | 33000 | 6.0905 |
| 6.0962 | 12.97 | 33500 | 6.0936 |
| 6.1265 | 13.17 | 34000 | 6.0786 |
| 6.0941 | 13.36 | 34500 | 6.0944 |
| 6.0694 | 13.56 | 35000 | 6.0988 |
| 6.0784 | 13.75 | 35500 | 6.1221 |
| 6.0749 | 13.94 | 36000 | 6.0961 |
| 6.103 | 14.14 | 36500 | 6.0141 |
| 6.0944 | 14.33 | 37000 | 6.0812 |
| 6.0869 | 14.52 | 37500 | 6.0423 |
| 6.0986 | 14.72 | 38000 | 6.1194 |
| 6.0759 | 14.91 | 38500 | 6.0504 |
| 6.0592 | 15.1 | 39000 | 6.1483 |
| 6.0624 | 15.3 | 39500 | 6.0978 |
| 6.077 | 15.49 | 40000 | 6.0585 |
| 6.0581 | 15.69 | 40500 | 6.0355 |
| 6.0612 | 15.88 | 41000 | 6.0742 |
| 6.0536 | 16.07 | 41500 | 6.1443 |
| 6.057 | 16.27 | 42000 | 6.0574 |
| 6.0419 | 16.46 | 42500 | 6.0372 |
| 6.0076 | 16.65 | 43000 | 6.0624 |
| 6.0773 | 16.85 | 43500 | 6.0624 |
| 6.0317 | 17.04 | 44000 | 6.0738 |
| 6.0248 | 17.23 | 44500 | 6.0959 |
| 6.0459 | 17.43 | 45000 | 6.0636 |
| 6.0332 | 17.62 | 45500 | 6.0536 |
| 6.0319 | 17.82 | 46000 | 6.0659 |
| 6.0363 | 18.01 | 46500 | 6.0154 |
| 6.0001 | 18.2 | 47000 | 6.0082 |
| 5.9719 | 18.4 | 47500 | 6.0778 |
| 6.0332 | 18.59 | 48000 | 6.0491 |
| 6.0061 | 18.78 | 48500 | 5.9457 |
| 5.9675 | 18.98 | 49000 | 5.9768 |
| 5.9749 | 19.17 | 49500 | 6.0173 |
| 5.9944 | 19.36 | 50000 | 5.9981 |
| 6.0248 | 19.56 | 50500 | 5.9255 |
| 5.9774 | 19.75 | 51000 | 6.0158 |
| 5.9768 | 19.95 | 51500 | 5.9443 |
| 5.9499 | 20.14 | 52000 | 5.9708 |
| 5.979 | 20.33 | 52500 | 5.9296 |
| 5.9881 | 20.53 | 53000 | 5.9506 |
| 5.9775 | 20.72 | 53500 | 5.9266 |
| 5.9361 | 20.91 | 54000 | 5.9270 |
| 5.9427 | 21.11 | 54500 | 5.9461 |
| 5.9396 | 21.3 | 55000 | 5.9156 |
| 5.9596 | 21.49 | 55500 | 5.9185 |
| 5.9079 | 21.69 | 56000 | 5.9630 |
| 5.9579 | 21.88 | 56500 | 5.8991 |
| 5.9564 | 22.08 | 57000 | 5.9097 |
| 5.9225 | 22.27 | 57500 | 5.9452 |
| 5.9202 | 22.46 | 58000 | 5.8680 |
| 5.9103 | 22.66 | 58500 | 5.8985 |
| 5.9106 | 22.85 | 59000 | 5.8656 |
| 5.913 | 23.04 | 59500 | 5.8292 |
| 5.9249 | 23.24 | 60000 | 5.8420 |
| 5.8948 | 23.43 | 60500 | 5.8782 |
| 5.9273 | 23.63 | 61000 | 5.8952 |
| 5.8788 | 23.82 | 61500 | 5.8438 |
| 5.898 | 24.01 | 62000 | 5.8705 |
| 5.8809 | 24.21 | 62500 | 5.7648 |
| 5.8953 | 24.4 | 63000 | 5.8283 |
| 5.9177 | 24.59 | 63500 | 5.7760 |
| 5.8809 | 24.79 | 64000 | 5.8144 |
| 5.8994 | 24.98 | 64500 | 5.8348 |
| 5.8817 | 25.17 | 65000 | 5.8334 |
| 5.8701 | 25.37 | 65500 | 5.7240 |
| 5.8518 | 25.56 | 66000 | 5.8187 |
| 5.8406 | 25.76 | 66500 | 5.8133 |
| 5.859 | 25.95 | 67000 | 5.7331 |
| 5.8627 | 26.14 | 67500 | 5.7711 |
| 5.8727 | 26.34 | 68000 | 5.7598 |
| 5.8295 | 26.53 | 68500 | 5.8364 |
| 5.8216 | 26.72 | 69000 | 5.7586 |
| 5.8458 | 26.92 | 69500 | 5.7413 |
| 5.8597 | 27.11 | 70000 | 5.7444 |
| 5.842 | 27.3 | 70500 | 5.7288 |
| 5.8254 | 27.5 | 71000 | 5.7811 |
| 5.8285 | 27.69 | 71500 | 5.7120 |
| 5.8106 | 27.89 | 72000 | 5.6733 |
| 5.8073 | 28.08 | 72500 | 5.7163 |
| 5.7932 | 28.27 | 73000 | 5.7258 |
| 5.7919 | 28.47 | 73500 | 5.6985 |
| 5.7881 | 28.66 | 74000 | 5.7321 |
| 5.7942 | 28.85 | 74500 | 5.6545 |
| 5.8011 | 29.05 | 75000 | 5.6799 |
| 5.8071 | 29.24 | 75500 | 5.7270 |
| 5.784 | 29.43 | 76000 | 5.6806 |
| 5.7774 | 29.63 | 76500 | 5.6918 |
| 5.7345 | 29.82 | 77000 | 5.7138 |
| 5.7863 | 30.02 | 77500 | 5.7072 |
| 5.7774 | 30.21 | 78000 | 5.6649 |
| 5.7954 | 30.4 | 78500 | 5.6150 |
| 5.7624 | 30.6 | 79000 | 5.6398 |
| 5.7296 | 30.79 | 79500 | 5.6216 |
| 5.7053 | 30.98 | 80000 | 5.5447 |
| 5.7688 | 31.18 | 80500 | 5.6245 |
| 5.7254 | 31.37 | 81000 | 5.6100 |
| 5.755 | 31.56 | 81500 | 5.6257 |
| 5.7854 | 31.76 | 82000 | 5.6330 |
| 5.7351 | 31.95 | 82500 | 5.5588 |
| 5.7233 | 32.15 | 83000 | 5.5590 |
| 5.7225 | 32.34 | 83500 | 5.5480 |
| 5.7451 | 32.53 | 84000 | 5.6075 |
| 5.6989 | 32.73 | 84500 | 5.5447 |
| 5.7245 | 32.92 | 85000 | 5.5353 |
| 5.7132 | 33.11 | 85500 | 5.5563 |
| 5.7187 | 33.31 | 86000 | 5.5177 |
| 5.7203 | 33.5 | 86500 | 5.5630 |
| 5.6948 | 33.69 | 87000 | 5.5357 |
| 5.7118 | 33.89 | 87500 | 5.5367 |
| 5.6763 | 34.08 | 88000 | 5.4824 |
| 5.6923 | 34.28 | 88500 | 5.4489 |
| 5.6803 | 34.47 | 89000 | 5.5113 |
| 5.6977 | 34.66 | 89500 | 5.4829 |
| 5.6834 | 34.86 | 90000 | 5.4640 |
| 5.6596 | 35.05 | 90500 | 5.4816 |
| 5.6513 | 35.24 | 91000 | 5.4522 |
| 5.6687 | 35.44 | 91500 | 5.3984 |
| 5.6866 | 35.63 | 92000 | 5.4538 |
| 5.6479 | 35.82 | 92500 | 5.3811 |
| 5.6308 | 36.02 | 93000 | 5.3664 |
| 5.6299 | 36.21 | 93500 | 5.3788 |
| 5.6263 | 36.41 | 94000 | 5.3367 |
| 5.6305 | 36.6 | 94500 | 5.4058 |
| 5.6065 | 36.79 | 95000 | 5.3011 |
| 5.6236 | 36.99 | 95500 | 5.3301 |
| 5.6191 | 37.18 | 96000 | 5.3643 |
| 5.5991 | 37.37 | 96500 | 5.3917 |
| 5.6044 | 37.57 | 97000 | 5.3284 |
| 5.6001 | 37.76 | 97500 | 5.3199 |
| 5.5758 | 37.96 | 98000 | 5.2644 |
| 5.567 | 38.15 | 98500 | 5.3054 |
| 5.5404 | 38.34 | 99000 | 5.3473 |
| 5.5677 | 38.54 | 99500 | 5.2537 |
| 5.5676 | 38.73 | 100000 | 5.3135 |
| 5.5608 | 38.92 | 100500 | 5.2030 |
| 5.5523 | 39.12 | 101000 | 5.2808 |
| 5.545 | 39.31 | 101500 | 5.2114 |
| 5.5117 | 39.5 | 102000 | 5.2167 |
| 5.5403 | 39.7 | 102500 | 5.1930 |
| 5.5166 | 39.89 | 103000 | 5.1737 |
| 5.5267 | 40.09 | 103500 | 5.2112 |
| 5.5116 | 40.28 | 104000 | 5.2007 |
| 5.4874 | 40.47 | 104500 | 5.1654 |
| 5.5144 | 40.67 | 105000 | 5.1378 |
| 5.4683 | 40.86 | 105500 | 5.2039 |
| 5.4978 | 41.05 | 106000 | 5.1436 |
| 5.4781 | 41.25 | 106500 | 5.1642 |
| 5.5052 | 41.44 | 107000 | 5.1245 |
| 5.4844 | 41.63 | 107500 | 5.1809 |
| 5.4853 | 41.83 | 108000 | 5.0201 |
| 5.4814 | 42.02 | 108500 | 5.1054 |
| 5.4529 | 42.22 | 109000 | 5.1489 |
| 5.4804 | 42.41 | 109500 | 5.0555 |
| 5.4534 | 42.6 | 110000 | 5.0705 |
| 5.4401 | 42.8 | 110500 | 5.0464 |
| 5.45 | 42.99 | 111000 | 5.0069 |
| 5.4547 | 43.18 | 111500 | 5.0655 |
| 5.4212 | 43.38 | 112000 | 5.0563 |
| 5.3913 | 43.57 | 112500 | 5.0514 |
| 5.4268 | 43.76 | 113000 | 4.9936 |
| 5.3926 | 43.96 | 113500 | 5.0101 |
| 5.3882 | 44.15 | 114000 | 5.0294 |
| 5.4014 | 44.35 | 114500 | 5.0560 |
| 5.417 | 44.54 | 115000 | 4.9827 |
| 5.4012 | 44.73 | 115500 | 4.9811 |
| 5.3697 | 44.93 | 116000 | 4.9288 |
| 5.3991 | 45.12 | 116500 | 4.9576 |
| 5.3711 | 45.31 | 117000 | 4.9339 |
| 5.4081 | 45.51 | 117500 | 4.9250 |
| 5.3531 | 45.7 | 118000 | 4.8725 |
| 5.3826 | 45.89 | 118500 | 4.9501 |
| 5.3798 | 46.09 | 119000 | 4.9958 |
| 5.3415 | 46.28 | 119500 | 4.9327 |
| 5.3786 | 46.48 | 120000 | 4.8616 |
| 5.3862 | 46.67 | 120500 | 4.8863 |
| 5.3606 | 46.86 | 121000 | 4.9151 |
| 5.3605 | 47.06 | 121500 | 4.9053 |
| 5.3455 | 47.25 | 122000 | 4.9110 |
| 5.3264 | 47.44 | 122500 | 4.8673 |
| 5.3409 | 47.64 | 123000 | 4.8346 |
| 5.3567 | 47.83 | 123500 | 4.8996 |
| 5.3103 | 48.02 | 124000 | 4.8342 |
| 5.3244 | 48.22 | 124500 | 4.8464 |
| 5.3324 | 48.41 | 125000 | 4.8729 |
| 5.3273 | 48.61 | 125500 | 4.8125 |
| 5.31 | 48.8 | 126000 | 4.8519 |
| 5.2872 | 48.99 | 126500 | 4.8693 |
| 5.3066 | 49.19 | 127000 | 4.8600 |
| 5.302 | 49.38 | 127500 | 4.8171 |
| 5.2875 | 49.57 | 128000 | 4.7911 |
| 5.2806 | 49.77 | 128500 | 4.8004 |
| 5.3108 | 49.96 | 129000 | 4.7977 |
| 5.2741 | 50.15 | 129500 | 4.8427 |
| 5.2603 | 50.35 | 130000 | 4.7938 |
| 5.282 | 50.54 | 130500 | 4.7997 |
| 5.2835 | 50.74 | 131000 | 4.8173 |
| 5.2628 | 50.93 | 131500 | 4.7610 |
| 5.3034 | 51.12 | 132000 | 4.7908 |
| 5.2635 | 51.32 | 132500 | 4.7676 |
| 5.3269 | 51.51 | 133000 | 4.8245 |
| 5.242 | 51.7 | 133500 | 4.7265 |
| 5.2516 | 51.9 | 134000 | 4.7588 |
| 5.2641 | 52.09 | 134500 | 4.7695 |
| 5.2493 | 52.29 | 135000 | 4.7327 |
| 5.2334 | 52.48 | 135500 | 4.7206 |
| 5.2483 | 52.67 | 136000 | 4.7289 |
| 5.2133 | 52.87 | 136500 | 4.8136 |
| 5.2495 | 53.06 | 137000 | 4.6620 |
| 5.2489 | 53.25 | 137500 | 4.7118 |
| 5.2415 | 53.45 | 138000 | 4.7011 |
| 5.231 | 53.64 | 138500 | 4.7295 |
| 5.2211 | 53.83 | 139000 | 4.7199 |
| 5.2327 | 54.03 | 139500 | 4.7146 |
| 5.2053 | 54.22 | 140000 | 4.6871 |
| 5.2117 | 54.42 | 140500 | 4.7097 |
| 5.1929 | 54.61 | 141000 | 4.6923 |
| 5.2199 | 54.8 | 141500 | 4.7291 |
| 5.211 | 55.0 | 142000 | 4.7088 |
| 5.2482 | 55.19 | 142500 | 4.6551 |
| 5.2043 | 55.38 | 143000 | 4.7244 |
| 5.1799 | 55.58 | 143500 | 4.7225 |
| 5.2053 | 55.77 | 144000 | 4.6948 |
| 5.1745 | 55.96 | 144500 | 4.7157 |
| 5.1673 | 56.16 | 145000 | 4.6555 |
| 5.2122 | 56.35 | 145500 | 4.6842 |
| 5.1701 | 56.55 | 146000 | 4.6581 |
| 5.2107 | 56.74 | 146500 | 4.6245 |
| 5.2454 | 56.93 | 147000 | 4.6399 |
| 5.2134 | 57.13 | 147500 | 4.6585 |
| 5.1753 | 57.32 | 148000 | 4.6233 |
| 5.1355 | 57.51 | 148500 | 4.6543 |
| 5.2032 | 57.71 | 149000 | 4.6640 |
| 5.1714 | 57.9 | 149500 | 4.6635 |
| 5.1769 | 58.09 | 150000 | 4.6256 |
| 5.1632 | 58.29 | 150500 | 4.6456 |
| 5.1556 | 58.48 | 151000 | 4.6647 |
| 5.1671 | 58.68 | 151500 | 4.6548 |
| 5.1482 | 58.87 | 152000 | 4.6107 |
| 5.104 | 59.06 | 152500 | 4.6320 |
| 5.1545 | 59.26 | 153000 | 4.6035 |
| 5.1338 | 59.45 | 153500 | 4.6512 |
| 5.1518 | 59.64 | 154000 | 4.6424 |
| 5.1937 | 59.84 | 154500 | 4.6123 |
| 5.1576 | 60.03 | 155000 | 4.6077 |
| 5.1643 | 60.22 | 155500 | 4.5990 |
| 5.1371 | 60.42 | 156000 | 4.6025 |
| 5.1535 | 60.61 | 156500 | 4.5939 |
| 5.128 | 60.81 | 157000 | 4.5716 |
| 5.1711 | 61.0 | 157500 | 4.5895 |
| 5.1265 | 61.19 | 158000 | 4.6367 |
| 5.1131 | 61.39 | 158500 | 4.6565 |
| 5.1239 | 61.58 | 159000 | 4.6194 |
| 5.1089 | 61.77 | 159500 | 4.6214 |
| 5.1052 | 61.97 | 160000 | 4.5982 |
| 5.1336 | 62.16 | 160500 | 4.5861 |
| 5.1081 | 62.35 | 161000 | 4.5343 |
| 5.1706 | 62.55 | 161500 | 4.5480 |
| 5.0848 | 62.74 | 162000 | 4.5500 |
| 5.0848 | 62.94 | 162500 | 4.5965 |
| 5.0849 | 63.13 | 163000 | 4.5737 |
| 5.1267 | 63.32 | 163500 | 4.5680 |
| 5.124 | 63.52 | 164000 | 4.5341 |
| 5.1212 | 63.71 | 164500 | 4.5154 |
| 5.1214 | 63.9 | 165000 | 4.5329 |
| 5.117 | 64.1 | 165500 | 4.4988 |
| 5.0578 | 64.29 | 166000 | 4.5582 |
| 5.0705 | 64.48 | 166500 | 4.5346 |
| 5.0814 | 64.68 | 167000 | 4.5978 |
| 5.0959 | 64.87 | 167500 | 4.5628 |
| 5.0601 | 65.07 | 168000 | 4.5449 |
| 5.1112 | 65.26 | 168500 | 4.5499 |
| 5.0946 | 65.45 | 169000 | 4.5344 |
| 5.0965 | 65.65 | 169500 | 4.5324 |
| 5.0958 | 65.84 | 170000 | 4.4937 |
| 5.081 | 66.03 | 170500 | 4.5009 |
| 5.0506 | 66.23 | 171000 | 4.5145 |
| 5.0729 | 66.42 | 171500 | 4.4779 |
| 5.0628 | 66.62 | 172000 | 4.5531 |
| 5.0674 | 66.81 | 172500 | 4.5023 |
| 5.0634 | 67.0 | 173000 | 4.5124 |
| 5.0847 | 67.2 | 173500 | 4.5203 |
| 5.0729 | 67.39 | 174000 | 4.4887 |
| 5.0683 | 67.58 | 174500 | 4.5113 |
| 5.0596 | 67.78 | 175000 | 4.4898 |
| 5.0528 | 67.97 | 175500 | 4.5359 |
| 5.0595 | 68.16 | 176000 | 4.5139 |
| 5.0864 | 68.36 | 176500 | 4.5260 |
| 5.0241 | 68.55 | 177000 | 4.5325 |
| 5.1038 | 68.75 | 177500 | 4.4692 |
| 5.073 | 68.94 | 178000 | 4.5429 |
| 5.0667 | 69.13 | 178500 | 4.4781 |
| 5.041 | 69.33 | 179000 | 4.5035 |
| 5.033 | 69.52 | 179500 | 4.5177 |
| 5.0369 | 69.71 | 180000 | 4.4948 |
| 5.0265 | 69.91 | 180500 | 4.5544 |
| 5.0687 | 70.1 | 181000 | 4.5048 |
| 5.0464 | 70.29 | 181500 | 4.4532 |
| 5.0502 | 70.49 | 182000 | 4.5503 |
| 4.9993 | 70.68 | 182500 | 4.5011 |
| 5.041 | 70.88 | 183000 | 4.4769 |
| 5.0603 | 71.07 | 183500 | 4.4642 |
| 5.0448 | 71.26 | 184000 | 4.4527 |
| 5.0702 | 71.46 | 184500 | 4.4807 |
| 5.0418 | 71.65 | 185000 | 4.4724 |
| 4.9976 | 71.84 | 185500 | 4.4915 |
| 5.0502 | 72.04 | 186000 | 4.4591 |
| 5.0438 | 72.23 | 186500 | 4.4292 |
| 4.9812 | 72.42 | 187000 | 4.4252 |
| 5.0377 | 72.62 | 187500 | 4.4512 |
| 5.0117 | 72.81 | 188000 | 4.4617 |
| 4.976 | 73.01 | 188500 | 4.5048 |
| 5.05 | 73.2 | 189000 | 4.4400 |
| 5.0306 | 73.39 | 189500 | 4.4209 |
| 5.0648 | 73.59 | 190000 | 4.4707 |
| 5.0097 | 73.78 | 190500 | 4.4453 |
| 5.0611 | 73.97 | 191000 | 4.4601 |
| 5.0091 | 74.17 | 191500 | 4.4231 |
| 5.0529 | 74.36 | 192000 | 4.4110 |
| 5.0221 | 74.55 | 192500 | 4.5013 |
| 5.0156 | 74.75 | 193000 | 4.4717 |
| 5.0442 | 74.94 | 193500 | 4.4585 |
| 5.0229 | 75.14 | 194000 | 4.4601 |
| 4.9883 | 75.33 | 194500 | 4.4740 |
| 4.9963 | 75.52 | 195000 | 4.4663 |
| 4.9886 | 75.72 | 195500 | 4.4237 |
| 4.9753 | 75.91 | 196000 | 4.4762 |
| 4.981 | 76.1 | 196500 | 4.4573 |
| 4.9901 | 76.3 | 197000 | 4.4376 |
| 5.005 | 76.49 | 197500 | 4.4859 |
| 5.0254 | 76.68 | 198000 | 4.4181 |
| 5.0067 | 76.88 | 198500 | 4.4582 |
| 5.0097 | 77.07 | 199000 | 4.4494 |
| 4.9815 | 77.27 | 199500 | 4.4382 |
| 5.0029 | 77.46 | 200000 | 4.4780 |
| 4.9659 | 77.65 | 200500 | 4.4009 |
| 4.9889 | 77.85 | 201000 | 4.3664 |
| 4.9916 | 78.04 | 201500 | 4.4319 |
| 4.9715 | 78.23 | 202000 | 4.4390 |
| 4.9815 | 78.43 | 202500 | 4.4593 |
| 4.972 | 78.62 | 203000 | 4.4620 |
| 5.0164 | 78.81 | 203500 | 4.4247 |
| 4.9608 | 79.01 | 204000 | 4.4031 |
| 4.9606 | 79.2 | 204500 | 4.4301 |
| 4.9922 | 79.4 | 205000 | 4.4147 |
| 4.9825 | 79.59 | 205500 | 4.4489 |
| 4.9719 | 79.78 | 206000 | 4.4155 |
| 4.9663 | 79.98 | 206500 | 4.4514 |
| 4.9663 | 80.17 | 207000 | 4.4439 |
| 4.9351 | 80.36 | 207500 | 4.4235 |
| 5.0248 | 80.56 | 208000 | 4.4122 |
| 4.9836 | 80.75 | 208500 | 4.4261 |
| 4.9881 | 80.95 | 209000 | 4.4228 |
| 5.0021 | 81.14 | 209500 | 4.4588 |
| 4.9508 | 81.33 | 210000 | 4.3826 |
| 4.9729 | 81.53 | 210500 | 4.4254 |
| 4.9746 | 81.72 | 211000 | 4.3951 |
| 4.9771 | 81.91 | 211500 | 4.4301 |
| 4.9988 | 82.11 | 212000 | 4.3889 |
| 5.006 | 82.3 | 212500 | 4.4137 |
| 4.9662 | 82.49 | 213000 | 4.4597 |
| 4.9476 | 82.69 | 213500 | 4.4484 |
| 4.9801 | 82.88 | 214000 | 4.4676 |
| 4.9605 | 83.08 | 214500 | 4.3832 |
| 4.9617 | 83.27 | 215000 | 4.3933 |
| 4.9565 | 83.46 | 215500 | 4.4156 |
| 4.9193 | 83.66 | 216000 | 4.4221 |
| 4.942 | 83.85 | 216500 | 4.4150 |
| 4.9504 | 84.04 | 217000 | 4.4034 |
| 4.9469 | 84.24 | 217500 | 4.4364 |
| 4.9519 | 84.43 | 218000 | 4.4306 |
| 4.9555 | 84.62 | 218500 | 4.3787 |
| 4.9558 | 84.82 | 219000 | 4.4363 |
| 4.94 | 85.01 | 219500 | 4.4151 |
| 4.9441 | 85.21 | 220000 | 4.3747 |
| 4.9654 | 85.4 | 220500 | 4.3779 |
| 4.9352 | 85.59 | 221000 | 4.4293 |
| 4.9743 | 85.79 | 221500 | 4.3823 |
| 4.9536 | 85.98 | 222000 | 4.4049 |
| 4.9426 | 86.17 | 222500 | 4.3719 |
| 4.9363 | 86.37 | 223000 | 4.3414 |
| 4.9093 | 86.56 | 223500 | 4.3717 |
| 4.935 | 86.75 | 224000 | 4.3860 |
| 4.9204 | 86.95 | 224500 | 4.3939 |
| 4.926 | 87.14 | 225000 | 4.4328 |
| 4.9291 | 87.34 | 225500 | 4.4435 |
| 4.9162 | 87.53 | 226000 | 4.4062 |
| 4.9298 | 87.72 | 226500 | 4.3990 |
| 4.9743 | 87.92 | 227000 | 4.4284 |
| 4.9135 | 88.11 | 227500 | 4.3740 |
| 4.9138 | 88.3 | 228000 | 4.3697 |
| 4.9686 | 88.5 | 228500 | 4.3498 |
| 4.9263 | 88.69 | 229000 | 4.3457 |
| 4.9453 | 88.88 | 229500 | 4.3315 |
| 4.9329 | 89.08 | 230000 | 4.3874 |
| 4.9277 | 89.27 | 230500 | 4.3627 |
| 4.8942 | 89.47 | 231000 | 4.3674 |
| 4.9496 | 89.66 | 231500 | 4.4107 |
| 4.924 | 89.85 | 232000 | 4.3855 |
| 4.9825 | 90.05 | 232500 | 4.3674 |
| 4.9365 | 90.24 | 233000 | 4.3662 |
| 4.9123 | 90.43 | 233500 | 4.3669 |
| 4.9555 | 90.63 | 234000 | 4.3668 |
| 4.9394 | 90.82 | 234500 | 4.3677 |
| 4.9672 | 91.01 | 235000 | 4.3339 |
| 4.9493 | 91.21 | 235500 | 4.3554 |
| 4.9114 | 91.4 | 236000 | 4.3507 |
| 4.9374 | 91.6 | 236500 | 4.3447 |
| 4.9288 | 91.79 | 237000 | 4.3988 |
| 4.9156 | 91.98 | 237500 | 4.3785 |
| 4.9226 | 92.18 | 238000 | 4.3322 |
| 4.9223 | 92.37 | 238500 | 4.3461 |
| 4.9051 | 92.56 | 239000 | 4.3603 |
| 4.9341 | 92.76 | 239500 | 4.4139 |
| 4.9285 | 92.95 | 240000 | 4.3757 |
| 4.9506 | 93.14 | 240500 | 4.3456 |
| 4.92 | 93.34 | 241000 | 4.3492 |
| 4.9027 | 93.53 | 241500 | 4.3982 |
| 4.9366 | 93.73 | 242000 | 4.3651 |
| 4.9072 | 93.92 | 242500 | 4.3186 |
| 4.9441 | 94.11 | 243000 | 4.3560 |
| 4.874 | 94.31 | 243500 | 4.3749 |
| 4.9246 | 94.5 | 244000 | 4.3345 |
| 4.8971 | 94.69 | 244500 | 4.3497 |
| 4.9234 | 94.89 | 245000 | 4.4110 |
| 4.9396 | 95.08 | 245500 | 4.3645 |
| 4.8943 | 95.27 | 246000 | 4.3204 |
| 4.9194 | 95.47 | 246500 | 4.4034 |
| 4.914 | 95.66 | 247000 | 4.3936 |
| 4.9376 | 95.86 | 247500 | 4.3477 |
| 4.9042 | 96.05 | 248000 | 4.4062 |
| 4.8946 | 96.24 | 248500 | 4.4115 |
| 4.8959 | 96.44 | 249000 | 4.3983 |
| 4.9408 | 96.63 | 249500 | 4.3633 |
| 4.9039 | 96.82 | 250000 | 4.3486 |
| 4.9368 | 97.02 | 250500 | 4.3819 |
| 4.8793 | 97.21 | 251000 | 4.3586 |
| 4.9069 | 97.41 | 251500 | 4.3666 |
| 4.9339 | 97.6 | 252000 | 4.3911 |
| 4.9086 | 97.79 | 252500 | 4.3505 |
| 4.9132 | 97.99 | 253000 | 4.3878 |
| 4.9279 | 98.18 | 253500 | 4.3422 |
| 4.8955 | 98.37 | 254000 | 4.3913 |
| 4.8874 | 98.57 | 254500 | 4.3560 |
| 4.9026 | 98.76 | 255000 | 4.3189 |
| 4.9008 | 98.95 | 255500 | 4.4185 |
| 4.9023 | 99.15 | 256000 | 4.3197 |
| 4.8792 | 99.34 | 256500 | 4.3112 |
| 4.9193 | 99.54 | 257000 | 4.3886 |
| 4.9136 | 99.73 | 257500 | 4.3596 |
| 4.8953 | 99.92 | 258000 | 4.3615 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
DCU-NLP/electra-base-irish-cased-generator-v1 | [
"pytorch",
"electra",
"fill-mask",
"ga",
"transformers",
"irish",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2022-11-15T19:29:26Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: diabtest-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# diabtest-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
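As a starting point, here is a minimal text-generation sketch using the `transformers` pipeline API. The repository id below is an assumption — replace it with the actual location of this checkpoint — and the prompt is illustrative only.
```python
from transformers import pipeline

# Hypothetical repository id — replace with the actual location of this checkpoint.
generator = pipeline("text-generation", model="diabtest-ds")

print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```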
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.2+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Darkecho789/email-gen | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 530,
"warmup_steps": 53,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DarkestSky/distilbert-base-uncased-finetuned-ner | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T21:25:49Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 60 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 600,
"warmup_steps": 60,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Darkrider/covidbert_mednli | [
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: farsi_lastname_classifier_bert
results: []
---
# farsi_lastname_classifier_bert
This model is trained to classify Iranian last names.
To use it, type a last name in the space provided on the right and then click on "compute".
The model computes the probability of the last name being Persian.
The first request takes a few seconds (the model needs to be loaded first); subsequent attempts should take only milliseconds.
In practice the model can compute the results for an entire batch of data (last names) in a fraction of a second.
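A minimal sketch of scoring names programmatically with the `transformers` pipeline API. The repository id below is an assumption — replace it with the actual location of this checkpoint — and the example names are illustrative only.
```python
from transformers import pipeline

# Hypothetical repository id — replace with the actual location of this checkpoint.
classifier = pipeline("text-classification", model="farsi_lastname_classifier_bert")

# Scores a whole batch of last names at once.
print(classifier(["Hosseini", "Smith", "Ahmadi"]))
```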
It achieves the following results on the evaluation set:
- Loss: 0.0863
- Accuracy: 0.976
## Model description
The model is based on BERT (`bert-base-cased`).
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 12 | 0.6325 | 0.588 |
| No log | 2.0 | 24 | 0.3414 | 0.952 |
| No log | 3.0 | 36 | 0.2496 | 0.97 |
| No log | 4.0 | 48 | 0.1674 | 0.976 |
| No log | 5.0 | 60 | 0.1160 | 0.976 |
| No log | 6.0 | 72 | 0.0917 | 0.972 |
| No log | 7.0 | 84 | 0.0896 | 0.974 |
| No log | 8.0 | 96 | 0.0874 | 0.974 |
| No log | 9.0 | 108 | 0.0869 | 0.974 |
| No log | 10.0 | 120 | 0.0863 | 0.976 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Darren/darren | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T21:32:50Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 530,
"warmup_steps": 53,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DataikuNLP/distiluse-base-multilingual-cased-v1 | [
"pytorch",
"distilbert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | {
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
language:
- en
- hi
- multilingual
license: cc-by-sa-4.0
---
# en-hi-codemixed
This is a masked language model, based on the CamemBERT model architecture.
en-hi-codemixed model was trained from scratch on English, Hindi, and codemixed English-Hindi
corpora for 40 epochs.
The corpora used consists of primarily web crawled data, including codemixed tweets, and focuses on conversational
language and covid-19 pandemic.
|
DavidAMcIntosh/DialoGPT-small-rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-15T22:19:31Z | ---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# NAT (mini variant)
NAT-Mini trained on ImageNet-1K at 224x224 resolution.
It was introduced in the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
NAT is a hierarchical vision transformer based on Neighborhood Attention (NA).
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA is a sliding-window attention patterns, and as a result is highly flexible and maintains translational equivariance.
NA is implemented in PyTorch implementations through its extension, [NATTEN](https://github.com/SHI-Labs/NATTEN/).

[Source](https://paperswithcode.com/paper/neighborhood-attention-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=nat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, NatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-mini-in1k-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/nat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022neighborhood,
title = {Neighborhood Attention Transformer},
author = {Ali Hassani and Steven Walton and Jiachen Li and Shen Li and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2204.07143},
eprint = {2204.07143},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
``` |
DavidAMcIntosh/small-rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git).
The original checkpoint is avaliable at [princeton-nlp/efficient_mlm_m0.15](https://huggingface.co/princeton-nlp/efficient_mlm_m0.15). Unfortunately this checkpoint depends on code that isn't part of the official `transformers`
library. Additionally, the checkpoints contains unused weights due to a bug.
This checkpoint fixes the unused weights issue and uses the `RobertaPreLayerNorm` model from the `transformers`
library.
|
Davlan/xlm-roberta-base-finetuned-luganda | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2022-11-16T01:15:27Z | This model is trained by NLP_team for the Advanced NLP course, 2022.
The model was trained for the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/1911.00536).
|
Declan/Breitbart_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
metrics:
- wer
model-index:
- name: hubert_zeroth_gpu_freeze
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: zeroth_korean_asr
type: zeroth_korean_asr
config: clean
split: train
args: clean
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert_zeroth_gpu_freeze
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8310
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 26.2877 | 0.14 | 100 | 10.6810 | 1.0 |
| 6.4696 | 0.29 | 200 | 4.8799 | 1.0 |
| 4.841 | 0.43 | 300 | 4.8521 | 1.0 |
| 4.8366 | 0.57 | 400 | 4.8736 | 1.0 |
| 4.8311 | 0.72 | 500 | 4.8559 | 1.0 |
| 4.8383 | 0.86 | 600 | 4.8601 | 1.0 |
| 4.8288 | 1.01 | 700 | 4.8474 | 1.0 |
| 4.8283 | 1.15 | 800 | 4.8436 | 1.0 |
| 4.8283 | 1.29 | 900 | 4.8440 | 1.0 |
| 4.8299 | 1.44 | 1000 | 4.8518 | 1.0 |
| 4.8274 | 1.58 | 1100 | 4.8406 | 1.0 |
| 4.8308 | 1.72 | 1200 | 4.8384 | 1.0 |
| 4.8316 | 1.87 | 1300 | 4.8427 | 1.0 |
| 4.8298 | 2.01 | 1400 | 4.8423 | 1.0 |
| 4.8291 | 2.16 | 1500 | 4.8481 | 1.0 |
| 4.8326 | 2.3 | 1600 | 4.8426 | 1.0 |
| 4.83 | 2.44 | 1700 | 4.8362 | 1.0 |
| 4.8286 | 2.59 | 1800 | 4.8424 | 1.0 |
| 4.8269 | 2.73 | 1900 | 4.8362 | 1.0 |
| 4.8234 | 2.87 | 2000 | 4.8452 | 1.0 |
| 4.8179 | 3.02 | 2100 | 4.8416 | 1.0 |
| 4.825 | 3.16 | 2200 | 4.8519 | 1.0 |
| 4.8185 | 3.3 | 2300 | 4.8384 | 1.0 |
| 4.827 | 3.45 | 2400 | 4.8519 | 1.0 |
| 4.8316 | 3.59 | 2500 | 4.8467 | 1.0 |
| 4.825 | 3.74 | 2600 | 4.8465 | 1.0 |
| 4.8246 | 3.88 | 2700 | 4.8422 | 1.0 |
| 4.8228 | 4.02 | 2800 | 4.8326 | 1.0 |
| 4.8277 | 4.17 | 2900 | 4.8353 | 1.0 |
| 4.822 | 4.31 | 3000 | 4.8349 | 1.0 |
| 4.82 | 4.45 | 3100 | 4.8395 | 1.0 |
| 4.8252 | 4.6 | 3200 | 4.8350 | 1.0 |
| 4.8283 | 4.74 | 3300 | 4.8377 | 1.0 |
| 4.8229 | 4.89 | 3400 | 4.8344 | 1.0 |
| 4.8264 | 5.03 | 3500 | 4.8352 | 1.0 |
| 4.8237 | 5.17 | 3600 | 4.8337 | 1.0 |
| 4.8271 | 5.32 | 3700 | 4.8385 | 1.0 |
| 4.8332 | 5.46 | 3800 | 4.8392 | 1.0 |
| 4.8189 | 5.6 | 3900 | 4.8353 | 1.0 |
| 4.8209 | 5.75 | 4000 | 4.8355 | 1.0 |
| 4.8179 | 5.89 | 4100 | 4.8297 | 1.0 |
| 4.821 | 6.03 | 4200 | 4.8505 | 1.0 |
| 4.8243 | 6.18 | 4300 | 4.8371 | 1.0 |
| 4.8224 | 6.32 | 4400 | 4.8378 | 1.0 |
| 4.8261 | 6.47 | 4500 | 4.8368 | 1.0 |
| 4.8233 | 6.61 | 4600 | 4.8326 | 1.0 |
| 4.8252 | 6.75 | 4700 | 4.8364 | 1.0 |
| 4.8247 | 6.9 | 4800 | 4.8438 | 1.0 |
| 4.8139 | 7.04 | 4900 | 4.8435 | 1.0 |
| 4.8204 | 7.18 | 5000 | 4.8398 | 1.0 |
| 4.8197 | 7.33 | 5100 | 4.8382 | 1.0 |
| 4.82 | 7.47 | 5200 | 4.8371 | 1.0 |
| 4.8266 | 7.61 | 5300 | 4.8431 | 1.0 |
| 4.826 | 7.76 | 5400 | 4.8390 | 1.0 |
| 4.8216 | 7.9 | 5500 | 4.8381 | 1.0 |
| 4.82 | 8.05 | 5600 | 4.8339 | 1.0 |
| 4.8281 | 8.19 | 5700 | 4.8316 | 1.0 |
| 4.8246 | 8.33 | 5800 | 4.8361 | 1.0 |
| 4.8169 | 8.48 | 5900 | 4.8338 | 1.0 |
| 4.8175 | 8.62 | 6000 | 4.8341 | 1.0 |
| 4.8283 | 8.76 | 6100 | 4.8358 | 1.0 |
| 4.8232 | 8.91 | 6200 | 4.8356 | 1.0 |
| 4.8193 | 9.05 | 6300 | 4.8325 | 1.0 |
| 4.8146 | 9.2 | 6400 | 4.8297 | 1.0 |
| 4.8207 | 9.34 | 6500 | 4.8283 | 1.0 |
| 4.8221 | 9.48 | 6600 | 4.8334 | 1.0 |
| 4.8229 | 9.63 | 6700 | 4.8308 | 1.0 |
| 4.8239 | 9.77 | 6800 | 4.8352 | 1.0 |
| 4.8245 | 9.91 | 6900 | 4.8314 | 1.0 |
| 4.8173 | 10.06 | 7000 | 4.8300 | 1.0 |
| 4.8189 | 10.2 | 7100 | 4.8341 | 1.0 |
| 4.8209 | 10.34 | 7200 | 4.8287 | 1.0 |
| 4.823 | 10.49 | 7300 | 4.8320 | 1.0 |
| 4.8226 | 10.63 | 7400 | 4.8273 | 1.0 |
| 4.8241 | 10.78 | 7500 | 4.8308 | 1.0 |
| 4.8177 | 10.92 | 7600 | 4.8316 | 1.0 |
| 4.8235 | 11.06 | 7700 | 4.8274 | 1.0 |
| 4.8188 | 11.21 | 7800 | 4.8290 | 1.0 |
| 4.8183 | 11.35 | 7900 | 4.8355 | 1.0 |
| 4.8226 | 11.49 | 8000 | 4.8312 | 1.0 |
| 4.8209 | 11.64 | 8100 | 4.8307 | 1.0 |
| 4.8208 | 11.78 | 8200 | 4.8300 | 1.0 |
| 4.8221 | 11.93 | 8300 | 4.8281 | 1.0 |
| 4.82 | 12.07 | 8400 | 4.8306 | 1.0 |
| 4.8199 | 12.21 | 8500 | 4.8343 | 1.0 |
| 4.8212 | 12.36 | 8600 | 4.8314 | 1.0 |
| 4.8212 | 12.5 | 8700 | 4.8309 | 1.0 |
| 4.8228 | 12.64 | 8800 | 4.8310 | 1.0 |
| 4.8225 | 12.79 | 8900 | 4.8325 | 1.0 |
| 4.8146 | 12.93 | 9000 | 4.8364 | 1.0 |
| 4.8174 | 13.07 | 9100 | 4.8328 | 1.0 |
| 4.816 | 13.22 | 9200 | 4.8338 | 1.0 |
| 4.822 | 13.36 | 9300 | 4.8378 | 1.0 |
| 4.8253 | 13.51 | 9400 | 4.8411 | 1.0 |
| 4.8173 | 13.65 | 9500 | 4.8379 | 1.0 |
| 4.8227 | 13.79 | 9600 | 4.8374 | 1.0 |
| 4.8138 | 13.94 | 9700 | 4.8372 | 1.0 |
| 4.8191 | 14.08 | 9800 | 4.8327 | 1.0 |
| 4.8259 | 14.22 | 9900 | 4.8335 | 1.0 |
| 4.8098 | 14.37 | 10000 | 4.8301 | 1.0 |
| 4.8248 | 14.51 | 10100 | 4.8315 | 1.0 |
| 4.8199 | 14.66 | 10200 | 4.8304 | 1.0 |
| 4.8202 | 14.8 | 10300 | 4.8312 | 1.0 |
| 4.8159 | 14.94 | 10400 | 4.8316 | 1.0 |
| 4.8181 | 15.09 | 10500 | 4.8306 | 1.0 |
| 4.8217 | 15.23 | 10600 | 4.8350 | 1.0 |
| 4.8095 | 15.37 | 10700 | 4.8328 | 1.0 |
| 4.8249 | 15.52 | 10800 | 4.8329 | 1.0 |
| 4.8178 | 15.66 | 10900 | 4.8355 | 1.0 |
| 4.8192 | 15.8 | 11000 | 4.8342 | 1.0 |
| 4.8249 | 15.95 | 11100 | 4.8366 | 1.0 |
| 4.8096 | 16.09 | 11200 | 4.8385 | 1.0 |
| 4.8196 | 16.24 | 11300 | 4.8390 | 1.0 |
| 4.8271 | 16.38 | 11400 | 4.8352 | 1.0 |
| 4.8166 | 16.52 | 11500 | 4.8371 | 1.0 |
| 4.8206 | 16.67 | 11600 | 4.8348 | 1.0 |
| 4.817 | 16.81 | 11700 | 4.8347 | 1.0 |
| 4.8165 | 16.95 | 11800 | 4.8386 | 1.0 |
| 4.8159 | 17.1 | 11900 | 4.8376 | 1.0 |
| 4.8202 | 17.24 | 12000 | 4.8374 | 1.0 |
| 4.8157 | 17.39 | 12100 | 4.8370 | 1.0 |
| 4.8175 | 17.53 | 12200 | 4.8405 | 1.0 |
| 4.8189 | 17.67 | 12300 | 4.8321 | 1.0 |
| 4.8167 | 17.82 | 12400 | 4.8322 | 1.0 |
| 4.8229 | 17.96 | 12500 | 4.8353 | 1.0 |
| 4.8179 | 18.1 | 12600 | 4.8322 | 1.0 |
| 4.8183 | 18.25 | 12700 | 4.8379 | 1.0 |
| 4.8151 | 18.39 | 12800 | 4.8375 | 1.0 |
| 4.8211 | 18.53 | 12900 | 4.8355 | 1.0 |
| 4.8241 | 18.68 | 13000 | 4.8352 | 1.0 |
| 4.8185 | 18.82 | 13100 | 4.8350 | 1.0 |
| 4.8175 | 18.97 | 13200 | 4.8352 | 1.0 |
| 4.8094 | 19.11 | 13300 | 4.8337 | 1.0 |
| 4.8149 | 19.25 | 13400 | 4.8344 | 1.0 |
| 4.8131 | 19.4 | 13500 | 4.8386 | 1.0 |
| 4.8227 | 19.54 | 13600 | 4.8350 | 1.0 |
| 4.8175 | 19.68 | 13700 | 4.8325 | 1.0 |
| 4.8204 | 19.83 | 13800 | 4.8344 | 1.0 |
| 4.8228 | 19.97 | 13900 | 4.8322 | 1.0 |
| 4.8177 | 20.11 | 14000 | 4.8365 | 1.0 |
| 4.824 | 20.26 | 14100 | 4.8338 | 1.0 |
| 4.8151 | 20.4 | 14200 | 4.8342 | 1.0 |
| 4.8189 | 20.55 | 14300 | 4.8339 | 1.0 |
| 4.8115 | 20.69 | 14400 | 4.8325 | 1.0 |
| 4.8162 | 20.83 | 14500 | 4.8291 | 1.0 |
| 4.8182 | 20.98 | 14600 | 4.8321 | 1.0 |
| 4.8189 | 21.12 | 14700 | 4.8314 | 1.0 |
| 4.8123 | 21.26 | 14800 | 4.8318 | 1.0 |
| 4.8165 | 21.41 | 14900 | 4.8320 | 1.0 |
| 4.8247 | 21.55 | 15000 | 4.8315 | 1.0 |
| 4.8165 | 21.7 | 15100 | 4.8311 | 1.0 |
| 4.8151 | 21.84 | 15200 | 4.8352 | 1.0 |
| 4.8234 | 21.98 | 15300 | 4.8298 | 1.0 |
| 4.8136 | 22.13 | 15400 | 4.8282 | 1.0 |
| 4.8179 | 22.27 | 15500 | 4.8297 | 1.0 |
| 4.8128 | 22.41 | 15600 | 4.8307 | 1.0 |
| 4.8216 | 22.56 | 15700 | 4.8290 | 1.0 |
| 4.8177 | 22.7 | 15800 | 4.8286 | 1.0 |
| 4.8209 | 22.84 | 15900 | 4.8311 | 1.0 |
| 4.8183 | 22.99 | 16000 | 4.8276 | 1.0 |
| 4.8135 | 23.13 | 16100 | 4.8284 | 1.0 |
| 4.8116 | 23.28 | 16200 | 4.8279 | 1.0 |
| 4.8161 | 23.42 | 16300 | 4.8291 | 1.0 |
| 4.8202 | 23.56 | 16400 | 4.8292 | 1.0 |
| 4.8199 | 23.71 | 16500 | 4.8298 | 1.0 |
| 4.8203 | 23.85 | 16600 | 4.8293 | 1.0 |
| 4.8177 | 23.99 | 16700 | 4.8286 | 1.0 |
| 4.8153 | 24.14 | 16800 | 4.8273 | 1.0 |
| 4.8202 | 24.28 | 16900 | 4.8260 | 1.0 |
| 4.8189 | 24.43 | 17000 | 4.8289 | 1.0 |
| 4.8219 | 24.57 | 17100 | 4.8279 | 1.0 |
| 4.8148 | 24.71 | 17200 | 4.8284 | 1.0 |
| 4.8113 | 24.86 | 17300 | 4.8286 | 1.0 |
| 4.8133 | 25.0 | 17400 | 4.8299 | 1.0 |
| 4.8164 | 25.14 | 17500 | 4.8309 | 1.0 |
| 4.8231 | 25.29 | 17600 | 4.8279 | 1.0 |
| 4.8135 | 25.43 | 17700 | 4.8296 | 1.0 |
| 4.8118 | 25.57 | 17800 | 4.8293 | 1.0 |
| 4.8139 | 25.72 | 17900 | 4.8279 | 1.0 |
| 4.8144 | 25.86 | 18000 | 4.8281 | 1.0 |
| 4.8207 | 26.01 | 18100 | 4.8284 | 1.0 |
| 4.8096 | 26.15 | 18200 | 4.8285 | 1.0 |
| 4.8177 | 26.29 | 18300 | 4.8275 | 1.0 |
| 4.8221 | 26.44 | 18400 | 4.8288 | 1.0 |
| 4.8147 | 26.58 | 18500 | 4.8281 | 1.0 |
| 4.8148 | 26.72 | 18600 | 4.8281 | 1.0 |
| 4.819 | 26.87 | 18700 | 4.8282 | 1.0 |
| 4.8138 | 27.01 | 18800 | 4.8297 | 1.0 |
| 4.8094 | 27.16 | 18900 | 4.8291 | 1.0 |
| 4.8236 | 27.3 | 19000 | 4.8288 | 1.0 |
| 4.8208 | 27.44 | 19100 | 4.8292 | 1.0 |
| 4.816 | 27.59 | 19200 | 4.8279 | 1.0 |
| 4.8103 | 27.73 | 19300 | 4.8290 | 1.0 |
| 4.8152 | 27.87 | 19400 | 4.8296 | 1.0 |
| 4.8158 | 28.02 | 19500 | 4.8304 | 1.0 |
| 4.8122 | 28.16 | 19600 | 4.8293 | 1.0 |
| 4.8199 | 28.3 | 19700 | 4.8293 | 1.0 |
| 4.8185 | 28.45 | 19800 | 4.8287 | 1.0 |
| 4.8198 | 28.59 | 19900 | 4.8294 | 1.0 |
| 4.8102 | 28.74 | 20000 | 4.8291 | 1.0 |
| 4.8168 | 28.88 | 20100 | 4.8290 | 1.0 |
| 4.8117 | 29.02 | 20200 | 4.8303 | 1.0 |
| 4.8156 | 29.17 | 20300 | 4.8295 | 1.0 |
| 4.8127 | 29.31 | 20400 | 4.8298 | 1.0 |
| 4.8193 | 29.45 | 20500 | 4.8301 | 1.0 |
| 4.8174 | 29.6 | 20600 | 4.8301 | 1.0 |
| 4.8167 | 29.74 | 20700 | 4.8301 | 1.0 |
| 4.8137 | 29.89 | 20800 | 4.8310 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.0.0
- Tokenizers 0.13.2
|
Declan/Breitbart_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
---
### Jeffzo3 on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by JeffZ
This your the Stable Diffusion model fine-tuned the Jeffzo3 concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: ****
You can also train your own concepts and upload them to the library by using [the fast-DremaBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
|
Declan/Breitbart_modelv7 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-kazakh-16K-af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-kazakh-16K-af
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.0+cu117
- Datasets 1.13.3
- Tokenizers 0.10.3
|
Declan/ChicagoTribune_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-16112022-cert
co2_eq_emissions:
emissions: 0.08699410121541305
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2114268313
- CO2 Emissions (in grams): 0.0870
## Validation Metrics
- Loss: 0.003
- Accuracy: 0.999
- Precision: 0.987
- Recall: 0.986
- F1: 0.986
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-16112022-cert-2114268313
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-16112022-cert-2114268313", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-16112022-cert-2114268313", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Declan/FoxNews_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: openrail
---
This is the fine-tuned Stable Diffusion model trained on black and white films by Danil Matvienko.
Use **bwWinter** in your prompts.

.png) |
Declan/FoxNews_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git).
The original checkpoint is avaliable at [princeton-nlp/efficient_mlm_m0.15-801010](https://huggingface.co/princeton-nlp/efficient_mlm_m0.15-801010). Unfortunately this checkpoint depends on code that isn't part of the official `transformers`
library. Additionally, the checkpoints contains unused weights due to a bug.
This checkpoint fixes the unused weights issue and uses the `RobertaPreLayerNorm` model from the `transformers`
library.
|
Declan/FoxNews_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- vision
- depth-estimation
- generated_from_trainer
model-index:
- name: glpn-nyu-finetuned-diode-221116-062619
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glpn-nyu-finetuned-diode-221116-062619
This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5480
- Rmse: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:----:|
| 1.6855 | 1.0 | 72 | 1.3740 | nan |
| 1.3941 | 2.0 | 144 | 0.9261 | nan |
| 0.7567 | 3.0 | 216 | 0.6298 | nan |
| 0.6331 | 4.0 | 288 | 0.6080 | nan |
| 0.6029 | 5.0 | 360 | 0.6025 | nan |
| 0.5607 | 6.0 | 432 | 0.5777 | nan |
| 0.5333 | 7.0 | 504 | 0.5553 | nan |
| 0.5018 | 8.0 | 576 | 0.5648 | nan |
| 0.497 | 9.0 | 648 | 0.5552 | nan |
| 0.4838 | 10.0 | 720 | 0.5539 | nan |
| 0.4557 | 11.0 | 792 | 0.5468 | nan |
| 0.4689 | 12.0 | 864 | 0.5484 | nan |
| 0.4735 | 13.0 | 936 | 0.5459 | nan |
| 0.4546 | 14.0 | 1008 | 0.5468 | nan |
| 0.4608 | 15.0 | 1080 | 0.5480 | nan |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Tokenizers 0.13.2
|
Declan/FoxNews_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
---
# XDoc
## Introduction
XDoc is a unified pre-trained model that deals with different document formats in a single model. With only 36.7% parameters, XDoc achieves comparable or better performance on downstream tasks, which is cost-effective for real-world deployment.
[XDoc: Unified Pre-training for Cross-Format Document Understanding](https://arxiv.org/abs/2210.02849)
Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei, [EMNLP 2022](#)
## Citation
If you find XDoc helpful, please cite us:
```
@article{chen2022xdoc,
title={XDoc: Unified Pre-training for Cross-Format Document Understanding},
author={Chen, Jingye and Lv, Tengchao and Cui, Lei and Zhang, Cha and Wei, Furu},
journal={arXiv preprint arXiv:2210.02849},
year={2022}
}
```
|
Declan/HuffPost_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
inference: false
---
This is a model checkpoint for ["Should You Mask 15% in Masked Language Modeling"](https://arxiv.org/abs/2202.08005) [(code)](https://github.com/princeton-nlp/DinkyTrain.git).
The original checkpoint is avaliable at [princeton-nlp/efficient_mlm_m0.40-801010](https://huggingface.co/princeton-nlp/efficient_mlm_m0.40-801010). Unfortunately this checkpoint depends on code that isn't part of the official `transformers`
library. Additionally, the checkpoints contains unused weights due to a bug.
This checkpoint fixes the unused weights issue and uses the `RobertaPreLayerNorm` model from the `transformers`
library.
|
Declan/HuffPost_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- text-to-image
widget:
- text: sks_mia
---
### mia from [lost nova](https://store.steampowered.com/app/1603410) on waifu diffusion via Dreambooth
#### model by no3
This your the waifu diffusion model fine-tuned the mia-wd-1.5-beta1 concept taught to waifu diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **sks_mia**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts).
### note
If you want to convert diffusers to checkpoint ".ckpt" to use in UI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or any UI that's uses .ckpt file, Use this [script](https://gist.github.com/Christopher-Hayes/636ba25e0ae2e7020722d5386ac2571b)
If you have issues or questions feel free to visit the Community Tab and start discussion about it. |
Declan/HuffPost_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-11-16T06:55:45Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: phoBert-514
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# phoBert-514
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|
Declan/Politico_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
library_name: paddlenlp
language:
- zh
---
[](https://github.com/PaddlePaddle/PaddleNLP)
# PaddlePaddle/ernie-3.0-nano-zh
## Intro
[ERNIE 3.0 Models](https://github.com/paddlepaddle/PaddleNLP/tree/develop/model_zoo/ernie-3.0) are lightweight models obtained from Wenxin large model ERNIE 3.0 using distillation technology. The model structure is consistent with ERNIE 2.0, and has a stronger Chinese effect than ERNIE 2.0.
For a detailed explanation of related technologies, please refer to the article [_解析全球最大中文单体模型鹏城-百度·文心技术细节_](https://www.jiqizhixin.com/articles/2021-12-08-9)
## How to Use
Click on the "Use in paddlenlp" on the top right corner!
## Performance
ERNIE 3.0 open sources six models: **ERNIE 3.0 _XBase_**, **ERNIE 3.0 _Base_**, **ERNIE 3.0 _Medium_**, **ERNIE 3.0 _Mini_**, **ERNIE 3.0 _Micro_**, **ERNIE 3.0 _Nano_**:
- **ERNIE 3.0-_XBase_** (_20-layer, 1024-hidden, 16-heads_)
- **ERNIE 3.0-_Base_** (_12-layer, 768-hidden, 12-heads_)
- **ERNIE 3.0-_Medium_** (_6-layer, 768-hidden, 12-heads_)
- **ERNIE 3.0-_Mini_** (_6-layer, 384-hidden, 12-heads_)
- **ERNIE 3.0-_Micro_** (_4-layer, 384-hidden, 12-heads_)
- **ERNIE 3.0-_Nano_** (_4-layer, 312-hidden, 12-heads_)
Below is the **precision-latency graph** of the small Chinese models in PaddleNLP. The abscissa represents the latency (unit: ms) tested on CLUE IFLYTEK dataset (maximum sequence length is set to 128), and the ordinate is the average accuracy on 10 CLUE tasks (including text classification, text matching, natural language inference, Pronoun disambiguation, machine reading comprehension and other tasks), among which the metric of CMRC2018 is Exact Match (EM), and the metric of other tasks is Accuracy. The closer the model to the top left in the figure, the higher the level of accuracy and performance.The top left model in the figure has the highest level of accuracy and performance.
The number of parameters of the model are marked under the model name in the figure. For the test environment, see [Performance Test](https://github.com/paddlepaddle/PaddleNLP/tree/develop/model_zoo/ernie-3.0#%E6%80%A7%E8%83%BD%E6%B5%8B%E8%AF%95) in details.
precision-latency graph under CPU (number of threads: 1 and 8), batch_size = 32:
<table>
<tr>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175852121-2798b5c9-d122-4ac0-b4c8-da46b89b5512.png"></a></td>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175852129-bbe58835-8eec-45d5-a4a9-cc2cf9a3db6a.png"></a></td>
</tr>
</table>
precision-latency graph under CPU (number of threads: 1 and 8), batch_size = 1:
<table>
<tr>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175852106-658e18e7-705b-4f53-bad0-027281163ae3.png"></a></td>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175852112-4b89d675-7c95-4d75-84b6-db5a6ea95e2c.png"></a></td>
</tr>
</table>
precision-latency graph under GPU, batch_size = 32, 1:
<table>
<tr>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175854679-3247f42e-8716-4a36-b5c6-9ce4661b36c7.png"></a></td>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175854670-57878b34-c213-47ac-b620-aaaec082f435.png"></a></td>
</tr>
</table>
As can be seen from the figure, the comprehensive performance of the ERNIE Tiny 3.0 models has been comprehensively ahead of UER-py, Huawei-Noah and HFL in terms of accuracy and performance. And when batch_size=1 and the precision mode is FP16, the inference performance of the wide and shallow model on the GPU is more advantageous.
The precision data on the CLUE **validation set** are shown in the following table:
<table style="width:100%;" cellpadding="2" cellspacing="0" border="1" bordercolor="#000000">
<tbody>
<tr>
<td style="text-align:center;vertical-align:middle">
<span style="font-size:18px;">Arch</span>
</td>
<td style="text-align:center">
<span style="font-size:18px;">Model</span>
</td>
<td style="text-align:center">
<span style="font-size:18px;">AVG</span>
</td>
<td style="text-align:center">
<span style="font-size:18px;">AFQMC</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">TNEWS</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">IFLYTEK</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CMNLI</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">OCNLI</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CLUEWSC2020</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CSL</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CMRC2018</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CHID</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">C<sup>3</sup></span>
</td>
</tr>
<tr>
<td rowspan=3 align=center> 24L1024H </td>
<td style="text-align:center">
<span style="font-size:18px">ERNIE 1.0-Large-cw</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>79.03</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.97</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.65</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>62.91</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>85.09</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>81.73</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>93.09</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.53</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>74.22/91.88</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>88.57</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.54</b></span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">ERNIE 2.0-Large-zh</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.23</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>59.33</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.91</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.85</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.93</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">89.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.23</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.95/90.31</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">86.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.12</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">RoBERTa-wwm-ext-large</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.61</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.00</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.33</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.02</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.88</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.81</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">90.79</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.67</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.58/89.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">85.72</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.26</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 20L1024H </td>
<td style="text-align:center">
<span style="font-size:18px"><b>ERNIE 3.0-Xbase-zh</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>78.39</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.16</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>59.55</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>61.87</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.40</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>81.73</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>88.82</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>83.60</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>75.99/93.00</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>86.78</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.98</b></span>
</td>
</tr>
<tr>
<td rowspan=9 align=center> 12L768H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_base_zh.pdparams">
ERNIE 3.0-Base-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.05</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.93</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.26</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.56</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.02</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>80.10</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">86.18</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.71/90.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">84.26</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>77.88</b></span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">ERNIE 1.0-Base-zh-cw</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.47</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.07</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.86</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.91</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>83.41</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.58</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>89.91</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>83.42</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>72.88/90.78</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.68</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.98</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">ERNIE-Gram-zh</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.72</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.28</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.88</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">60.87</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.08</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">88.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.83</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.82/90.38</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">84.04</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.69</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">Langboat/Mengzi-BERT-Base</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.69</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.35</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.76</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.93</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">88.16</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.20</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.04/88.35</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.74</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.70</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">ERNIE 2.0-Base-zh</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.32</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.65</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.25</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.62</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.71</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.91</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.33</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">66.08/87.46</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.19</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">ERNIE 1.0-Base-zh</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.17</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.84</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>58.91</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>62.25</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.68</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.58</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">85.20</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.77</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.32/87.83</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.47</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.68</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">RoBERTa-wwm-ext</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.11</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.60</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.08</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.23</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.11</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.92</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">88.49</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.77</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.39/88.50</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.43</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.03</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">BERT-Base-Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.57</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.29</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.97</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.22</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.91</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">65.30/86.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.01</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">65.38</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Base</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.89</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.62</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.14</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.01</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.56</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.58</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.80</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">63.87/84.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.52</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.76</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 8L512H </td>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Medium</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.06</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.10</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.29</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.35</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.09</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.63/78.91</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.84</span>
</td>
</tr>
<tr>
<td rowspan=5 align=center> 6L768H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh.pdparams">
ERNIE 3.0-Medium-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>72.49</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>73.37</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>57.00</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">60.67</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>80.64</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.88</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>79.28</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>81.60</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>65.83/87.30</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>79.91</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>69.73</b></span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">HLF/RBT6, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.06</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.45</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.36</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.32</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.67</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.72/84.77</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.17</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.85</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">TinyBERT<sub>6</sub>, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.62</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.22</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.70</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.48</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.12</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.17</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">63.03/83.75</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.11</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">RoFormerV2 Small</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.52</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.47</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>60.72</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.37</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.00</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.97/83.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.66</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.41</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-L6-H768</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.09</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.54</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">60.48</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.49</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.00</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.04</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.33</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">53.74/75.52</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.73</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.40</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 6L384H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_mini_zh.pdparams">
ERNIE 3.0-Mini-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">66.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.85</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.24</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.48</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.19</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.08</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.05</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.30</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.53/81.97</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.71</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.60</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 4L768H </td>
<td style="text-align:center">
<span style="font-size:18px">HFL/RBT4, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.42</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.50</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.34</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.05</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.23</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.30/81.93</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.18</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.45</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 4L512H </td>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Small</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">63.25</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.21</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.552</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.80</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">66.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.83</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">46.75/69.69</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.59</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">50.92</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 4L384H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_micro_zh.pdparams">
ERNIE 3.0-Micro-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">64.21</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.15</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.05</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">53.83</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.81</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.08</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.50</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">53.77/77.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.26</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.53</span>
</td>
</tr>
<tr>
<td rowspan=2 align=center> 4L312H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_nano_zh.pdparams">
ERNIE 3.0-Nano-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>62.97</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>70.51</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>54.57</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>48.36</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>74.97</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>70.61</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.75</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>75.93</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>52.00/76.35</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>58.91</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>55.11</b></span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">TinyBERT<sub>4</sub>, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">60.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.02</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">39.71</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.94</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.59</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>70.07</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">46.04/69.34</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">52.18</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 4L256H </td>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Mini</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">53.40</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.32</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.22</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">41.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.40</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.36</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">65.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">5.96/17.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">51.19</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">39.68</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 3L1024H </td>
<td style="text-align:center">
<span style="font-size:18px">HFL/RBTL3, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">66.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.11</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.14</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.56</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.29</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.74</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.93</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.50/80.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.03</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.56</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 3L768H </td>
<td style="text-align:center">
<span style="font-size:18px">HFL/RBT3, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">65.72</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.18</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.20</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.71</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.11</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.73/78.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.26</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.93</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 2L128H </td>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Tiny</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">44.45</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.02</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">51.47</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">20.28</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.73</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">63.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.43</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">3.08/14.33</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">23.57</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">28.12</span>
</td>
</tr>
    </tbody>
</table>
<br />
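
The checkpoints linked in the table are PaddlePaddle weights. Below is a minimal sketch of loading one of them through PaddleNLP, assuming a recent PaddleNLP release registers the `ernie-3.0-medium-zh` name corresponding to the ERNIE 3.0-Medium-zh download above; the two-class head and the example sentence are illustrative placeholders only.

```python
import paddle
from paddlenlp.transformers import ErnieTokenizer, ErnieForSequenceClassification

# Load the medium-sized ERNIE 3.0 checkpoint (the 6L768H row in the table above).
tokenizer = ErnieTokenizer.from_pretrained("ernie-3.0-medium-zh")
model = ErnieForSequenceClassification.from_pretrained("ernie-3.0-medium-zh", num_classes=2)
model.eval()

# Encode a single Chinese sentence; the tokenizer returns Python lists, so wrap them in tensors.
encoded = tokenizer("这家餐厅的服务很不错")
input_ids = paddle.to_tensor([encoded["input_ids"]])
token_type_ids = paddle.to_tensor([encoded["token_type_ids"]])

# Forward pass: the classification head is randomly initialised until fine-tuned on a downstream task.
logits = model(input_ids, token_type_ids)
probs = paddle.nn.functional.softmax(logits, axis=-1)
print(probs.numpy())
```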
## Citation Info
```text
@article{sun2021ernie,
title={Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation},
author={Sun, Yu and Wang, Shuohuan and Feng, Shikun and Ding, Siyu and Pang, Chao and Shang, Junyuan and Liu, Jiaxiang and Chen, Xuyi and Zhao, Yanbin and Lu, Yuxiang and others},
journal={arXiv preprint arXiv:2107.02137},
year={2021}
}
@article{su2021ernie,
title={Ernie-tiny: A progressive distillation framework for pretrained transformer compression},
author={Su, Weiyue and Chen, Xuyi and Feng, Shikun and Liu, Jiaxiang and Liu, Weixin and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng},
journal={arXiv preprint arXiv:2106.02241},
year={2021}
}
@article{wang2021ernie,
title={Ernie 3.0 titan: Exploring larger-scale knowledge enhanced pre-training for language understanding and generation},
author={Wang, Shuohuan and Sun, Yu and Xiang, Yang and Wu, Zhihua and Ding, Siyu and Gong, Weibao and Feng, Shikun and Shang, Junyuan and Zhao, Yanbin and Pang, Chao and others},
journal={arXiv preprint arXiv:2112.12731},
year={2021}
}
``` |
DeepBasak/Slack | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/Guizmus/SouthParkStyle/resolve/main/showcase_southRick.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
---
# SouthPark Style
<p>
<img alt="Showcase" src="https://huggingface.co/Guizmus/SouthParkStyle/resolve/main/showcase_southRick.jpg"/><br/>
This model is based on the <a href="https://huggingface.co/runwayml/stable-diffusion-v1-5">RunwayML 1.5</a> model with the updated VAE.<br/>
The dataset is made mostly of South Park characters and scenery, with some regularisation around Rick Roll and Fullbody Shot photography.<br/>
99 pictures in total in the dataset, 800 repeats each, over 16 epochs at LR 1e-6 (around 79k unitary steps).<br/>
Training was done with EveryDream, using full captions for all training pictures.<br/>
<br/>
The style will be called by the use of the token <b>SouthPark Style</b>, and the tokens <b>Rick Roll</b> and <b>FullBody Shot</b> have also been refined.<br/>
<br/>
To access this model, you can download the CKPT file below.
</p>
## Downloads
[2GB CKPT](https://huggingface.co/Guizmus/SouthParkStyle/resolve/main/SouthParkStyle_v1.ckpt)
[11GB CKPT with training optimizers](https://huggingface.co/Guizmus/SouthParkStyle/resolve/main/SouthParkStyle_v1_with_optimizers.ckpt)
[dataset for the first version](https://huggingface.co/Guizmus/SouthParkStyle/resolve/main/dataset.zip)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Guizmus/SouthParkStyle"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Yoda SouthPark Style"
image = pipe(prompt).images[0]
image.save("./SouthParkStyle.png")
```
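
The tags above also list image-to-image. Here is a minimal sketch of the corresponding Diffusers pipeline; the input photo path, strength and guidance values are placeholders, and on older Diffusers releases the `image` argument is named `init_image` instead.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

model_id = "Guizmus/SouthParkStyle"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Any starting picture works; it is resized to the training resolution first.
init_image = Image.open("./photo.png").convert("RGB").resize((512, 512))

prompt = "a portrait of a man, SouthPark Style"
image = pipe(prompt=prompt, image=init_image, strength=0.6, guidance_scale=7.5).images[0]
image.save("./SouthParkStyle_img2img.png")
```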
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
DeepChem/ChemBERTa-10M-MLM | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 90 | null | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- abhishekgupta/autotrain-data-question-generation4
co2_eq_emissions:
emissions: 4.8068340904981115
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2116768409
- CO2 Emissions (in grams): 4.8068
## Validation Metrics
- Loss: 1.092
- Rouge1: 32.336
- Rouge2: 15.558
- RougeL: 30.175
- RougeLsum: 30.191
- Gen Len: 14.493
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/abhishekgupta/autotrain-question-generation4-2116768409
``` |
DeepChem/ChemBERTa-77M-MTR | [
"pytorch",
"roberta",
"transformers"
]
| null | {
"architectures": [
"RobertaForRegression"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7,169 | null | ---
language:
- en
thumbnail: "https://huggingface.co/netsvetaev/netsvetaev-black/resolve/main/000199.fb94ed7d.3205796735.png"
tags:
- diffusion
- netsvetaev
- dreambooth
- stable-diffusion
- text-to-image
license: "mit"
---
Hello!
This is the model based on my paintings on a black background and SD 1.5. This is the second one, trained with 29 images and 2900 steps.
The token is «netsvetaev black style».
Best suited for: abstract seamless patterns, images similar to my original paintings with blue triangles, and large objects like «cat face» or «girl face».
It works well with landscape orientation and embiggen.
It has an MIT license, so you can use it for free.
Best used with Invoke AI: https://github.com/invoke-ai/InvokeAI (The examples below contain metadata for it)
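
If you prefer 🧨 Diffusers, a minimal sketch is below. It assumes the checkpoint is available in the Diffusers layout under the `netsvetaev/netsvetaev-black` repository (the repo id is taken from this card's thumbnail URL); if only the original CKPT is published, convert it first with the Diffusers conversion script for original Stable Diffusion checkpoints.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the weights are loadable in Diffusers format from this repo id.
model_id = "netsvetaev/netsvetaev-black"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Use the training token described above to trigger the style.
prompt = "cat face, netsvetaev black style"
image = pipe(prompt).images[0]
image.save("./netsvetaev-black.png")
```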








________________________
Artur Netsvetaev, 2022
https://netsvetaev.com |