modelId | tags | pipeline_tag | config | downloads | first_commit | card
---|---|---|---|---|---|---
Darkrider/covidbert_medmarco | [
"pytorch",
"jax",
"bert",
"text-classification",
"arxiv:2010.05987",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 35 | 2022-07-20T08:53:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: L_Roberta3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# L_Roberta3
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2095
- Accuracy: 0.9555
- F1: 0.9555
- Precision: 0.9555
- Recall: 0.9555
- C Report:
  - 0: precision 0.97, recall 0.95, f1-score 0.96, support 876
  - 1: precision 0.94, recall 0.97, f1-score 0.95, support 696
  - accuracy: 0.96 (support 1572)
  - macro avg: precision 0.95, recall 0.96, f1-score 0.96, support 1572
  - weighted avg: precision 0.96, recall 0.96, f1-score 0.96, support 1572
- C Matrix: None
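The per-class report above follows the format of scikit-learn's `classification_report`; a minimal sketch of how such a report is produced (the labels and predictions here are illustrative placeholders, not values from this card):

```python
# Sketch: producing a per-class report in the format shown above.
# y_true / y_pred are illustrative placeholders, not this model's outputs.
from sklearn.metrics import classification_report

y_true = [0, 0, 1, 1, 0, 1]  # gold labels
y_pred = [0, 1, 1, 1, 0, 1]  # model predictions
print(classification_report(y_true, y_pred))
```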
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
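For reference, a `TrainingArguments` configuration matching the values above might look like the following (a sketch only; the card does not include the actual training script, and `output_dir` is illustrative):

```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="L_Roberta3",          # illustrative output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                        # "Native AMP" mixed precision
)
```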
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | C Report | C Matrix |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:--------:|:--------:|
| 0.2674 | 1.0 | 329 | 0.2436 | 0.9389 | 0.9389 | 0.9389 | 0.9389 | 0: P=0.94 R=0.95 F1=0.95 (n=876)<br>1: P=0.94 R=0.92 F1=0.93 (n=696)<br>accuracy 0.94, macro avg (P/R/F1) 0.94/0.94/0.94, weighted avg 0.94/0.94/0.94 (n=1572) | None |
| 0.1377 | 2.0 | 658 | 0.1506 | 0.9408 | 0.9408 | 0.9408 | 0.9408 | 0: P=0.97 R=0.92 F1=0.95 (n=876)<br>1: P=0.91 R=0.96 F1=0.94 (n=696)<br>accuracy 0.94, macro avg (P/R/F1) 0.94/0.94/0.94, weighted avg 0.94/0.94/0.94 (n=1572) | None |
| 0.0898 | 3.0 | 987 | 0.1491 | 0.9548 | 0.9548 | 0.9548 | 0.9548 | 0: P=0.96 R=0.96 F1=0.96 (n=876)<br>1: P=0.95 R=0.95 F1=0.95 (n=696)<br>accuracy 0.95, macro avg (P/R/F1) 0.95/0.95/0.95, weighted avg 0.95/0.95/0.95 (n=1572) | None |
| 0.0543 | 4.0 | 1316 | 0.1831 | 0.9561 | 0.9561 | 0.9561 | 0.9561 | 0: P=0.97 R=0.95 F1=0.96 (n=876)<br>1: P=0.94 R=0.96 F1=0.95 (n=696)<br>accuracy 0.96, macro avg (P/R/F1) 0.95/0.96/0.96, weighted avg 0.96/0.96/0.96 (n=1572) | None |
| 0.0394 | 5.0 | 1645 | 0.2095 | 0.9555 | 0.9555 | 0.9555 | 0.9555 | 0: P=0.97 R=0.95 F1=0.96 (n=876)<br>1: P=0.94 R=0.97 F1=0.95 (n=696)<br>accuracy 0.96, macro avg (P/R/F1) 0.95/0.96/0.96, weighted avg 0.96/0.96/0.96 (n=1572) | None |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Darren/darren | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-07-20T09:04:23Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -95.66 +/- 35.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
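The card leaves this as a TODO. A minimal load-and-evaluate sketch, assuming the checkpoint is published as a zip file on the Hub (the `repo_id` and `filename` below are illustrative placeholders, not confirmed by the card):

```python
# Sketch: loading and evaluating the agent; repo_id/filename are illustrative.
import gym
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="username/dqn-LunarLander-v2",   # illustrative repo id
    filename="dqn-LunarLander-v2.zip",       # illustrative checkpoint name
)
model = DQN.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```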
|
Daryaflp/roberta-retrained_ru_covid | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-07-20T09:20:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-amazon-shoe-reviews-tensorboard
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-amazon-shoe-reviews-tensorboard
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9534
- Accuracy: 0.5779
- F1: [0.63189419 0.46645049 0.50381304 0.55843496 0.73060507]
- Precision: [0.62953754 0.47008547 0.48669202 0.58801498 0.71780957]
- Recall: [0.63426854 0.46287129 0.52218256 0.53168844 0.74386503]
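The bracketed values above are per-class scores, one entry per shoe-rating class, consistent with scikit-learn metrics computed with `average=None`; for example (illustrative labels, not this model's outputs):

```python
# Sketch: per-class F1/precision/recall arrays like those above.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [0, 1, 2, 3, 4, 0]
y_pred = [0, 1, 2, 3, 4, 1]
print(f1_score(y_true, y_pred, average=None))         # one F1 per class
print(precision_score(y_true, y_pred, average=None))  # one precision per class
print(recall_score(y_true, y_pred, average=None))     # one recall per class
```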
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------------------------------------------:|:--------------------------------------------------------:|:--------------------------------------------------------:|
| 0.8776 | 1.0 | 2813 | 0.9534 | 0.5779 | [0.63189419 0.46645049 0.50381304 0.55843496 0.73060507] | [0.62953754 0.47008547 0.48669202 0.58801498 0.71780957] | [0.63426854 0.46287129 0.52218256 0.53168844 0.74386503] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Davlan/bert-base-multilingual-cased-finetuned-wolof | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
library_name: keras
---
## Model description
BERT-based model for classifying fake news written in Romanian.
## Intended uses & limitations
It predicts one of six types of fake news (in order: "fabricated", "fictional", "plausible", "propaganda", "real", "satire").
It also predicts if the article talks about health or politics.
## How to use the model
Load the model with:
```python
from huggingface_hub import from_pretrained_keras
model = from_pretrained_keras("pandrei7/fakenews-mtl")
```
Use this tokenizer: `readerbench/RoBERT-base`.
The input length should be 512. You can tokenize the input like this:
```python
from transformers import AutoTokenizer

# Load the tokenizer named above.
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base")

inputs = tokenizer(
    your_text,  # the article text to classify
    padding="max_length",
    truncation=True,
    max_length=512,
    return_tensors="tf",
)
```
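A hedged end-to-end sketch; the two-output structure below is an assumption based on the description above (one head for the six fake-news types, one for the topic), not a documented API:

```python
# Sketch: full inference; the multi-task output structure is assumed.
import tensorflow as tf

labels = ["fabricated", "fictional", "plausible", "propaganda", "real", "satire"]
topics = ["health", "politics"]

type_logits, topic_logits = model(dict(inputs))  # assumed two-head output
print(labels[int(tf.argmax(type_logits, axis=-1)[0])])
print(topics[int(tf.argmax(topic_logits, axis=-1)[0])])
```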
## Training data
The model was trained and evaluated on the [fakerom](https://www.tagtog.com/fakerom/fakerom/) dataset.
## Evaluation results
The accuracy of predicting fake news was roughly 75%. |
Davlan/mt5-small-pcm-en | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small_adafactor
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 32.8631
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_adafactor
This model is a fine-tuned version of [oMateos2020/t5-small_adafactor](https://huggingface.co/oMateos2020/t5-small_adafactor) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1167
- Rouge1: 32.8631
- Rouge2: 11.658
- Rougel: 26.6192
- Rougelsum: 26.6224
- Gen Len: 18.7663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
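Adafactor is available in `transformers`; a sketch of instantiating it with the learning rate above (shown with a fixed learning rate for illustration — the card's exact optimizer/scheduler wiring is not included):

```python
# Sketch: Adafactor with an explicit learning rate (not the card's actual code).
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor

model = AutoModelForSeq2SeqLM.from_pretrained("oMateos2020/t5-small_adafactor")
optimizer = Adafactor(
    model.parameters(),
    lr=5e-4,                # learning_rate from the list above
    scale_parameter=False,  # required when passing an explicit lr
    relative_step=False,
    warmup_init=False,
)
```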
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1315 | 0.02 | 200 | 2.1865 | 31.9486 | 10.9605 | 25.7418 | 25.7408 | 18.8466 |
| 2.1297 | 0.05 | 400 | 2.1965 | 31.9598 | 10.9463 | 25.784 | 25.7867 | 18.8525 |
| 2.1284 | 0.07 | 600 | 2.1981 | 32.231 | 11.1003 | 26.0155 | 26.0226 | 18.8466 |
| 2.1315 | 0.09 | 800 | 2.1873 | 31.9161 | 10.8642 | 25.7166 | 25.7273 | 18.8227 |
| 2.1212 | 0.12 | 1000 | 2.1892 | 32.4646 | 11.1852 | 26.2451 | 26.2439 | 18.8259 |
| 2.1028 | 0.14 | 1200 | 2.1978 | 32.2886 | 11.1346 | 26.0795 | 26.0827 | 18.7685 |
| 2.1221 | 0.16 | 1400 | 2.1936 | 32.2901 | 11.0821 | 25.9983 | 26.0024 | 18.7798 |
| 2.1168 | 0.19 | 1600 | 2.1922 | 32.1655 | 11.1451 | 25.986 | 25.9893 | 18.8232 |
| 2.1166 | 0.21 | 1800 | 2.1836 | 32.2611 | 11.174 | 26.0594 | 26.0688 | 18.7633 |
| 2.1053 | 0.24 | 2000 | 2.1929 | 32.3321 | 11.213 | 26.1859 | 26.1903 | 18.7758 |
| 2.1126 | 0.26 | 2200 | 2.1811 | 32.2078 | 11.1792 | 26.0776 | 26.0817 | 18.8197 |
| 2.1038 | 0.28 | 2400 | 2.1836 | 32.2799 | 11.2511 | 26.1191 | 26.1251 | 18.7884 |
| 2.1181 | 0.31 | 2600 | 2.1805 | 32.1197 | 11.1586 | 26.0441 | 26.0441 | 18.8045 |
| 2.1217 | 0.33 | 2800 | 2.1806 | 32.3051 | 11.2638 | 26.1319 | 26.1386 | 18.7886 |
| 2.116 | 0.35 | 3000 | 2.1741 | 32.2799 | 11.1887 | 26.1224 | 26.1363 | 18.7769 |
| 2.1118 | 0.38 | 3200 | 2.1767 | 32.387 | 11.2053 | 26.077 | 26.0845 | 18.8407 |
| 2.1164 | 0.4 | 3400 | 2.1743 | 32.5008 | 11.4021 | 26.3291 | 26.3297 | 18.7731 |
| 2.1068 | 0.42 | 3600 | 2.1673 | 32.2347 | 11.1676 | 26.0657 | 26.0662 | 18.817 |
| 2.1276 | 0.45 | 3800 | 2.1664 | 32.2434 | 11.2862 | 26.094 | 26.0994 | 18.7713 |
| 2.1313 | 0.47 | 4000 | 2.1636 | 32.694 | 11.3724 | 26.4071 | 26.4008 | 18.7709 |
| 2.1229 | 0.49 | 4200 | 2.1633 | 32.456 | 11.4057 | 26.2733 | 26.2689 | 18.7586 |
| 2.129 | 0.52 | 4400 | 2.1641 | 32.309 | 11.2133 | 26.1062 | 26.1121 | 18.7729 |
| 2.1425 | 0.54 | 4600 | 2.1577 | 32.5879 | 11.4001 | 26.3045 | 26.3078 | 18.8104 |
| 2.1536 | 0.56 | 4800 | 2.1507 | 32.5152 | 11.4035 | 26.3054 | 26.3116 | 18.7941 |
| 2.148 | 0.59 | 5000 | 2.1503 | 32.8088 | 11.5641 | 26.5346 | 26.5311 | 18.7602 |
| 2.1541 | 0.61 | 5200 | 2.1491 | 32.8185 | 11.5816 | 26.5261 | 26.527 | 18.7654 |
| 2.155 | 0.64 | 5400 | 2.1466 | 32.7229 | 11.5339 | 26.4363 | 26.442 | 18.8404 |
| 2.1579 | 0.66 | 5600 | 2.1435 | 32.884 | 11.6042 | 26.5862 | 26.5891 | 18.7713 |
| 2.1601 | 0.68 | 5800 | 2.1393 | 32.8027 | 11.5328 | 26.4521 | 26.4567 | 18.7904 |
| 2.1765 | 0.71 | 6000 | 2.1393 | 32.8059 | 11.5751 | 26.5499 | 26.5551 | 18.7768 |
| 2.2176 | 0.73 | 6200 | 2.1345 | 33.0734 | 11.8056 | 26.7546 | 26.7607 | 18.7756 |
| 2.2126 | 0.75 | 6400 | 2.1328 | 32.7478 | 11.5925 | 26.5333 | 26.5359 | 18.7819 |
| 2.1916 | 0.78 | 6600 | 2.1298 | 32.658 | 11.491 | 26.379 | 26.3869 | 18.8101 |
| 2.2162 | 0.8 | 6800 | 2.1297 | 32.7843 | 11.5629 | 26.4736 | 26.4728 | 18.8187 |
| 2.2358 | 0.82 | 7000 | 2.1287 | 32.9181 | 11.6378 | 26.5966 | 26.5987 | 18.8039 |
| 2.2371 | 0.85 | 7200 | 2.1265 | 32.8413 | 11.674 | 26.5905 | 26.5831 | 18.7962 |
| 2.256 | 0.87 | 7400 | 2.1245 | 32.7412 | 11.5627 | 26.4976 | 26.503 | 18.7728 |
| 2.2566 | 0.89 | 7600 | 2.1220 | 32.8165 | 11.6069 | 26.5301 | 26.5295 | 18.7871 |
| 2.2954 | 0.92 | 7800 | 2.1197 | 32.7399 | 11.5417 | 26.4914 | 26.4938 | 18.7752 |
| 2.2766 | 0.94 | 8000 | 2.1187 | 32.853 | 11.6411 | 26.5909 | 26.5938 | 18.7852 |
| 2.3273 | 0.96 | 8200 | 2.1169 | 32.9376 | 11.709 | 26.6665 | 26.6672 | 18.7734 |
| 2.3182 | 0.99 | 8400 | 2.1167 | 32.8631 | 11.658 | 26.6192 | 26.6224 | 18.7663 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Davlan/mt5_base_yor_eng_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -166.80 +/- 21.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
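This card carries the same TODO placeholder as the LunarLander card above; the same load-and-evaluate pattern applies (again, `repo_id` and `filename` are illustrative placeholders):

```python
# Sketch: loading the MountainCar-v0 agent; repo_id/filename are illustrative.
import gym
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="username/dqn-MountainCar-v0",   # illustrative
    filename="dqn-MountainCar-v0.zip",       # illustrative
)
model = DQN.load(checkpoint)
env = gym.make("MountainCar-v0")
print(evaluate_policy(model, env, n_eval_episodes=10))
```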
|
Davlan/naija-twitter-sentiment-afriberta-large | [
"pytorch",
"tf",
"xlm-roberta",
"text-classification",
"arxiv:2201.08277",
"transformers",
"has_space"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 61 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-base-uncased-finetuned-sst-2-english-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1289
- Accuracy: 0.977
- F1: 0.9878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Davlan/xlm-roberta-base-finetuned-chichewa | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2_loading_script
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2_loading_script dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 15 | 5.4661 |
| No log | 2.0 | 30 | 5.0915 |
| No log | 3.0 | 45 | 4.9348 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Davlan/xlm-roberta-large-masakhaner | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,449 | null | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: longformer-base-4096-finetuned-squad2-length-1024-128window
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-base-4096-finetuned-squad2-length-1024-128window
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeadBeast/marathi-roberta-base | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5477951635989807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8081
- Matthews Correlation: 0.5478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5222 | 1.0 | 535 | 0.5270 | 0.4182 |
| 0.3451 | 2.0 | 1070 | 0.5017 | 0.4810 |
| 0.2309 | 3.0 | 1605 | 0.5983 | 0.5314 |
| 0.179 | 4.0 | 2140 | 0.7488 | 0.5291 |
| 0.1328 | 5.0 | 2675 | 0.8081 | 0.5478 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Declan/Breitbart_model_v7 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Nso_update3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Nso_update3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-nso](https://huggingface.co/Helsinki-NLP/opus-mt-en-nso) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4218
- Bleu: 24.5765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 3.6568 | 1.0 | 867 | 3.0185 | 18.4004 |
| 2.7574 | 2.0 | 1734 | 2.7774 | 20.3167 |
| 2.4522 | 3.0 | 2601 | 2.6436 | 22.1868 |
| 2.3298 | 4.0 | 3468 | 2.5732 | 22.6221 |
| 2.1563 | 5.0 | 4335 | 2.5225 | 22.6937 |
| 2.0177 | 6.0 | 5202 | 2.4917 | 23.2204 |
| 1.9407 | 7.0 | 6069 | 2.4656 | 23.3616 |
| 1.8758 | 8.0 | 6936 | 2.4509 | 23.5496 |
| 1.8167 | 9.0 | 7803 | 2.4426 | 23.6263 |
| 1.7566 | 10.0 | 8670 | 2.4345 | 24.0730 |
| 1.7254 | 11.0 | 9537 | 2.4281 | 24.1627 |
| 1.7088 | 12.0 | 10404 | 2.4252 | 24.1109 |
| 1.6731 | 13.0 | 11271 | 2.4226 | 24.1018 |
| 1.6574 | 14.0 | 12138 | 2.4211 | 23.9186 |
| 1.6481 | 15.0 | 13005 | 2.4218 | 24.1323 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Declan/Breitbart_modelv7 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
language: eo
thumbnail: https://huggingface.co/blog/assets/01_how-to-train/EsperBERTo-thumbnail-v2.png
widget:
- text: "Jen la komenco de bela <mask>."
- text: "Uno du <mask>"
- text: "Jen finiĝas bela <mask>."
---
# Hello old Windows line breaks
|
Declan/CNN_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- metrics:
- type: mean_reward
value: 117.28 +/- 2.91
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: mujoco_swimmer
type: mujoco_swimmer
---
An **APPO** model trained on the **mujoco_swimmer** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
Declan/ChicagoTribune_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: notmaineyy/bert-base-multilingual-cased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# notmaineyy/bert-base-multilingual-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0248
- Validation Loss: 0.0568
- Train Precision: 0.9424
- Train Recall: 0.9471
- Train F1: 0.9448
- Train Accuracy: 0.9863
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10530, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
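The optimizer dictionary above describes `AdamWeightDecay` with a `PolynomialDecay` schedule (power 1.0, i.e. linear decay); in `transformers` for TensorFlow this is typically built with `create_optimizer`. A sketch, not the card's actual code:

```python
# Sketch: rebuilding the optimizer/schedule described above.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,           # initial_learning_rate above
    num_train_steps=10530,   # decay_steps above
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```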
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1335 | 0.0705 | 0.9152 | 0.9204 | 0.9178 | 0.9806 | 0 |
| 0.0497 | 0.0562 | 0.9335 | 0.9472 | 0.9403 | 0.9851 | 1 |
| 0.0248 | 0.0568 | 0.9424 | 0.9471 | 0.9448 | 0.9863 | 2 |
### Framework versions
- Transformers 4.21.0
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Declan/FoxNews_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- monai
- medical
license: apache-2.0
---
# Model Overview
A pre-trained model for volumetric (3D) segmentation of brain tumor subregions from multimodal MRIs based on BraTS 2018 data. The whole pipeline is modified from [clara_pt_brain_mri_segmentation](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/med/models/clara_pt_brain_mri_segmentation).
## Workflow
The model is trained to segment 3 nested subregions of primary brain tumors (gliomas): the "enhancing tumor" (ET), the "tumor core" (TC), the "whole tumor" (WT) based on 4 aligned input MRI scans (T1c, T1, T2, FLAIR).
- The ET is described by areas that show hyper intensity in T1c when compared to T1, but also when compared to "healthy" white matter in T1c.
- The TC describes the bulk of the tumor, which is what is typically resected. The TC entails the ET, as well as the necrotic (fluid-filled) and the non-enhancing (solid) parts of the tumor.
- The WT describes the complete extent of the disease, as it entails the TC and the peritumoral edema (ED), which is typically depicted by hyper-intense signal in FLAIR.
## Data
The training data is from the [Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018](https://www.med.upenn.edu/sbia/brats2018/data.html).
- Target: 3 tumor subregions
- Task: Segmentation
- Modality: MRI
- Size: 285 3D volumes (4 channels each)
The provided labelled data was partitioned, based on our own split, into training (200 studies), validation (42 studies) and testing (43 studies) datasets.
Please run `scripts/prepare_datalist.py` to produce the data list. For example:
```bash
python scripts/prepare_datalist.py --path your-brats18-dataset-path
```
## Training configuration
This model uses an approach similar to the one described in "3D MRI brain tumor segmentation
using autoencoder regularization", a winning method in BraTS 2018 [1]. The training was performed with the following:
- GPU: At least 16GB of GPU memory.
- Actual Model Input: 224 x 224 x 144
- AMP: True
- Optimizer: Adam
- Learning Rate: 1e-4
- Loss: DiceLoss
## Input
Input: 4 channel MRI (4 aligned MRIs T1c, T1, T2, FLAIR at 1x1x1 mm)
1. Normalizing to unit std with zero mean
2. Randomly cropping to (224, 224, 144)
3. Randomly spatial flipping
4. Randomly scaling and shifting intensity of the volume
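A sketch of these preprocessing steps with MONAI transforms (the transform choices and arguments are my reading of the list above, not the bundle's actual configuration):

```python
# Sketch: MONAI transforms approximating steps 1-4 above (not the bundle config).
from monai.transforms import (
    Compose,
    NormalizeIntensity,
    RandFlip,
    RandScaleIntensity,
    RandShiftIntensity,
    RandSpatialCrop,
)

train_transforms = Compose([
    NormalizeIntensity(nonzero=True, channel_wise=True),         # zero mean, unit std
    RandSpatialCrop(roi_size=(224, 224, 144), random_size=False),
    RandFlip(prob=0.5, spatial_axis=0),                          # random spatial flip
    RandScaleIntensity(factors=0.1, prob=0.5),                   # random intensity scaling
    RandShiftIntensity(offsets=0.1, prob=0.5),                   # random intensity shifting
])
```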
## Output
Output: 3 channels
- Label 0: TC tumor subregion
- Label 1: WT tumor subregion
- Label 2: ET tumor subregion
## Model Performance
The achieved Dice scores on the validation data are:
- Tumor core (TC): 0.8559
- Whole tumor (WT): 0.9026
- Enhancing tumor (ET): 0.7905
- Average: 0.8518
# Disclaimer
This is an example, not to be used for diagnostic purposes.
# References
[1] Myronenko, Andriy. "3D MRI brain tumor segmentation using autoencoder regularization." International MICCAI Brainlesion Workshop. Springer, Cham, 2018. https://arxiv.org/abs/1810.11654. |
Declan/HuffPost_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Declan/HuffPost_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: mit
---
## NKBert
A BERT model fine-tuned from a <a href="https://github.com/SKTBrain/KoBERT">KoBERT</a> base on a dataset of North Korean text.
|
Declan/NPR_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: BART_reddit_advice_story
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART_reddit_advice_story
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2552
- Rouge1: 21.9349
- Rouge2: 6.3417
- Rougel: 17.7133
- Rougelsum: 18.7199
- Gen Len: 21.092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.3743 | 1.0 | 1875 | 3.2787 | 21.1275 | 5.9618 | 17.3772 | 18.317 | 20.447 |
| 3.025 | 2.0 | 3750 | 3.2466 | 21.8443 | 6.2351 | 17.6358 | 18.6259 | 21.506 |
| 2.7628 | 3.0 | 5625 | 3.2552 | 21.9349 | 6.3417 | 17.7133 | 18.7199 | 21.092 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Declan/NPR_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: lakshaywadhwa1993/mt5-small-finetuned-hindi-mt5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lakshaywadhwa1993/mt5-small-finetuned-hindi-mt5
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4909
- Validation Loss: 1.3507
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 41000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.5310 | 1.8341 | 0 |
| 2.0735 | 1.6193 | 1 |
| 1.7617 | 1.4672 | 2 |
| 1.6375 | 1.4271 | 3 |
| 1.5712 | 1.3720 | 4 |
| 1.5294 | 1.3656 | 5 |
| 1.5051 | 1.3531 | 6 |
| 1.4909 | 1.3507 | 7 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Declan/Politico_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aalogan/bert-ner-nsm2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aalogan/bert-ner-nsm2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0649
- Validation Loss: 0.1762
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2982, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4885 | 0.2361 | 0 |
| 0.1547 | 0.1920 | 1 |
| 0.0966 | 0.1648 | 2 |
| 0.0649 | 0.1762 | 3 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Declan/Reuters_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ViT-chess-V4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT-chess-V4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2867
- Accuracy: 0.1942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 5.4877 | 1.0 | 45000 | 5.4554 | 0.1044 |
| 4.9794 | 2.0 | 90000 | 5.0001 | 0.1371 |
| 4.5956 | 3.0 | 135000 | 4.6720 | 0.1596 |
| 4.3402 | 4.0 | 180000 | 4.4082 | 0.1834 |
| 4.097 | 5.0 | 225000 | 4.2867 | 0.1942 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
DeepChem/ChemBERTa-5M-MLM | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: Nso-En_update3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Nso-En_update3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-nso-en](https://huggingface.co/Helsinki-NLP/opus-mt-nso-en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6854
- Bleu: 21.2223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 3.5054 | 1.0 | 1734 | 3.1912 | 16.5243 |
| 3.0368 | 2.0 | 3468 | 2.9680 | 18.0237 |
| 2.6866 | 3.0 | 5202 | 2.8594 | 19.5515 |
| 2.51 | 4.0 | 6936 | 2.7916 | 20.1468 |
| 2.3754 | 5.0 | 8670 | 2.7438 | 20.0535 |
| 2.2534 | 6.0 | 10404 | 2.7186 | 20.7329 |
| 2.144 | 7.0 | 12138 | 2.7034 | 20.9116 |
| 2.0709 | 8.0 | 13872 | 2.6945 | 21.0866 |
| 2.0191 | 9.0 | 15606 | 2.6880 | 21.1577 |
| 1.9973 | 10.0 | 17340 | 2.6854 | 21.1386 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeepChem/ChemBERTa-5M-MTR | [
"pytorch",
"roberta",
"transformers"
] | null | {
"architectures": [
"RobertaForRegression"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TestZee/t5-small-finetuned-kaggle-data-t5-v3.0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TestZee/t5-small-finetuned-kaggle-data-t5-v3.0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6248
- Validation Loss: 1.6558
- Train Rouge1: 26.3006
- Train Rouge2: 15.0931
- Train Rougel: 22.7561
- Train Rougelsum: 24.3816
- Train Gen Len: 19.0
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.1318 | 1.8436 | 24.0637 | 12.9655 | 20.6308 | 22.1857 | 19.0 | 0 |
| 2.0035 | 1.7955 | 24.9502 | 13.7602 | 21.4422 | 23.0424 | 19.0 | 1 |
| 1.9561 | 1.7670 | 25.6590 | 14.5211 | 22.0967 | 23.5134 | 19.0 | 2 |
| 1.9227 | 1.7496 | 25.8863 | 14.7209 | 22.3661 | 23.8629 | 19.0 | 3 |
| 1.8951 | 1.7334 | 26.0026 | 14.7861 | 22.4126 | 23.8936 | 19.0 | 4 |
| 1.8716 | 1.7234 | 26.3796 | 14.9421 | 22.7097 | 24.2118 | 19.0 | 5 |
| 1.8558 | 1.7138 | 26.2830 | 14.9347 | 22.8008 | 24.1908 | 19.0 | 6 |
| 1.8362 | 1.7072 | 26.0811 | 14.6698 | 22.5673 | 23.9941 | 19.0 | 7 |
| 1.8222 | 1.7020 | 26.0600 | 14.8445 | 22.6614 | 23.9462 | 19.0 | 8 |
| 1.8086 | 1.6929 | 26.3903 | 15.0590 | 22.9725 | 24.3007 | 19.0 | 9 |
| 1.7958 | 1.6870 | 26.2563 | 14.8773 | 22.7601 | 24.1487 | 19.0 | 10 |
| 1.7802 | 1.6847 | 26.2638 | 15.0330 | 22.8279 | 24.2225 | 19.0 | 11 |
| 1.7709 | 1.6823 | 26.0351 | 14.9826 | 22.6653 | 24.0415 | 19.0 | 12 |
| 1.7610 | 1.6796 | 26.1864 | 15.0833 | 22.7959 | 24.1713 | 19.0 | 13 |
| 1.7486 | 1.6754 | 26.2693 | 15.2384 | 22.8580 | 24.2483 | 19.0 | 14 |
| 1.7354 | 1.6744 | 26.1257 | 14.9953 | 22.7029 | 24.0956 | 19.0 | 15 |
| 1.7262 | 1.6740 | 26.1954 | 15.0393 | 22.8311 | 24.1282 | 19.0 | 16 |
| 1.7206 | 1.6703 | 26.1409 | 14.9949 | 22.7586 | 24.1355 | 19.0 | 17 |
| 1.7083 | 1.6663 | 26.1880 | 15.1119 | 22.7500 | 24.1816 | 19.0 | 18 |
| 1.7002 | 1.6662 | 25.9666 | 14.9556 | 22.5439 | 23.9713 | 19.0 | 19 |
| 1.6926 | 1.6654 | 26.1649 | 15.1911 | 22.8287 | 24.2002 | 19.0 | 20 |
| 1.6839 | 1.6589 | 26.2105 | 15.0021 | 22.7778 | 24.2852 | 19.0 | 21 |
| 1.6768 | 1.6596 | 26.1263 | 14.8676 | 22.6634 | 24.1171 | 19.0 | 22 |
| 1.6670 | 1.6612 | 25.9718 | 14.8101 | 22.5048 | 23.9592 | 19.0 | 23 |
| 1.6604 | 1.6590 | 26.2419 | 15.0633 | 22.7685 | 24.3165 | 19.0 | 24 |
| 1.6498 | 1.6564 | 26.2757 | 15.0082 | 22.8157 | 24.3126 | 19.0 | 25 |
| 1.6455 | 1.6570 | 26.2307 | 14.9338 | 22.6259 | 24.2636 | 19.0 | 26 |
| 1.6368 | 1.6573 | 26.4114 | 15.3485 | 22.9117 | 24.4928 | 19.0 | 27 |
| 1.6325 | 1.6547 | 26.5272 | 15.4393 | 23.0764 | 24.6935 | 19.0 | 28 |
| 1.6248 | 1.6558 | 26.3006 | 15.0931 | 22.7561 | 24.3816 | 19.0 | 29 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeepChem/ChemBERTa-77M-MLM | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,416 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ES_corlec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ES_corlec
This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
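A minimal generation sketch, assuming the fine-tuned checkpoint loads like its GPT-2 Spanish base (the model id below is hypothetical):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ES_corlec")  # hypothetical repo id
print(generator("Bueno, pues entonces", max_length=30, num_return_sequences=1))
```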
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeepChem/ChemBERTa-77M-MTR | [
"pytorch",
"roberta",
"transformers"
] | null | {
"architectures": [
"RobertaForRegression"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7,169 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: go2k/testpyramidsrnd
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DeepChem/SmilesTokenizer_PubChem_1M | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 227 | null | ---
language:
- ru
tags:
- PyTorch
- Transformers
license: apache-2.0
widget:
- text: "sbert punc case расставляет точки запятые и знаки вопроса вам нравится"
---
# SbertPuncCase
SbertPuncCase is a punctuation and case restoration model for Russian. The model can insert periods, commas and question marks,
and determine the case of each word: lowercase, first letter uppercase, or all uppercase.
The model was designed to restore text after speech recognition, so it operates on lowercase strings.
It is based on [sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru).
Text transcripts of interviews were used as the training data.
# How it works
1. The text is lowercased and split into words.
2. The words are split into tokens.
3. The model (by analogy with an NER task) predicts a class for each token. There are 12 classes: (3+1) punctuation marks × 3 case variants.
4. A decoding function reconstructs the text according to the predicted classes.
# How to use
The model code is in the file `sbert-punc-case-ru/sbertpunccase.py`.
For a quick install you can run:
```
pip install git+https://huggingface.co/kontur-ai/sbert_punc_case_ru
```
Using the model:
```python
from sbert_punc_case_ru import SbertPuncCase
model = SbertPuncCase()
model.punctuate("sbert punc case расставляет точки запятые и знаки вопроса вам нравится")
```
# Authors
[Almira Murtazina](https://github.com/almiradreamer)
[Alexander Abugaliev](https://github.com/Squire-tomsk) |
DeepESP/gpt2-spanish | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"es",
"dataset:ebooks",
"transformers",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"license:mit",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,463 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab-testing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab-testing
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
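A minimal inference sketch, assuming this checkpoint is a CTC model usable with the standard ASR pipeline (the model id below is hypothetical):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="wav2vec2-base-timit-demo-google-colab-testing")  # hypothetical repo id
print(asr("sample.wav")["text"])  # expects 16 kHz mono audio
```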
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
DeepPavlov/bert-base-bg-cs-pl-ru-cased | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"bg",
"cs",
"pl",
"ru",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,614 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt_new_0060
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt_new_0060
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8691
- Validation Loss: 2.7610
- Epoch: 59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.6632 | 4.5153 | 0 |
| 4.4292 | 4.0923 | 1 |
| 4.1169 | 3.8723 | 2 |
| 3.9326 | 3.7260 | 3 |
| 3.8026 | 3.6281 | 4 |
| 3.7045 | 3.5355 | 5 |
| 3.6254 | 3.4645 | 6 |
| 3.5604 | 3.4093 | 7 |
| 3.5048 | 3.3587 | 8 |
| 3.4569 | 3.3136 | 9 |
| 3.4155 | 3.2778 | 10 |
| 3.3791 | 3.2443 | 11 |
| 3.3470 | 3.2157 | 12 |
| 3.3183 | 3.1854 | 13 |
| 3.2922 | 3.1642 | 14 |
| 3.2685 | 3.1400 | 15 |
| 3.2467 | 3.1193 | 16 |
| 3.2267 | 3.1009 | 17 |
| 3.2078 | 3.0838 | 18 |
| 3.1904 | 3.0689 | 19 |
| 3.1739 | 3.0520 | 20 |
| 3.1584 | 3.0379 | 21 |
| 3.1438 | 3.0255 | 22 |
| 3.1300 | 3.0116 | 23 |
| 3.1168 | 2.9965 | 24 |
| 3.1044 | 2.9866 | 25 |
| 3.0925 | 2.9752 | 26 |
| 3.0812 | 2.9631 | 27 |
| 3.0704 | 2.9539 | 28 |
| 3.0601 | 2.9458 | 29 |
| 3.0502 | 2.9340 | 30 |
| 3.0408 | 2.9251 | 31 |
| 3.0317 | 2.9179 | 32 |
| 3.0230 | 2.9082 | 33 |
| 3.0147 | 2.9002 | 34 |
| 3.0065 | 2.8948 | 35 |
| 2.9987 | 2.8855 | 36 |
| 2.9911 | 2.8779 | 37 |
| 2.9838 | 2.8706 | 38 |
| 2.9767 | 2.8643 | 39 |
| 2.9698 | 2.8570 | 40 |
| 2.9632 | 2.8501 | 41 |
| 2.9567 | 2.8441 | 42 |
| 2.9505 | 2.8385 | 43 |
| 2.9445 | 2.8327 | 44 |
| 2.9385 | 2.8260 | 45 |
| 2.9329 | 2.8213 | 46 |
| 2.9272 | 2.8160 | 47 |
| 2.9217 | 2.8107 | 48 |
| 2.9162 | 2.8052 | 49 |
| 2.9110 | 2.8020 | 50 |
| 2.9060 | 2.7938 | 51 |
| 2.9010 | 2.7896 | 52 |
| 2.8962 | 2.7857 | 53 |
| 2.8913 | 2.7827 | 54 |
| 2.8866 | 2.7768 | 55 |
| 2.8821 | 2.7724 | 56 |
| 2.8776 | 2.7679 | 57 |
| 2.8733 | 2.7642 | 58 |
| 2.8691 | 2.7610 | 59 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeepPavlov/bert-base-multilingual-cased-sentence | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"multilingual",
"arxiv:1704.05426",
"arxiv:1809.05053",
"arxiv:1908.10084",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 140 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/flowers-102-categories
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-ema-flowers-64
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/flowers-102-categories` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
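Until the snippet above is filled in, here is a minimal sketch assuming the generic `DDPMPipeline` API and the repo id from the TensorBoard link below:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("anton-l/ddpm-ema-flowers-64")
image = pipeline().images[0]  # sample one flower image
image.save("flower.png")
```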
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/anton-l/ddpm-ema-flowers-64/tensorboard?#scalars)
|
DeepPavlov/marianmt-tatoeba-ruen | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/lpachter/1658405511004/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1257000705761525760/R7Pphmei_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lior Pachter</div>
<div style="text-align: center; font-size: 14px;">@lpachter</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lior Pachter.
| Data | Lior Pachter |
| --- | --- |
| Tweets downloaded | 3232 |
| Retweets | 1213 |
| Short tweets | 245 |
| Tweets kept | 1774 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rt1wriv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lpachter's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23sx643q) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23sx643q/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lpachter')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DeltaHub/adapter_t5-3b_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: afl-3.0
---
### Time: 2020/07/10
### ICAN-AI
|
DeltaHub/adapter_t5-3b_qnli | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- abhishek/autotrain-data-summtest1
co2_eq_emissions: 28.375764585180136
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 11405516
- CO2 Emissions (in grams): 28.375764585180136
## Validation Metrics
- Loss: 1.5257819890975952
- Rouge1: 41.9534
- Rouge2: 18.5044
- RougeL: 34.7507
- RougeLsum: 38.6091
- Gen Len: 15.1037
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/abhishek/autotrain-summtest1-11405516
``` |
DeskDown/MarianMixFT_en-id | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 2447.40 +/- 23.14
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **PPO** Agent playing **AntBulletEnv-v0**
This is a trained model of a **PPO** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
### Model
The model was constructed as follows (`env` is the vectorized AntBulletEnv-v0 training environment created beforehand):
```python
import torch.nn as nn
from stable_baselines3 import PPO

model = PPO(policy="MlpPolicy",
            env=env,  # vectorized AntBulletEnv-v0 environment
            batch_size=256,
            clip_range=0.4,
            ent_coef=0.0,
            gae_lambda=0.92,
            gamma=0.99,
            learning_rate=3.0e-05,
            max_grad_norm=0.5,
            n_epochs=30,
            n_steps=512,
            policy_kwargs=dict(log_std_init=-2,
                               ortho_init=False,
                               activation_fn=nn.ReLU,
                               net_arch=[dict(pi=[256, 256], vf=[256, 256])]),
            use_sde=True,
            sde_sample_freq=4,
            vf_coef=0.5,
            tensorboard_log="./tensorboard",
            verbose=1)
model.learn(1_000_000)
``` |
DeskDown/MarianMixFT_en-ja | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: BART_reddit_gaming
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART_reddit_gaming
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7373
- Rouge1: 18.1202
- Rouge2: 4.6045
- Rougel: 15.1273
- Rougelsum: 15.7601
- Gen Len: 18.208
## Model description
More information needed
## Intended uses & limitations
More information needed
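A minimal usage sketch with the standard summarization pipeline (the model id below is hypothetical):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="BART_reddit_gaming")  # hypothetical repo id
post = "Finally beat the last boss after forty hours, and the ending recontextualizes the whole game..."
print(summarizer(post, max_length=20, min_length=5)[0]["summary_text"])
```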
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.864 | 1.0 | 1875 | 3.7752 | 17.3754 | 4.51 | 14.6763 | 15.22 | 16.944 |
| 3.4755 | 2.0 | 3750 | 3.7265 | 17.8066 | 4.4188 | 14.9432 | 15.5396 | 18.104 |
| 3.2629 | 3.0 | 5625 | 3.7373 | 18.1202 | 4.6045 | 15.1273 | 15.7601 | 18.208 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeskDown/MarianMixFT_en-ms | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1
results:
- metrics:
- type: mean_reward
value: 173.30 +/- 30.20
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Dimedrolza/DialoGPT-small-cyberpunk | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | https://www.xing.com/events/new |
Doogie/Waynehills-KE-T5-doogie | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# Scout DialoGPT Model |
Doohae/roberta | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 194.47 +/- 82.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
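While the snippet above is a TODO, here is a minimal loading sketch (the repo id and filename are placeholders, not this model's actual artifact names):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2",   # hypothetical repo id
                           filename="ppo-LunarLander-v2.zip")   # hypothetical filename
model = PPO.load(checkpoint)
```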
|
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | 2022-07-22T00:14:47Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -108.20 +/- 27.36
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
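While the snippet above is a TODO, here is a minimal loading-and-prediction sketch (the repo id and filename are placeholders):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(repo_id="user/dqn-MountainCar-v0",   # hypothetical repo id
                           filename="dqn-MountainCar-v0.zip")   # hypothetical filename
model = DQN.load(checkpoint)

obs = gym.make("MountainCar-v0").reset()
action, _ = model.predict(obs, deterministic=True)
```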
|
albert-large-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 687 | 2022-07-22T00:21:50Z | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53](https://huggingface.co/gary109/ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4279
- Wer: 1.0087
## Model description
More information needed
## Intended uses & limitations
More information needed
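A minimal inference sketch, assuming the checkpoint ships a `Wav2Vec2Processor` alongside its CTC head:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "gary109/ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

audio, sr = sf.read("singing_sample.wav")  # expects 16 kHz mono audio
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```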
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.209 | 1.0 | 72 | 2.5599 | 0.9889 |
| 1.3395 | 2.0 | 144 | 2.7188 | 0.9877 |
| 1.2695 | 3.0 | 216 | 2.9989 | 0.9709 |
| 1.2818 | 4.0 | 288 | 3.2352 | 0.9757 |
| 1.2389 | 5.0 | 360 | 3.6867 | 0.9783 |
| 1.2368 | 6.0 | 432 | 3.3189 | 0.9811 |
| 1.2307 | 7.0 | 504 | 3.0786 | 0.9657 |
| 1.2607 | 8.0 | 576 | 2.9720 | 0.9677 |
| 1.2584 | 9.0 | 648 | 2.5613 | 0.9702 |
| 1.2266 | 10.0 | 720 | 2.6937 | 0.9610 |
| 1.262 | 11.0 | 792 | 3.9060 | 0.9745 |
| 1.2361 | 12.0 | 864 | 3.6138 | 0.9718 |
| 1.2348 | 13.0 | 936 | 3.4838 | 0.9745 |
| 1.2715 | 14.0 | 1008 | 3.3128 | 0.9751 |
| 1.2505 | 15.0 | 1080 | 3.2015 | 0.9710 |
| 1.211 | 16.0 | 1152 | 3.4709 | 0.9709 |
| 1.2067 | 17.0 | 1224 | 3.0566 | 0.9673 |
| 1.2536 | 18.0 | 1296 | 2.5479 | 0.9789 |
| 1.2297 | 19.0 | 1368 | 2.8307 | 0.9710 |
| 1.1949 | 20.0 | 1440 | 3.4112 | 0.9777 |
| 1.2181 | 21.0 | 1512 | 2.6784 | 0.9682 |
| 1.195 | 22.0 | 1584 | 3.0395 | 0.9639 |
| 1.2047 | 23.0 | 1656 | 3.1935 | 0.9726 |
| 1.2306 | 24.0 | 1728 | 3.2649 | 0.9723 |
| 1.199 | 25.0 | 1800 | 3.1378 | 0.9645 |
| 1.1945 | 26.0 | 1872 | 2.8143 | 0.9596 |
| 1.19 | 27.0 | 1944 | 3.5174 | 0.9787 |
| 1.1976 | 28.0 | 2016 | 2.9666 | 0.9594 |
| 1.2229 | 29.0 | 2088 | 2.8672 | 0.9589 |
| 1.1548 | 30.0 | 2160 | 2.6568 | 0.9627 |
| 1.169 | 31.0 | 2232 | 2.8799 | 0.9654 |
| 1.1857 | 32.0 | 2304 | 2.8691 | 0.9625 |
| 1.1862 | 33.0 | 2376 | 2.8251 | 0.9555 |
| 1.1721 | 34.0 | 2448 | 3.5968 | 0.9726 |
| 1.1293 | 35.0 | 2520 | 3.4130 | 0.9651 |
| 1.1513 | 36.0 | 2592 | 2.8804 | 0.9630 |
| 1.1537 | 37.0 | 2664 | 2.5824 | 0.9575 |
| 1.1818 | 38.0 | 2736 | 2.8443 | 0.9613 |
| 1.1835 | 39.0 | 2808 | 2.6431 | 0.9619 |
| 1.1457 | 40.0 | 2880 | 2.9254 | 0.9639 |
| 1.1591 | 41.0 | 2952 | 2.8194 | 0.9561 |
| 1.1284 | 42.0 | 3024 | 2.6432 | 0.9806 |
| 1.1602 | 43.0 | 3096 | 2.4279 | 1.0087 |
| 1.1556 | 44.0 | 3168 | 2.5040 | 1.0030 |
| 1.1256 | 45.0 | 3240 | 3.1641 | 0.9608 |
| 1.1256 | 46.0 | 3312 | 2.9522 | 0.9677 |
| 1.1211 | 47.0 | 3384 | 2.6318 | 0.9580 |
| 1.1142 | 48.0 | 3456 | 2.7298 | 0.9533 |
| 1.1237 | 49.0 | 3528 | 2.5442 | 0.9673 |
| 1.0976 | 50.0 | 3600 | 2.7767 | 0.9610 |
| 1.1154 | 51.0 | 3672 | 2.6849 | 0.9646 |
| 1.1012 | 52.0 | 3744 | 2.5384 | 0.9621 |
| 1.1077 | 53.0 | 3816 | 2.4505 | 1.0067 |
| 1.0936 | 54.0 | 3888 | 2.5847 | 0.9687 |
| 1.0772 | 55.0 | 3960 | 2.4575 | 0.9761 |
| 1.092 | 56.0 | 4032 | 2.4889 | 0.9802 |
| 1.0868 | 57.0 | 4104 | 2.5885 | 0.9664 |
| 1.0979 | 58.0 | 4176 | 2.6370 | 0.9607 |
| 1.094 | 59.0 | 4248 | 2.6195 | 0.9605 |
| 1.0745 | 60.0 | 4320 | 2.5346 | 0.9834 |
| 1.1057 | 61.0 | 4392 | 2.6879 | 0.9603 |
| 1.0722 | 62.0 | 4464 | 2.5426 | 0.9735 |
| 1.0731 | 63.0 | 4536 | 2.8259 | 0.9535 |
| 1.0862 | 64.0 | 4608 | 2.7632 | 0.9559 |
| 1.0396 | 65.0 | 4680 | 2.5401 | 0.9807 |
| 1.0581 | 66.0 | 4752 | 2.6977 | 0.9687 |
| 1.0647 | 67.0 | 4824 | 2.6968 | 0.9694 |
| 1.0549 | 68.0 | 4896 | 2.6439 | 0.9807 |
| 1.0607 | 69.0 | 4968 | 2.6822 | 0.9771 |
| 1.05 | 70.0 | 5040 | 2.7011 | 0.9607 |
| 1.042 | 71.0 | 5112 | 2.5766 | 0.9713 |
| 1.042 | 72.0 | 5184 | 2.5720 | 0.9747 |
| 1.0594 | 73.0 | 5256 | 2.7176 | 0.9704 |
| 1.0425 | 74.0 | 5328 | 2.7458 | 0.9614 |
| 1.0199 | 75.0 | 5400 | 2.5906 | 0.9987 |
| 1.0198 | 76.0 | 5472 | 2.5534 | 1.0087 |
| 1.0193 | 77.0 | 5544 | 2.5421 | 0.9933 |
| 1.0379 | 78.0 | 5616 | 2.5139 | 0.9994 |
| 1.025 | 79.0 | 5688 | 2.4850 | 1.0313 |
| 1.0054 | 80.0 | 5760 | 2.5803 | 0.9814 |
| 1.0218 | 81.0 | 5832 | 2.5696 | 0.9867 |
| 1.0177 | 82.0 | 5904 | 2.6011 | 1.0065 |
| 1.0094 | 83.0 | 5976 | 2.6166 | 0.9855 |
| 1.0202 | 84.0 | 6048 | 2.5557 | 1.0204 |
| 1.0148 | 85.0 | 6120 | 2.6118 | 1.0033 |
| 1.0117 | 86.0 | 6192 | 2.5671 | 1.0120 |
| 1.0195 | 87.0 | 6264 | 2.5443 | 1.0041 |
| 1.0114 | 88.0 | 6336 | 2.5627 | 1.0049 |
| 1.0074 | 89.0 | 6408 | 2.5670 | 1.0255 |
| 0.9883 | 90.0 | 6480 | 2.5338 | 1.0306 |
| 1.0112 | 91.0 | 6552 | 2.5615 | 1.0142 |
| 0.9986 | 92.0 | 6624 | 2.5566 | 1.0415 |
| 0.9939 | 93.0 | 6696 | 2.5728 | 1.0287 |
| 0.9954 | 94.0 | 6768 | 2.5617 | 1.0138 |
| 0.9643 | 95.0 | 6840 | 2.5890 | 1.0145 |
| 0.9892 | 96.0 | 6912 | 2.5918 | 1.0119 |
| 0.983 | 97.0 | 6984 | 2.5862 | 1.0175 |
| 0.988 | 98.0 | 7056 | 2.5873 | 1.0147 |
| 0.9908 | 99.0 | 7128 | 2.5973 | 1.0073 |
| 0.9696 | 100.0 | 7200 | 2.5938 | 1.0156 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | 2022-07-22T00:41:06Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole1
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
albert-xxlarge-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42,640 | 2022-07-22T01:46:42Z | ---
license: mit
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-kde4-en-to-pt_BR
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-pt_BR
metrics:
- name: Bleu
type: bleu
value: 58.31959113813223
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-kde4-en-to-pt_BR
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5150
- Bleu: 58.3196
## Model description
More information needed
## Intended uses & limitations
More information needed
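A minimal translation sketch, assuming the standard M2M100 API; note that M2M100 uses the language code `pt` for Portuguese, and the model id below is hypothetical:
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "m2m100_418M-finetuned-kde4-en-to-pt_BR"  # hypothetical repo id
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en"
inputs = tokenizer("Open the file manager.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("pt"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```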
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | 2022-07-22T02:28:44Z | ---
tags:
- monai
license: apache-2.0
---
# Test bundle |
bert-base-german-dbmdz-cased | [
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,814 | 2022-07-22T03:25:34Z | ---
language: en
thumbnail: http://www.huggingtweets.com/hotwingsuk/1658460403599/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1280474754214957056/GKqk3gAm_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">HotWings</div>
<div style="text-align: center; font-size: 14px;">@hotwingsuk</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from HotWings.
| Data | HotWings |
| --- | --- |
| Tweets downloaded | 2057 |
| Retweets | 69 |
| Short tweets | 258 |
| Tweets kept | 1730 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3opu8h6o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hotwingsuk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bzf76pmf) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bzf76pmf/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hotwingsuk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
bert-large-cased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,316 | 2022-07-22T04:29:53Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt_new2_0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt_new2_0040
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5812
- Validation Loss: 2.4689
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6241 | 2.5129 | 0 |
| 2.6228 | 2.5112 | 1 |
| 2.6216 | 2.5105 | 2 |
| 2.6204 | 2.5101 | 3 |
| 2.6191 | 2.5088 | 4 |
| 2.6180 | 2.5064 | 5 |
| 2.6166 | 2.5045 | 6 |
| 2.6155 | 2.5038 | 7 |
| 2.6143 | 2.5024 | 8 |
| 2.6132 | 2.5009 | 9 |
| 2.6120 | 2.5014 | 10 |
| 2.6108 | 2.4984 | 11 |
| 2.6097 | 2.4983 | 12 |
| 2.6085 | 2.4976 | 13 |
| 2.6073 | 2.4948 | 14 |
| 2.6064 | 2.4945 | 15 |
| 2.6052 | 2.4939 | 16 |
| 2.6039 | 2.4925 | 17 |
| 2.6030 | 2.4912 | 18 |
| 2.6019 | 2.4890 | 19 |
| 2.6007 | 2.4889 | 20 |
| 2.5998 | 2.4872 | 21 |
| 2.5987 | 2.4865 | 22 |
| 2.5977 | 2.4859 | 23 |
| 2.5965 | 2.4844 | 24 |
| 2.5956 | 2.4834 | 25 |
| 2.5944 | 2.4821 | 26 |
| 2.5934 | 2.4805 | 27 |
| 2.5925 | 2.4790 | 28 |
| 2.5914 | 2.4798 | 29 |
| 2.5904 | 2.4777 | 30 |
| 2.5893 | 2.4781 | 31 |
| 2.5883 | 2.4755 | 32 |
| 2.5872 | 2.4763 | 33 |
| 2.5862 | 2.4743 | 34 |
| 2.5851 | 2.4736 | 35 |
| 2.5841 | 2.4732 | 36 |
| 2.5831 | 2.4726 | 37 |
| 2.5822 | 2.4691 | 38 |
| 2.5812 | 2.4689 | 39 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bert-large-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388,769 | 2022-07-22T04:35:55Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8503293209175562
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1354
- F1: 0.8503
## Model description
More information needed
## Intended uses & limitations
More information needed
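A minimal NER sketch with the token-classification pipeline (the model id below is hypothetical):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="xlm-roberta-base-finetuned-panx-de",  # hypothetical repo id
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```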
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 132 | 0.1757 | 0.8055 |
| No log | 2.0 | 264 | 0.1372 | 0.8424 |
| No log | 3.0 | 396 | 0.1354 | 0.8503 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bert-large-uncased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 480,510 | 2022-07-22T04:36:21Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_new2_0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_new2_0020
This model is a fine-tuned version of a local checkpoint (`/content/drive/MyDrive/Colab Notebooks/oscar/trybackup_distilbert/new_backup_0105105`) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9920
- Validation Loss: 0.9688
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0180 | 0.9873 | 0 |
| 1.0163 | 0.9878 | 1 |
| 1.0145 | 0.9856 | 2 |
| 1.0139 | 0.9830 | 3 |
| 1.0122 | 0.9831 | 4 |
| 1.0118 | 0.9830 | 5 |
| 1.0094 | 0.9800 | 6 |
| 1.0075 | 0.9809 | 7 |
| 1.0066 | 0.9784 | 8 |
| 1.0062 | 0.9768 | 9 |
| 1.0032 | 0.9751 | 10 |
| 1.0023 | 0.9764 | 11 |
| 1.0008 | 0.9735 | 12 |
| 0.9994 | 0.9730 | 13 |
| 0.9986 | 0.9761 | 14 |
| 0.9975 | 0.9714 | 15 |
| 0.9953 | 0.9708 | 16 |
| 0.9941 | 0.9683 | 17 |
| 0.9933 | 0.9681 | 18 |
| 0.9920 | 0.9688 | 19 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bert-large-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,058,496 | 2022-07-22T05:15:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9213674244320441
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2197
- Accuracy: 0.921
- F1: 0.9214
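A minimal usage sketch, assuming the checkpoint is published under a Hub repo of the same name (the `<user>` prefix below is a placeholder):
```python
from transformers import pipeline

# "<user>" is a placeholder; point this at wherever the checkpoint is actually hosted.
classifier = pipeline("text-classification",
                      model="<user>/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))
```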
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8255 | 1.0 | 250 | 0.3172 | 0.9055 | 0.9039 |
| 0.2506 | 2.0 | 500 | 0.2197 | 0.921 | 0.9214 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
distilbert-base-cased-distilled-squad | [
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 257,745 | 2022-07-22T05:25:23Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1632
- F1: 0.8505
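Assuming the usual PAN-X token-classification (NER) setup that the model name suggests, a hedged usage sketch looks like this (`<user>` is a placeholder):
```python
from transformers import pipeline

# Placeholder repo id; the model name suggests German/French NER in the PAN-X style.
ner = pipeline("token-classification",
               model="<user>/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```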
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 179 | 0.1842 | 0.8256 |
| No log | 2.0 | 358 | 0.1720 | 0.8395 |
| No log | 3.0 | 537 | 0.1632 | 0.8505 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
distilbert-base-german-cased | [
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"de",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 43,667 | 2022-07-22T05:29:59Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Bio_ClinicalBERT-zero-shot-finetuned-50cad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT-zero-shot-finetuned-50cad
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1475
- Accuracy: 0.5
- F1: 0.6667
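For context, an accuracy of 0.5 together with an F1 of 0.6667 is consistent with a classifier that predicts the positive class for every example of a balanced set (precision 0.5, recall 1.0). A quick check with scikit-learn:
```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical balanced labels scored against a constant positive predictor.
y_true = [0, 1] * 50
y_pred = [1] * 100
print(accuracy_score(y_true, y_pred))  # 0.5
print(f1_score(y_true, y_pred))        # 0.666...
```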
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
distilbert-base-multilingual-cased | [
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,339,633 | 2022-07-22T05:38:49Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: train
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8151120026746907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2880
- F1: 0.8151
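The evaluation split comes from the PAN-X French configuration of XTREME, which can be inspected directly:
```python
from datasets import load_dataset

# Loads the dataset named in the card metadata (PAN-X.fr subset of XTREME).
panx_fr = load_dataset("xtreme", name="PAN-X.fr")
print(panx_fr["train"][0])
```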
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 48 | 0.3642 | 0.7463 |
| No log | 2.0 | 96 | 0.3007 | 0.7975 |
| No log | 3.0 | 144 | 0.2880 | 0.8151 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
distilbert-base-uncased-distilled-squad | [
"pytorch",
"tf",
"tflite",
"coreml",
"safetensors",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 100,097 | 2022-07-22T05:43:41Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Bio_ClinicalBERT-zero-shot-finetuned-50noncad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bio_ClinicalBERT-zero-shot-finetuned-50noncad
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8046
- Accuracy: 0.5
- F1: 0.0
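Note that an F1 of 0.0 alongside 0.5 accuracy implies no true positives at all; on a balanced evaluation set this is consistent with the model predicting the negative class for every example.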
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
distilroberta-base | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"fill-mask",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,342,240 | 2022-07-22T05:50:43Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- F1: 0.8467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 209 | 0.1990 | 0.8088 |
| No log | 2.0 | 418 | 0.1748 | 0.8426 |
| No log | 3.0 | 627 | 0.1748 | 0.8467 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
0xDEADBEA7/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-07-22T11:00:05Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper1_mesum5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper1_mesum5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6401
- Accuracy: 0.8278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.9352 | 0.23 | 100 | 3.8550 | 0.1959 |
| 3.1536 | 0.47 | 200 | 3.1755 | 0.2888 |
| 2.6937 | 0.7 | 300 | 2.6332 | 0.4272 |
| 2.3748 | 0.93 | 400 | 2.2833 | 0.4970 |
| 1.5575 | 1.16 | 500 | 1.8712 | 0.5888 |
| 1.4063 | 1.4 | 600 | 1.6048 | 0.6314 |
| 1.1841 | 1.63 | 700 | 1.4109 | 0.6621 |
| 1.0857 | 1.86 | 800 | 1.1832 | 0.7112 |
| 0.582 | 2.09 | 900 | 1.0371 | 0.7479 |
| 0.5971 | 2.33 | 1000 | 0.9839 | 0.7462 |
| 0.4617 | 2.56 | 1100 | 0.9233 | 0.7657 |
| 0.4621 | 2.79 | 1200 | 0.8417 | 0.7828 |
| 0.2128 | 3.02 | 1300 | 0.7644 | 0.7970 |
| 0.1883 | 3.26 | 1400 | 0.7001 | 0.8183 |
| 0.1501 | 3.49 | 1500 | 0.6826 | 0.8201 |
| 0.1626 | 3.72 | 1600 | 0.6568 | 0.8254 |
| 0.1053 | 3.95 | 1700 | 0.6401 | 0.8278 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ASCCCCCCCC/PMJ | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/666311094256971779/rhb7qkCD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1402771730582622212/gwApDT26_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lucien Greaves & Sean Hannity</div>
<div style="text-align: center; font-size: 14px;">@luciengreaves-seanhannity</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lucien Greaves & Sean Hannity.
| Data | Lucien Greaves | Sean Hannity |
| --- | --- | --- |
| Tweets downloaded | 3197 | 3250 |
| Retweets | 536 | 13 |
| Short tweets | 379 | 60 |
| Tweets kept | 2282 | 3177 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2iwc0kes/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @luciengreaves-seanhannity's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2db4oami) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2db4oami/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/luciengreaves-seanhannity')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AT/distilgpt2-finetuned-wikitext2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-07-23T01:01:07Z | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_3mm_many_negatives_spanpred_attention
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_3mm_many_negatives_spanpred_attention
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2196
- Margin Accuracy: 0.8969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
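For reference, the effective batch size follows from the per-device batch size times the accumulation steps: 4 × 2000 = 8000, which matches the `total_train_batch_size` above.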
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.3149 | 0.16 | 60 | 0.3098 | 0.8608 |
| 0.2754 | 0.32 | 120 | 0.2725 | 0.8733 |
| 0.2619 | 0.48 | 180 | 0.2512 | 0.8872 |
| 0.2378 | 0.64 | 240 | 0.2391 | 0.8925 |
| 0.2451 | 0.8 | 300 | 0.2305 | 0.8943 |
| 0.2357 | 0.96 | 360 | 0.2292 | 0.8949 |
| 0.2335 | 1.12 | 420 | 0.2269 | 0.8952 |
| 0.2403 | 1.28 | 480 | 0.2213 | 0.8957 |
| 0.2302 | 1.44 | 540 | 0.2227 | 0.8963 |
| 0.2353 | 1.6 | 600 | 0.2222 | 0.8961 |
| 0.2271 | 1.76 | 660 | 0.2207 | 0.8964 |
| 0.228 | 1.92 | 720 | 0.2218 | 0.8967 |
| 0.2231 | 2.08 | 780 | 0.2201 | 0.8967 |
| 0.2128 | 2.24 | 840 | 0.2219 | 0.8967 |
| 0.2186 | 2.4 | 900 | 0.2202 | 0.8967 |
| 0.2245 | 2.56 | 960 | 0.2205 | 0.8969 |
| 0.2158 | 2.72 | 1020 | 0.2196 | 0.8969 |
| 0.2106 | 2.88 | 1080 | 0.2192 | 0.8968 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
AbdelrahmanZayed/my-awesome-model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5679
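A hedged usage sketch follows; the `generate questions:` prefix mirrors the common end-to-end QG convention and may differ from this model's actual preprocessing (`<user>` is a placeholder):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Placeholder repo id; the prompt format is assumed, not confirmed by this card.
tokenizer = T5Tokenizer.from_pretrained("<user>/t5-end2end-questions-generation")
model = T5ForConditionalGeneration.from_pretrained("<user>/t5-end2end-questions-generation")

text = "generate questions: Python is a programming language created by Guido van Rossum."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```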
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5866 | 0.34 | 100 | 1.9116 |
| 1.9674 | 0.68 | 200 | 1.7280 |
| 1.8487 | 1.02 | 300 | 1.6650 |
| 1.7429 | 1.36 | 400 | 1.6400 |
| 1.7148 | 1.69 | 500 | 1.6214 |
| 1.695 | 2.03 | 600 | 1.6076 |
| 1.6321 | 2.37 | 700 | 1.5979 |
| 1.6276 | 2.71 | 800 | 1.5910 |
| 1.6171 | 3.05 | 900 | 1.5875 |
| 1.5712 | 3.39 | 1000 | 1.5898 |
| 1.5702 | 3.73 | 1100 | 1.5749 |
| 1.5594 | 4.07 | 1200 | 1.5798 |
| 1.5352 | 4.41 | 1300 | 1.5733 |
| 1.5228 | 4.75 | 1400 | 1.5733 |
| 1.524 | 5.08 | 1500 | 1.5727 |
| 1.4954 | 5.42 | 1600 | 1.5699 |
| 1.4866 | 5.76 | 1700 | 1.5696 |
| 1.5089 | 6.1 | 1800 | 1.5696 |
| 1.4771 | 6.44 | 1900 | 1.5736 |
| 1.4772 | 6.78 | 2000 | 1.5679 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AdapterHub/bert-base-uncased-pf-record | [
"bert",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification",
"adapterhub:rc/record"
] | text-classification | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 547.00 +/- 194.62
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Kuro96 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Kuro96
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
AdapterHub/roberta-base-pf-drop | [
"roberta",
"en",
"dataset:drop",
"arxiv:2104.08247",
"adapter-transformers",
"question-answering"
] | question-answering | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5899772209567198
- name: Recall
type: recall
value: 0.4117647058823529
- name: F1
type: f1
value: 0.4850187265917604
- name: Accuracy
type: accuracy
value: 0.9304392705585502
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3202
- Precision: 0.5900
- Recall: 0.4118
- F1: 0.4850
- Accuracy: 0.9304
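A minimal inference sketch (placeholder repo id); wnut_17 targets emerging and rare entities, so expect noisier spans than on newswire NER:
```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub location of this checkpoint.
ner = pipeline("token-classification",
               model="<user>/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="first")
print(ner("We're flying to Coachella to see Billie Eilish."))
```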
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.3469 | 0.5480 | 0.2814 | 0.3718 | 0.9193 |
| No log | 2.0 | 426 | 0.3135 | 0.5909 | 0.3903 | 0.4701 | 0.9281 |
| 0.1903 | 3.0 | 639 | 0.3202 | 0.5900 | 0.4118 | 0.4850 | 0.9304 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AdapterHub/roberta-base-pf-ud_pos | [
"roberta",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"adapter-transformers",
"token-classification",
"adapterhub:pos/ud_ewt"
] | token-classification | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: deberta-v3-large-finetuned-synthetic-generated-only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-synthetic-generated-only
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0094
- F1: 0.9839
- Precision: 0.9849
- Recall: 0.9828
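As a consistency check, F1 is the harmonic mean of precision and recall: 2 × (0.9849 × 0.9828) / (0.9849 + 0.9828) ≈ 0.984, consistent with the reported 0.9839 given rounding of the inputs.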
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:|
| 0.009 | 1.0 | 10387 | 0.0104 | 0.9722 | 0.9919 | 0.9533 |
| 0.0013 | 2.0 | 20774 | 0.0067 | 0.9825 | 0.9844 | 0.9805 |
| 0.0006 | 3.0 | 31161 | 0.0077 | 0.9843 | 0.9902 | 0.9786 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Adinda/Adinda | [
"license:artistic-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` and `evaluate_agent` are helper functions from the course notebook.
import gym

model = load_from_hub(repo_id="th1s1s1t/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Ahren09/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt_new3_0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt_new3_0040
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5130
- Validation Loss: 2.3972
- Epoch: 39
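Since this is a causal LM trained from scratch, a minimal generation sketch would look as follows (placeholder repo id; output quality reflects the losses above):
```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub location of this checkpoint.
generator = pipeline("text-generation", model="<user>/distilgpt_new3_0040")
print(generator("Once upon a time", max_length=30, num_return_sequences=1))
```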
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5407 | 2.4254 | 0 |
| 2.5399 | 2.4247 | 1 |
| 2.5391 | 2.4238 | 2 |
| 2.5383 | 2.4232 | 3 |
| 2.5375 | 2.4210 | 4 |
| 2.5368 | 2.4210 | 5 |
| 2.5361 | 2.4197 | 6 |
| 2.5353 | 2.4193 | 7 |
| 2.5345 | 2.4191 | 8 |
| 2.5339 | 2.4177 | 9 |
| 2.5332 | 2.4188 | 10 |
| 2.5324 | 2.4160 | 11 |
| 2.5317 | 2.4164 | 12 |
| 2.5309 | 2.4145 | 13 |
| 2.5302 | 2.4153 | 14 |
| 2.5295 | 2.4139 | 15 |
| 2.5288 | 2.4134 | 16 |
| 2.5282 | 2.4123 | 17 |
| 2.5274 | 2.4116 | 18 |
| 2.5267 | 2.4110 | 19 |
| 2.5259 | 2.4106 | 20 |
| 2.5251 | 2.4097 | 21 |
| 2.5244 | 2.4074 | 22 |
| 2.5238 | 2.4078 | 23 |
| 2.5232 | 2.4072 | 24 |
| 2.5223 | 2.4062 | 25 |
| 2.5217 | 2.4054 | 26 |
| 2.5211 | 2.4057 | 27 |
| 2.5204 | 2.4044 | 28 |
| 2.5197 | 2.4026 | 29 |
| 2.5189 | 2.4017 | 30 |
| 2.5182 | 2.4026 | 31 |
| 2.5176 | 2.4012 | 32 |
| 2.5168 | 2.4013 | 33 |
| 2.5161 | 2.3990 | 34 |
| 2.5154 | 2.3999 | 35 |
| 2.5149 | 2.3978 | 36 |
| 2.5142 | 2.3981 | 37 |
| 2.5135 | 2.3981 | 38 |
| 2.5130 | 2.3972 | 39 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aibox/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 769.00 +/- 232.34
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jakka -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jakka
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Aidan8756/stephenKingModel | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-finetuned-t5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-finetuned-t5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
No evaluation results were recorded for this run.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ba",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:apache-2.0",
"model-index",
"has_space"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 64 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: jakka/unitypyramidsrnd
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AimB/konlpy_berttokenizer_helsinki | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: xlnet-base-rte-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.703971119133574
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-rte-finetuned
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6688
- Accuracy: 0.7040
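RTE is a sentence-pair (entailment) task, so inference takes a premise/hypothesis pair; a hedged sketch with a placeholder repo id:
```python
from transformers import pipeline

# Placeholder repo id; RTE inputs are sentence pairs, passed as text/text_pair.
rte = pipeline("text-classification", model="<user>/xlnet-base-rte-finetuned")
print(rte({"text": "A man is playing a guitar.",
           "text_pair": "Someone is making music."}))
```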
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 311 | 0.9695 | 0.6859 |
| 0.315 | 2.0 | 622 | 2.2516 | 0.6498 |
| 0.315 | 3.0 | 933 | 2.0439 | 0.7076 |
| 0.1096 | 4.0 | 1244 | 2.5190 | 0.7040 |
| 0.0368 | 5.0 | 1555 | 2.6688 | 0.7040 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AimB/mT5-en-kr-aihub-netflix | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 937.65 +/- 268.02
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders; check the model repository for the actual names):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id and filename; substitute the real Hub location of this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
AimB/mT5-en-kr-opus | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt_new3_0045
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt_new3_0045
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5095
- Validation Loss: 2.3923
- Epoch: 44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5407 | 2.4254 | 0 |
| 2.5399 | 2.4247 | 1 |
| 2.5391 | 2.4238 | 2 |
| 2.5383 | 2.4232 | 3 |
| 2.5375 | 2.4210 | 4 |
| 2.5368 | 2.4210 | 5 |
| 2.5361 | 2.4197 | 6 |
| 2.5353 | 2.4193 | 7 |
| 2.5345 | 2.4191 | 8 |
| 2.5339 | 2.4177 | 9 |
| 2.5332 | 2.4188 | 10 |
| 2.5324 | 2.4160 | 11 |
| 2.5317 | 2.4164 | 12 |
| 2.5309 | 2.4145 | 13 |
| 2.5302 | 2.4153 | 14 |
| 2.5295 | 2.4139 | 15 |
| 2.5288 | 2.4134 | 16 |
| 2.5282 | 2.4123 | 17 |
| 2.5274 | 2.4116 | 18 |
| 2.5267 | 2.4110 | 19 |
| 2.5259 | 2.4106 | 20 |
| 2.5251 | 2.4097 | 21 |
| 2.5244 | 2.4074 | 22 |
| 2.5238 | 2.4078 | 23 |
| 2.5232 | 2.4072 | 24 |
| 2.5223 | 2.4062 | 25 |
| 2.5217 | 2.4054 | 26 |
| 2.5211 | 2.4057 | 27 |
| 2.5204 | 2.4044 | 28 |
| 2.5197 | 2.4026 | 29 |
| 2.5189 | 2.4017 | 30 |
| 2.5182 | 2.4026 | 31 |
| 2.5176 | 2.4012 | 32 |
| 2.5168 | 2.4013 | 33 |
| 2.5161 | 2.3990 | 34 |
| 2.5154 | 2.3999 | 35 |
| 2.5149 | 2.3978 | 36 |
| 2.5142 | 2.3981 | 37 |
| 2.5135 | 2.3981 | 38 |
| 2.5130 | 2.3972 | 39 |
| 2.5123 | 2.3957 | 40 |
| 2.5116 | 2.3940 | 41 |
| 2.5108 | 2.3933 | 42 |
| 2.5103 | 2.3927 | 43 |
| 2.5095 | 2.3923 | 44 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Akash7897/my-newtokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1062.16 +/- 221.84
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint (placeholder repo id and filename; check the repository files for the real names):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id and filename; substitute the real Hub location of this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Akash7897/test-clm | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-07-24T23:40:16Z | ---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pong-reinforce
results:
- metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Akashpb13/xlsr_hungarian_new | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hu",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt_new3_0060
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt_new3_0060
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5002
- Validation Loss: 2.3821
- Epoch: 59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5407 | 2.4254 | 0 |
| 2.5399 | 2.4247 | 1 |
| 2.5391 | 2.4238 | 2 |
| 2.5383 | 2.4232 | 3 |
| 2.5375 | 2.4210 | 4 |
| 2.5368 | 2.4210 | 5 |
| 2.5361 | 2.4197 | 6 |
| 2.5353 | 2.4193 | 7 |
| 2.5345 | 2.4191 | 8 |
| 2.5339 | 2.4177 | 9 |
| 2.5332 | 2.4188 | 10 |
| 2.5324 | 2.4160 | 11 |
| 2.5317 | 2.4164 | 12 |
| 2.5309 | 2.4145 | 13 |
| 2.5302 | 2.4153 | 14 |
| 2.5295 | 2.4139 | 15 |
| 2.5288 | 2.4134 | 16 |
| 2.5282 | 2.4123 | 17 |
| 2.5274 | 2.4116 | 18 |
| 2.5267 | 2.4110 | 19 |
| 2.5259 | 2.4106 | 20 |
| 2.5251 | 2.4097 | 21 |
| 2.5244 | 2.4074 | 22 |
| 2.5238 | 2.4078 | 23 |
| 2.5232 | 2.4072 | 24 |
| 2.5223 | 2.4062 | 25 |
| 2.5217 | 2.4054 | 26 |
| 2.5211 | 2.4057 | 27 |
| 2.5204 | 2.4044 | 28 |
| 2.5197 | 2.4026 | 29 |
| 2.5189 | 2.4017 | 30 |
| 2.5182 | 2.4026 | 31 |
| 2.5176 | 2.4012 | 32 |
| 2.5168 | 2.4013 | 33 |
| 2.5161 | 2.3990 | 34 |
| 2.5154 | 2.3999 | 35 |
| 2.5149 | 2.3978 | 36 |
| 2.5142 | 2.3981 | 37 |
| 2.5135 | 2.3981 | 38 |
| 2.5130 | 2.3972 | 39 |
| 2.5123 | 2.3957 | 40 |
| 2.5116 | 2.3940 | 41 |
| 2.5108 | 2.3933 | 42 |
| 2.5103 | 2.3927 | 43 |
| 2.5095 | 2.3923 | 44 |
| 2.5090 | 2.3918 | 45 |
| 2.5083 | 2.3914 | 46 |
| 2.5078 | 2.3905 | 47 |
| 2.5070 | 2.3888 | 48 |
| 2.5062 | 2.3894 | 49 |
| 2.5058 | 2.3898 | 50 |
| 2.5051 | 2.3868 | 51 |
| 2.5045 | 2.3873 | 52 |
| 2.5041 | 2.3872 | 53 |
| 2.5035 | 2.3859 | 54 |
| 2.5027 | 2.3850 | 55 |
| 2.5020 | 2.3851 | 56 |
| 2.5016 | 2.3833 | 57 |
| 2.5009 | 2.3816 | 58 |
| 2.5002 | 2.3821 | 59 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Akashpb13/xlsr_kurmanji_kurdish | [
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"kmr",
"ku",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nlp-esg-scoring/bert-base-finetuned-esg-a4s-clean
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-esg-scoring/bert-base-finetuned-esg-a4s-clean
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5224
- Validation Loss: 2.2196
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
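Assuming the fine-tune kept the masked-language-modeling head of `bert-base-uncased` (the card does not say), inference can be sketched as follows; the example sentence is invented for illustration:
```python
from transformers import pipeline

# Repo id taken from this card's name; the MLM head is an assumption.
fill_mask = pipeline("fill-mask", model="nlp-esg-scoring/bert-base-finetuned-esg-a4s-clean")

# Illustrative ESG-style sentence with a single [MASK] token.
for pred in fill_mask("The company plans to reduce its carbon [MASK] by 2030."):
    print(pred["token_str"], round(pred["score"], 3))
```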
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -824, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5170 | 2.3060 | 0 |
| 2.5229 | 2.3220 | 1 |
| 2.5077 | 2.3155 | 2 |
| 2.5059 | 2.3151 | 3 |
| 2.5052 | 2.2596 | 4 |
| 2.5250 | 2.4044 | 5 |
| 2.5120 | 2.2901 | 6 |
| 2.5042 | 2.2847 | 7 |
| 2.4972 | 2.3168 | 8 |
| 2.5224 | 2.2196 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aklily/Lilys | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- finer-139
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bertiny-finetuned-finer-full
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: finer-139
type: finer-139
args: finer-139
metrics:
- name: Precision
type: precision
value: 0.555368475586064
- name: Recall
type: recall
value: 0.5164398410213176
- name: F1
type: f1
value: 0.5351972041937094
- name: Accuracy
type: accuracy
value: 0.988733187308122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertiny-finetuned-finer-full
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on 10% of the finer-139 dataset for 40 epochs, following the paper.
It achieves the following results on the evaluation set:
- Loss: 0.0788
- Precision: 0.5554
- Recall: 0.5164
- F1: 0.5352
- Accuracy: 0.9887
## Model description
More information needed
## Intended uses & limitations
More information needed
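As a usage sketch, numeric-entity tagging in the FiNER-139 style could look like this (the repo id is a placeholder, since the card does not give the Hub path, and the example sentence is invented):
```python
from transformers import pipeline

# Placeholder repo id; point this at the actual Hub path of the checkpoint.
tagger = pipeline("token-classification",
                  model="<your-namespace>/bertiny-finetuned-finer-full",
                  aggregation_strategy="simple")

# Illustrative financial sentence with a numeric entity to tag.
print(tagger("Amortization expense was $2.1 million for the quarter."))
```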
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0852 | 1.0 | 11255 | 0.0929 | 1.0 | 0.0001 | 0.0002 | 0.9843 |
| 0.08 | 2.0 | 22510 | 0.0840 | 0.4626 | 0.0730 | 0.1261 | 0.9851 |
| 0.0759 | 3.0 | 33765 | 0.0750 | 0.5113 | 0.2035 | 0.2912 | 0.9865 |
| 0.0569 | 4.0 | 45020 | 0.0673 | 0.4973 | 0.3281 | 0.3953 | 0.9872 |
| 0.0488 | 5.0 | 56275 | 0.0635 | 0.5289 | 0.3749 | 0.4388 | 0.9878 |
| 0.0422 | 6.0 | 67530 | 0.0606 | 0.5258 | 0.4068 | 0.4587 | 0.9880 |
| 0.0364 | 7.0 | 78785 | 0.0600 | 0.5588 | 0.4186 | 0.4787 | 0.9883 |
| 0.0307 | 8.0 | 90040 | 0.0589 | 0.5223 | 0.4916 | 0.5065 | 0.9883 |
| 0.0284 | 9.0 | 101295 | 0.0595 | 0.5588 | 0.4813 | 0.5171 | 0.9887 |
| 0.0255 | 10.0 | 112550 | 0.0597 | 0.5606 | 0.4944 | 0.5254 | 0.9888 |
| 0.0223 | 11.0 | 123805 | 0.0600 | 0.5533 | 0.4998 | 0.5252 | 0.9888 |
| 0.0228 | 12.0 | 135060 | 0.0608 | 0.5290 | 0.5228 | 0.5259 | 0.9885 |
| 0.0225 | 13.0 | 146315 | 0.0612 | 0.5480 | 0.5111 | 0.5289 | 0.9887 |
| 0.0204 | 14.0 | 157570 | 0.0634 | 0.5646 | 0.5120 | 0.5370 | 0.9890 |
| 0.0176 | 15.0 | 168825 | 0.0639 | 0.5611 | 0.5135 | 0.5363 | 0.9889 |
| 0.0167 | 16.0 | 180080 | 0.0647 | 0.5631 | 0.5120 | 0.5363 | 0.9888 |
| 0.0161 | 17.0 | 191335 | 0.0665 | 0.5607 | 0.5081 | 0.5331 | 0.9889 |
| 0.0145 | 18.0 | 202590 | 0.0673 | 0.5437 | 0.5280 | 0.5357 | 0.9887 |
| 0.0166 | 19.0 | 213845 | 0.0687 | 0.5722 | 0.5008 | 0.5341 | 0.9889 |
| 0.0155 | 20.0 | 225100 | 0.0685 | 0.5325 | 0.5337 | 0.5331 | 0.9885 |
| 0.0142 | 21.0 | 236355 | 0.0705 | 0.5626 | 0.5166 | 0.5386 | 0.9890 |
| 0.0127 | 22.0 | 247610 | 0.0694 | 0.5426 | 0.5358 | 0.5392 | 0.9887 |
| 0.0112 | 23.0 | 258865 | 0.0721 | 0.5591 | 0.5129 | 0.5351 | 0.9888 |
| 0.0123 | 24.0 | 270120 | 0.0733 | 0.5715 | 0.5081 | 0.5380 | 0.9889 |
| 0.0116 | 25.0 | 281375 | 0.0735 | 0.5621 | 0.5123 | 0.5361 | 0.9888 |
| 0.0112 | 26.0 | 292630 | 0.0739 | 0.5634 | 0.5181 | 0.5398 | 0.9889 |
| 0.0108 | 27.0 | 303885 | 0.0753 | 0.5548 | 0.5155 | 0.5344 | 0.9887 |
| 0.0125 | 28.0 | 315140 | 0.0746 | 0.5507 | 0.5221 | 0.5360 | 0.9886 |
| 0.0093 | 29.0 | 326395 | 0.0762 | 0.5602 | 0.5156 | 0.5370 | 0.9888 |
| 0.0094 | 30.0 | 337650 | 0.0762 | 0.5625 | 0.5157 | 0.5381 | 0.9889 |
| 0.0117 | 31.0 | 348905 | 0.0767 | 0.5519 | 0.5195 | 0.5352 | 0.9887 |
| 0.0091 | 32.0 | 360160 | 0.0772 | 0.5501 | 0.5198 | 0.5345 | 0.9887 |
| 0.0109 | 33.0 | 371415 | 0.0775 | 0.5635 | 0.5097 | 0.5353 | 0.9888 |
| 0.0094 | 34.0 | 382670 | 0.0776 | 0.5467 | 0.5216 | 0.5339 | 0.9887 |
| 0.009 | 35.0 | 393925 | 0.0782 | 0.5601 | 0.5139 | 0.5360 | 0.9889 |
| 0.0093 | 36.0 | 405180 | 0.0780 | 0.5568 | 0.5156 | 0.5354 | 0.9888 |
| 0.0087 | 37.0 | 416435 | 0.0783 | 0.5588 | 0.5143 | 0.5356 | 0.9888 |
| 0.009 | 38.0 | 427690 | 0.0785 | 0.5483 | 0.5178 | 0.5326 | 0.9887 |
| 0.0094 | 39.0 | 438945 | 0.0787 | 0.5541 | 0.5154 | 0.5340 | 0.9887 |
| 0.0088 | 40.0 | 450200 | 0.0788 | 0.5554 | 0.5164 | 0.5352 | 0.9887 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AkshaySg/GrammarCorrection | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s377
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
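As a transcription sketch with the HuggingSound tool mentioned above (the Hub path is assumed from the card title and the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

# Hub path assumed from the card title; adjust to the actual repo id.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_de_vp-100k_accent_germany-0_austria-10_s377")

# Remember: input audio must be sampled at 16 kHz.
audio_paths = ["/path/to/clip_1.wav", "/path/to/clip_2.mp3"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```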
|
AkshaySg/LanguageIdentification | [
"multilingual",
"dataset:VoxLingua107",
"LID",
"spoken language recognition",
"license:apache-2.0"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2-agu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2-agu
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1869
## Model description
More information needed
## Intended uses & limitations
More information needed
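A minimal generation sketch (the repo id is a placeholder, since the card does not state the Hub path):
```python
from transformers import pipeline

# Placeholder repo id; point this at the actual Hub path of the checkpoint.
generator = pipeline("text-generation", model="<your-namespace>/distilgpt2-finetuned-wikitext2-agu")

# Sample continuation of a WikiText-style prompt.
print(generator("The history of the encyclopedia", max_new_tokens=40)[0]["generated_text"])
```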
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 3.7357 | 1.0 | 13655 | 3.6781 |
| 3.5721 | 2.0 | 27310 | 3.5302 |
| 3.4961 | 3.0 | 40965 | 3.4658 |
| 3.4406 | 4.0 | 54620 | 3.4242 |
| 3.4043 | 5.0 | 68275 | 3.3943 |
| 3.3789 | 6.0 | 81930 | 3.3726 |
| 3.3576 | 7.0 | 95585 | 3.3538 |
| 3.3389 | 8.0 | 109240 | 3.3389 |
| 3.3151 | 9.0 | 122895 | 3.3270 |
| 3.314 | 5.0 | 136545 | 3.3226 |
| 3.3044 | 6.0 | 163854 | 3.3124 |
| 3.2931 | 7.0 | 191163 | 3.3078 |
| 3.2874 | 8.0 | 218472 | 3.3094 |
| 3.2817 | 9.0 | 245781 | 3.2943 |
| 3.269 | 10.0 | 273090 | 3.2785 |
| 3.2423 | 11.0 | 300399 | 3.2651 |
| 3.2253 | 12.0 | 327708 | 3.2530 |
| 3.2096 | 13.0 | 355017 | 3.2435 |
| 3.1939 | 14.0 | 382326 | 3.2326 |
| 3.1786 | 15.0 | 409635 | 3.2225 |
| 3.1625 | 16.0 | 436944 | 3.2198 |
| 3.1619 | 17.0 | 464253 | 3.2180 |
| 3.1521 | 18.0 | 491562 | 3.2164 |
| 3.1555 | 19.0 | 518871 | 3.2152 |
| 3.1523 | 20.0 | 546180 | 3.2164 |
| 3.1639 | 21.0 | 573489 | 3.2133 |
| 3.1483 | 22.0 | 600798 | 3.2113 |
| 3.1497 | 23.0 | 628107 | 3.2077 |
| 3.1468 | 24.0 | 655416 | 3.2066 |
| 3.1461 | 25.0 | 682725 | 3.2052 |
| 3.1391 | 26.0 | 710034 | 3.2039 |
| 3.1384 | 27.0 | 737343 | 3.2031 |
| 3.135 | 28.0 | 764652 | 3.2020 |
| 3.1262 | 29.0 | 791961 | 3.2015 |
| 3.1357 | 30.0 | 819270 | 3.2019 |
| 3.1372 | 31.0 | 846579 | 3.2003 |
| 3.1346 | 32.0 | 873888 | 3.1988 |
| 3.134 | 33.0 | 901197 | 3.1975 |
| 3.1256 | 34.0 | 928506 | 3.1965 |
| 3.1261 | 35.0 | 955815 | 3.1950 |
| 3.1255 | 36.0 | 983124 | 3.1945 |
| 3.1278 | 37.0 | 1010433 | 3.1940 |
| 3.1186 | 38.0 | 1037742 | 3.1934 |
| 3.1136 | 39.0 | 1065051 | 3.1932 |
| 3.12 | 40.0 | 1092360 | 3.1931 |
| 3.12 | 41.0 | 1119669 | 3.1930 |
| 3.1165 | 42.0 | 1146978 | 3.1914 |
| 3.1166 | 43.0 | 1174287 | 3.1900 |
| 3.1139 | 44.0 | 1201596 | 3.1892 |
| 3.1135 | 45.0 | 1228905 | 3.1885 |
| 3.1077 | 46.0 | 1256214 | 3.1881 |
| 3.1097 | 47.0 | 1283523 | 3.1873 |
| 3.1076 | 48.0 | 1310832 | 3.1872 |
| 3.102 | 49.0 | 1338141 | 3.1870 |
| 3.1086 | 50.0 | 1365450 | 3.1869 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
|
AlbertHSU/BertTEST | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_accent_germany-8_austria-2_s445
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AlbertHSU/ChineseFoodBert | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5489250601752835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8115
- Matthews Correlation: 0.5489
## Model description
More information needed
## Intended uses & limitations
More information needed
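As a usage sketch, CoLA-style acceptability classification with this checkpoint might look like the following (the repo id is a placeholder, since the card does not give the Hub path; the label names depend on the saved config):
```python
from transformers import pipeline

# Placeholder repo id; point this at the actual Hub path of the checkpoint.
classifier = pipeline("text-classification", model="<your-namespace>/distilbert-base-uncased-finetuned-cola")

# CoLA asks whether a sentence is linguistically acceptable.
print(classifier("The book was written by John."))
print(classifier("Book the John wrote by was."))
```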
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5223 | 1.0 | 535 | 0.5400 | 0.4165 |
| 0.349 | 2.0 | 1070 | 0.5125 | 0.4738 |
| 0.2392 | 3.0 | 1605 | 0.5283 | 0.5411 |
| 0.1791 | 4.0 | 2140 | 0.7506 | 0.5301 |
| 0.127 | 5.0 | 2675 | 0.8115 | 0.5489 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ale/Alen | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the Deep RL
# Class notebooks, not part of a pip package (a sketch of `load_from_hub` follows below).
model = load_from_hub(repo_id="nshenk/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
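For completeness, a minimal sketch of what `load_from_hub` might look like, assuming the Q-table bundle is stored as a pickle on the Hub (`hf_hub_download` is the real `huggingface_hub` API; the helper itself is an assumption, not a published function):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model bundle (env_id, qtable, eval settings) from the Hub.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```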
|
Aleksandar/distilbert-srb-ner-setimes-lr | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-0_female-10_s601
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Aleksandar/distilbert-srb-ner-setimes | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-0_female-10_s801
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Aleksandar/distilbert-srb-ner | [
"pytorch",
"distilbert",
"token-classification",
"sr",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-0_female-10_s889
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Aleksandar/electra-srb-ner-setimes-lr | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-10_female-0_s325
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Aleksandar/electra-srb-oscar | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-10_female-0_s75
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Aleksandar1932/distilgpt2-rock | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-2_female-8_s108
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Aleksandar1932/gpt2-hip-hop | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-2_female-8_s211
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Aleksandar1932/gpt2-pop | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-2_female-8_s364
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Aleksandar1932/gpt2-soul | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt_new3_0075
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt_new3_0075
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4912
- Validation Loss: 2.3729
- Epoch: 74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.5407 | 2.4254 | 0 |
| 2.5399 | 2.4247 | 1 |
| 2.5391 | 2.4238 | 2 |
| 2.5383 | 2.4232 | 3 |
| 2.5375 | 2.4210 | 4 |
| 2.5368 | 2.4210 | 5 |
| 2.5361 | 2.4197 | 6 |
| 2.5353 | 2.4193 | 7 |
| 2.5345 | 2.4191 | 8 |
| 2.5339 | 2.4177 | 9 |
| 2.5332 | 2.4188 | 10 |
| 2.5324 | 2.4160 | 11 |
| 2.5317 | 2.4164 | 12 |
| 2.5309 | 2.4145 | 13 |
| 2.5302 | 2.4153 | 14 |
| 2.5295 | 2.4139 | 15 |
| 2.5288 | 2.4134 | 16 |
| 2.5282 | 2.4123 | 17 |
| 2.5274 | 2.4116 | 18 |
| 2.5267 | 2.4110 | 19 |
| 2.5259 | 2.4106 | 20 |
| 2.5251 | 2.4097 | 21 |
| 2.5244 | 2.4074 | 22 |
| 2.5238 | 2.4078 | 23 |
| 2.5232 | 2.4072 | 24 |
| 2.5223 | 2.4062 | 25 |
| 2.5217 | 2.4054 | 26 |
| 2.5211 | 2.4057 | 27 |
| 2.5204 | 2.4044 | 28 |
| 2.5197 | 2.4026 | 29 |
| 2.5189 | 2.4017 | 30 |
| 2.5182 | 2.4026 | 31 |
| 2.5176 | 2.4012 | 32 |
| 2.5168 | 2.4013 | 33 |
| 2.5161 | 2.3990 | 34 |
| 2.5154 | 2.3999 | 35 |
| 2.5149 | 2.3978 | 36 |
| 2.5142 | 2.3981 | 37 |
| 2.5135 | 2.3981 | 38 |
| 2.5130 | 2.3972 | 39 |
| 2.5123 | 2.3957 | 40 |
| 2.5116 | 2.3940 | 41 |
| 2.5108 | 2.3933 | 42 |
| 2.5103 | 2.3927 | 43 |
| 2.5095 | 2.3923 | 44 |
| 2.5090 | 2.3918 | 45 |
| 2.5083 | 2.3914 | 46 |
| 2.5078 | 2.3905 | 47 |
| 2.5070 | 2.3888 | 48 |
| 2.5062 | 2.3894 | 49 |
| 2.5058 | 2.3898 | 50 |
| 2.5051 | 2.3868 | 51 |
| 2.5045 | 2.3873 | 52 |
| 2.5041 | 2.3872 | 53 |
| 2.5035 | 2.3859 | 54 |
| 2.5027 | 2.3850 | 55 |
| 2.5020 | 2.3851 | 56 |
| 2.5016 | 2.3833 | 57 |
| 2.5009 | 2.3816 | 58 |
| 2.5002 | 2.3821 | 59 |
| 2.4995 | 2.3813 | 60 |
| 2.4990 | 2.3803 | 61 |
| 2.4984 | 2.3794 | 62 |
| 2.4977 | 2.3798 | 63 |
| 2.4971 | 2.3779 | 64 |
| 2.4964 | 2.3778 | 65 |
| 2.4959 | 2.3778 | 66 |
| 2.4954 | 2.3787 | 67 |
| 2.4947 | 2.3758 | 68 |
| 2.4942 | 2.3751 | 69 |
| 2.4935 | 2.3739 | 70 |
| 2.4929 | 2.3754 | 71 |
| 2.4923 | 2.3750 | 72 |
| 2.4918 | 2.3730 | 73 |
| 2.4912 | 2.3729 | 74 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AlekseyKorshuk/bert | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- pearsonr
model-index:
- name: bert-base-finetuned-sts
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: sts
metrics:
- name: Pearsonr
type: pearsonr
value: 0.9000373376026184
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-sts
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4582
- Pearsonr: 0.9000
## Model description
More information needed
## Intended uses & limitations
More information needed
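As a scoring sketch (placeholder repo id; this assumes the head is a single-logit regression, which matches the Pearson-r metric reported above):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id; point this at the actual Hub path of the checkpoint.
repo = "<your-namespace>/bert-base-finetuned-sts"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# KLUE STS scores the semantic similarity of a Korean sentence pair.
inputs = tokenizer("오늘 날씨가 맑다.", "오늘은 날씨가 좋다.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```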
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearsonr |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 183 | 0.5329 | 0.8827 |
| No log | 2.0 | 366 | 0.4549 | 0.8937 |
| 0.2316 | 3.0 | 549 | 0.4656 | 0.8959 |
| 0.2316 | 4.0 | 732 | 0.4651 | 0.8990 |
| 0.2316 | 5.0 | 915 | 0.4582 | 0.9000 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
AlekseyKorshuk/horror-scripts | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-8_female-2_s564
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AlekseyKulnevich/Pegasus-HeaderGeneration | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_de_vp-100k_gender_male-8_female-2_s874
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AlekseyKulnevich/Pegasus-QuestionGeneration | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | null | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_accent_us-5_england-5_s203
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AlekseyKulnevich/Pegasus-Summarization | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_accent_us-5_england-5_s878
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AlexDemon/Alex | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_accent_us-0_england-10_s227
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AlexMaclean/sentence-compression-roberta | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_accent_us-0_england-10_s809
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AlexN/xls-r-300m-fr-0 | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- conversational
---
# Rick DialoGPT Model |
AlexN/xls-r-300m-fr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | null | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_accent_us-10_england-0_s44
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AlexN/xls-r-300m-pt | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: 3_taxi_QL
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the Deep RL
# Class notebooks, not part of a pip package.
model = load_from_hub(repo_id="Amiri/3_taxi_QL", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
AlexaRyck/KEITH | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_accent_us-10_england-0_s93
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Alexander-Learn/bert-finetuned-ner-accelerate | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_accent_us-2_england-8_s251
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Alfia/anekdotes | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20split_dataset_version2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20split_dataset_version2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.7621 | 1.0 | 11851 | 2.5216 |
| 2.5466 | 2.0 | 23702 | 2.4157 |
| 2.4505 | 3.0 | 35553 | 2.3592 |
| 2.3798 | 4.0 | 47404 | 2.3028 |
| 2.3178 | 5.0 | 59255 | 2.2768 |
| 2.272 | 6.0 | 71106 | 2.2366 |
| 2.2323 | 7.0 | 82957 | 2.2128 |
| 2.1928 | 8.0 | 94808 | 2.1797 |
| 2.157 | 9.0 | 106659 | 2.1667 |
| 2.1292 | 10.0 | 118510 | 2.1392 |
| 2.0978 | 11.0 | 130361 | 2.1280 |
| 2.0725 | 12.0 | 142212 | 2.1106 |
| 2.052 | 13.0 | 154063 | 2.0944 |
| 2.0268 | 14.0 | 165914 | 2.0804 |
| 2.0121 | 15.0 | 177765 | 2.0698 |
| 1.9997 | 16.0 | 189616 | 2.0626 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|