| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp | gokuls | 2023-01-30T00:44:46Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T23:03:33Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.663195646796933
- name: F1
type: f1
value: 0.16465247530826327
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp
This model is a fine-tuned version of [gokuls/distilbert_sa_pre-training-complete](https://huggingface.co/gokuls/distilbert_sa_pre-training-complete) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5449
- Accuracy: 0.6632
- F1: 0.1647
- Combined Score: 0.4139
## Model description
More information needed
## Intended uses & limitations
More information needed
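As a rough starting point, here is a minimal, untested inference sketch; it assumes the checkpoint loads with the standard DistilBERT sequence-classification classes and that the second label corresponds to "duplicate", which should be verified against the repository's config.
```python
# Hedged sketch: the model id is taken from this card; everything else is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "gokuls/distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qqp"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# QQP is a sentence-pair task: pass both questions together.
inputs = tokenizer(
    "How do I learn Python?",
    "What is the best way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed label order: [not_duplicate, duplicate]
```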
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6004 | 1.0 | 1422 | 0.5643 | 0.6623 | 0.1630 | 0.4126 |
| 0.5393 | 2.0 | 2844 | 0.5498 | 0.6538 | 0.1199 | 0.3869 |
| 0.5157 | 3.0 | 4266 | 0.5449 | 0.6632 | 0.1647 | 0.4139 |
| 0.5007 | 4.0 | 5688 | 0.5512 | 0.6848 | 0.2663 | 0.4755 |
| 0.4914 | 5.0 | 7110 | 0.5501 | 0.6665 | 0.1817 | 0.4241 |
| 0.4847 | 6.0 | 8532 | 0.5475 | 0.6816 | 0.2517 | 0.4667 |
| 0.4803 | 7.0 | 9954 | 0.5478 | 0.6768 | 0.2301 | 0.4535 |
| 0.4768 | 8.0 | 11376 | 0.5488 | 0.6839 | 0.2610 | 0.4724 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_qqp_384 | gokuls | 2023-01-30T00:41:04Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:33:09Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_qqp_384
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.6454365570121197
- name: F1
type: f1
value: 0.07878671036565774
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_qqp_384
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6771
- Accuracy: 0.6454
- F1: 0.0788
- Combined Score: 0.3621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.7984 | 1.0 | 1422 | 0.7600 | 0.6318 | 0.0 | 0.3159 |
| 0.7388 | 2.0 | 2844 | 0.7348 | 0.6318 | 0.0 | 0.3159 |
| 0.7037 | 3.0 | 4266 | 0.7082 | 0.6329 | 0.0056 | 0.3192 |
| 0.6717 | 4.0 | 5688 | 0.7014 | 0.6474 | 0.0908 | 0.3691 |
| 0.6462 | 5.0 | 7110 | 0.6841 | 0.6377 | 0.0339 | 0.3358 |
| 0.6259 | 6.0 | 8532 | 0.6795 | 0.6382 | 0.0364 | 0.3373 |
| 0.6092 | 7.0 | 9954 | 0.6782 | 0.6408 | 0.0513 | 0.3461 |
| 0.5941 | 8.0 | 11376 | 0.6771 | 0.6454 | 0.0788 | 0.3621 |
| 0.5812 | 9.0 | 12798 | 0.6841 | 0.6492 | 0.0991 | 0.3741 |
| 0.5703 | 10.0 | 14220 | 0.6774 | 0.6452 | 0.0776 | 0.3614 |
| 0.5604 | 11.0 | 15642 | 0.6791 | 0.6464 | 0.0831 | 0.3647 |
| 0.5523 | 12.0 | 17064 | 0.6817 | 0.6520 | 0.1143 | 0.3831 |
| 0.5448 | 13.0 | 18486 | 0.6774 | 0.6477 | 0.0905 | 0.3691 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| lckidwell/embeddings | lckidwell | 2023-01-30T00:08:14Z | 0 | 0 | null | ["license:cc-by-3.0", "region:us"] | null | 2023-01-24T22:55:52Z |
---
license: cc-by-3.0
---
# Embeddings
A collection of embeddings I've created.
### Araknope
A stable diffusion embedding trained on a collection of high resolution macro photos of spiders.
**Trigger**: `araknope`
### Beez
A stable diffusion embedding trained on a collection of high resolution macro photos of bees.
**Trigger**: `beez`
### Pmantis
A stable diffusion embedding trained on a collection of high resolution macro photos of praying mantises.
**Trigger**: `pmantis`
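The snippet below is a hedged sketch of loading one of these embeddings with the diffusers textual-inversion loader; the weight file name (`araknope.pt`) and the base checkpoint are assumptions, not details taken from this repository.
```python
# Assumptions: a diffusers version with load_textual_inversion, an "araknope.pt" file
# in this repo, and Stable Diffusion v1-5 as the base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("lckidwell/embeddings", weight_name="araknope.pt", token="araknope")

image = pipe("macro photo of araknope on a dew-covered web").images[0]
image.save("araknope.png")
```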
| gokuls/distilbert_add_GLUE_Experiment_logit_kd_pretrain_qnli | gokuls | 2023-01-29T23:52:15Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T23:14:52Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_add_GLUE_Experiment_logit_kd_pretrain_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6522057477576423
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_pretrain_qnli
This model is a fine-tuned version of [gokuls/distilbert_add_pre-training-complete](https://huggingface.co/gokuls/distilbert_add_pre-training-complete) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3579
- Accuracy: 0.6522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4059 | 1.0 | 410 | 0.4016 | 0.5585 |
| 0.3907 | 2.0 | 820 | 0.3735 | 0.6094 |
| 0.3715 | 3.0 | 1230 | 0.3602 | 0.6480 |
| 0.352 | 4.0 | 1640 | 0.3579 | 0.6522 |
| 0.3314 | 5.0 | 2050 | 0.3626 | 0.6670 |
| 0.309 | 6.0 | 2460 | 0.3650 | 0.6776 |
| 0.2865 | 7.0 | 2870 | 0.3799 | 0.6776 |
| 0.2679 | 8.0 | 3280 | 0.3817 | 0.6903 |
| 0.2525 | 9.0 | 3690 | 0.3942 | 0.6822 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/mobilebert_add_GLUE_Experiment_logit_kd_pretrain_qnli | gokuls | 2023-01-29T23:45:24Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T23:00:02Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_pretrain_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.4946000366099213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_pretrain_qnli
This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-complete](https://huggingface.co/gokuls/mobilebert_add_pre-training-complete) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.4946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0 | 1.0 | 819 | nan | 0.4946 |
| 0.0 | 2.0 | 1638 | nan | 0.4946 |
| 0.0 | 3.0 | 2457 | nan | 0.4946 |
| 0.0 | 4.0 | 3276 | nan | 0.4946 |
| 0.0 | 5.0 | 4095 | nan | 0.4946 |
| 0.0 | 6.0 | 4914 | nan | 0.4946 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_qnli | gokuls | 2023-01-29T23:37:48Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:20:54Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.615595826468973
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_qnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9573
- Accuracy: 0.6156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0984 | 1.0 | 819 | 0.9626 | 0.6220 |
| 1.0171 | 2.0 | 1638 | 0.9573 | 0.6156 |
| 0.9717 | 3.0 | 2457 | 0.9651 | 0.6105 |
| 0.9377 | 4.0 | 3276 | 0.9713 | 0.6024 |
| 0.9132 | 5.0 | 4095 | 0.9812 | 0.5988 |
| 0.89 | 6.0 | 4914 | 1.0108 | 0.5982 |
| 0.8683 | 7.0 | 5733 | 1.0290 | 0.5914 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_add_GLUE_Experiment_logit_kd_pretrain_mrpc | gokuls | 2023-01-29T23:13:35Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T23:09:56Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_add_GLUE_Experiment_logit_kd_pretrain_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.3161764705882353
- name: F1
type: f1
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_pretrain_mrpc
This model is a fine-tuned version of [gokuls/distilbert_add_pre-training-complete](https://huggingface.co/gokuls/distilbert_add_pre-training-complete) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5206
- Accuracy: 0.3162
- F1: 0.0
- Combined Score: 0.1581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.534 | 1.0 | 15 | 0.5287 | 0.3162 | 0.0 | 0.1581 |
| 0.5294 | 2.0 | 30 | 0.5264 | 0.3162 | 0.0 | 0.1581 |
| 0.5212 | 3.0 | 45 | 0.5237 | 0.3162 | 0.0 | 0.1581 |
| 0.5174 | 4.0 | 60 | 0.5206 | 0.3162 | 0.0 | 0.1581 |
| 0.5075 | 5.0 | 75 | 0.5294 | 0.3162 | 0.0 | 0.1581 |
| 0.5017 | 6.0 | 90 | 0.5229 | 0.3162 | 0.0 | 0.1581 |
| 0.4906 | 7.0 | 105 | 0.5413 | 0.3162 | 0.0 | 0.1581 |
| 0.4756 | 8.0 | 120 | 0.5384 | 0.4828 | 0.4738 | 0.4783 |
| 0.4605 | 9.0 | 135 | 0.5587 | 0.3480 | 0.1419 | 0.2450 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| BigSalmon/DefinitionsSynonyms3 | BigSalmon | 2023-01-29T23:08:48Z | 3 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-01-24T17:52:00Z |
given a definition, it will generate the corresponding word. there are several formats you can use:
```
part of speech- verb
definition: grow less in intensity or degree
ex. rather than leave immediately and be drenched, they waited for the storm to ________
synonyms: subside; moderate; decrease
antonyms: increase
word: abate
```
```
[adjective]
skeptical, disbelieving
Her eyes widened _____ly at the shocking news.
word: incredulous
```
```
the money or other means needed for a particular purpose
wordy: wherewithal
```
you can also fill in the blank:
```
due to the relentless pursuit of excellence, the [blank] of the firm is unquestioned [sep] preeminence [answer]
the hotel chain has [blank] its logo in an effort to appeal to younger travelers [sep] redesigned [answer]
```
to generate definitions, too:
```
harass | (v.) to disturb, worry; to trouble by repeated attacks
syn: annoy, pester, bedevil, beleaguer
inhibit | (v.) to restrain or hold back; to hinder or arrest; to prohibit
syn: repress, check, suppress
ant: foster, promote, expedite, facilitate
```
informal definitions:
```
synonyms: digression, extraneous, tangential.
description: when something is irrelevant but mentioned anyways.
***
synonyms: botched, fumbled, was unequal to the task, did not rise to the occasion.
description: did a really bad job at handling something.
```
```
description: did a really bad job at handling something.
synonyms: botched, fumbled, was unequal to the task, did not rise to the occasion.
***
description: when something is irrelevant but mentioned anyways.
synonyms: digression, extraneous, tangential.
```
```
question: michael is an ardent supporter of his presidential candidate.
what does "ardent" mean in the context of the selection?
answer: enthusiastic
```
```
dating back to the early twentieth century, the new york yankees have [blank] over american baseball. [sep] reigned [answer]
```
```
ideas: in modern-day america, it is customary for the commander-in-chief to conduct regular press conferences
related keywords: transparency, check and balance, sacrosanct, public accountability, adversarial, unscripted, direct access, open government, watchdog, healthy democracy, institutional integrity, right to know, direct line of communication, behind closed doors, updates, track progress, instill confidence, reassure, humanize, leadership style, day-to-day, forthcoming, demystify, ask hard questions
```
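A minimal generation sketch using one of the prompt formats above; the sampling parameters are illustrative, not recommendations from the author.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="BigSalmon/DefinitionsSynonyms3")

# Prompt built from the "part of speech / definition" format shown above.
prompt = (
    "part of speech- verb\n"
    "definition: grow less in intensity or degree\n"
)
out = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9, num_return_sequences=1)
print(out[0]["generated_text"])
```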
| gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_qnli_256 | gokuls | 2023-01-29T23:07:02Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:13:23Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_qnli_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6163280248947465
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_qnli_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9616
- Accuracy: 0.6163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1009 | 1.0 | 819 | 0.9865 | 0.5988 |
| 1.019 | 2.0 | 1638 | 0.9616 | 0.6163 |
| 0.9743 | 3.0 | 2457 | 0.9672 | 0.6134 |
| 0.942 | 4.0 | 3276 | 0.9724 | 0.6070 |
| 0.9189 | 5.0 | 4095 | 0.9827 | 0.6017 |
| 0.898 | 6.0 | 4914 | 1.0090 | 0.5958 |
| 0.8798 | 7.0 | 5733 | 1.0317 | 0.5967 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| odysseyofmayhem/coreml-stable-diffusion-2-1-base | odysseyofmayhem | 2023-01-29T23:05:27Z | 0 | 1 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-01-29T17:08:39Z |
---
license: creativeml-openrail-m
---
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qnli | gokuls | 2023-01-29T23:00:51Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:34:19Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8735127219476478
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_pretrain_qnli
This model is a fine-tuned version of [gokuls/distilbert_sa_pre-training-complete](https://huggingface.co/gokuls/distilbert_sa_pre-training-complete) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2515
- Accuracy: 0.8735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.303 | 1.0 | 410 | 0.2569 | 0.8651 |
| 0.2557 | 2.0 | 820 | 0.2515 | 0.8735 |
| 0.2357 | 3.0 | 1230 | 0.2556 | 0.8828 |
| 0.2222 | 4.0 | 1640 | 0.2562 | 0.8847 |
| 0.2146 | 5.0 | 2050 | 0.2547 | 0.8869 |
| 0.2098 | 6.0 | 2460 | 0.2585 | 0.8803 |
| 0.2069 | 7.0 | 2870 | 0.2588 | 0.8849 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| amoselberg/q-FrozenLake-v1-4x4-noSlippery | amoselberg | 2023-01-29T23:00:06Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2023-01-29T22:57:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL course notebook; it downloads and unpickles the saved Q-table dict
model = load_from_hub(repo_id="amoselberg/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
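Continuing the snippet above, a short greedy-rollout sketch; the `"qtable"` key and the 4-tuple `step` return follow the older gym API used in the Deep RL course and are assumptions that may need adjusting for gymnasium.
```python
import numpy as np

state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table (key name assumed)
    state, reward, done, info = env.step(action)
print("final reward:", reward)
```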
| u-phoria/ppo-LunarLander-v2 | u-phoria | 2023-01-29T22:56:00Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-01-28T10:40:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.19 +/- 17.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
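Since the card's code block is still a TODO, here is a hedged loading sketch; the checkpoint file name is an assumption and may differ in the repository.
```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed file name; check the repository's file listing for the actual .zip.
checkpoint = load_from_hub(repo_id="u-phoria/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```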
| paicup09/a2c-AntBulletEnv-v0 | paicup09 | 2023-01-29T22:56:00Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-01-29T22:54:53Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1874.81 +/- 215.83
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
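Likewise a hedged sketch for this PyBullet environment; the checkpoint file name is an assumption, and `pybullet_envs` must be imported so that AntBulletEnv-v0 is registered with gym.
```python
import gym
import pybullet_envs  # noqa: F401  registers the Bullet environments with gym
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed file name; check the repository's file listing for the actual .zip.
checkpoint = load_from_hub(repo_id="paicup09/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
env = gym.make("AntBulletEnv-v0")
```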
| gokuls/mobilebert_add_GLUE_Experiment_logit_kd_pretrain_cola | gokuls | 2023-01-29T22:55:34Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:50:38Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_pretrain_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_pretrain_cola
This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-complete](https://huggingface.co/gokuls/mobilebert_add_pre-training-complete) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.0 | 1.0 | 67 | nan | 0.0 |
| 0.0 | 2.0 | 134 | nan | 0.0 |
| 0.0 | 3.0 | 201 | nan | 0.0 |
| 0.0 | 4.0 | 268 | nan | 0.0 |
| 0.0 | 5.0 | 335 | nan | 0.0 |
| 0.0 | 6.0 | 402 | nan | 0.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_mrpc | gokuls | 2023-01-29T22:38:22Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:30:13Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8578431372549019
- name: F1
type: f1
value: 0.8993055555555555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_pretrain_mrpc
This model is a fine-tuned version of [gokuls/mobilebert_sa_pre-training-complete](https://huggingface.co/gokuls/mobilebert_sa_pre-training-complete) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2291
- Accuracy: 0.8578
- F1: 0.8993
- Combined Score: 0.8786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.536 | 1.0 | 29 | 0.4134 | 0.7279 | 0.8284 | 0.7782 |
| 0.3419 | 2.0 | 58 | 0.3005 | 0.8284 | 0.8801 | 0.8543 |
| 0.2413 | 3.0 | 87 | 0.2707 | 0.8235 | 0.8780 | 0.8507 |
| 0.1852 | 4.0 | 116 | 0.3247 | 0.8284 | 0.8837 | 0.8561 |
| 0.1524 | 5.0 | 145 | 0.2856 | 0.8431 | 0.8900 | 0.8666 |
| 0.1297 | 6.0 | 174 | 0.2999 | 0.8456 | 0.8948 | 0.8702 |
| 0.1219 | 7.0 | 203 | 0.2797 | 0.8529 | 0.8986 | 0.8758 |
| 0.1141 | 8.0 | 232 | 0.2462 | 0.8603 | 0.9005 | 0.8804 |
| 0.1127 | 9.0 | 261 | 0.2557 | 0.8578 | 0.8982 | 0.8780 |
| 0.1091 | 10.0 | 290 | 0.2853 | 0.8480 | 0.8967 | 0.8724 |
| 0.1007 | 11.0 | 319 | 0.2472 | 0.8554 | 0.8981 | 0.8767 |
| 0.0979 | 12.0 | 348 | 0.2431 | 0.8505 | 0.8950 | 0.8727 |
| 0.0954 | 13.0 | 377 | 0.2456 | 0.8578 | 0.9007 | 0.8793 |
| 0.0946 | 14.0 | 406 | 0.2526 | 0.8578 | 0.9017 | 0.8798 |
| 0.0946 | 15.0 | 435 | 0.2291 | 0.8578 | 0.8993 | 0.8786 |
| 0.0938 | 16.0 | 464 | 0.2452 | 0.8603 | 0.9029 | 0.8816 |
| 0.0919 | 17.0 | 493 | 0.2365 | 0.8652 | 0.9050 | 0.8851 |
| 0.0916 | 18.0 | 522 | 0.2363 | 0.8652 | 0.9060 | 0.8856 |
| 0.0915 | 19.0 | 551 | 0.2432 | 0.8652 | 0.9063 | 0.8857 |
| 0.0905 | 20.0 | 580 | 0.2297 | 0.8652 | 0.9057 | 0.8854 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_qnli_192 | gokuls | 2023-01-29T22:35:25Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:12:08Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_qnli_192
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5870400878638111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_qnli_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3931
- Accuracy: 0.5870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4083 | 1.0 | 410 | 0.3946 | 0.5735 |
| 0.3936 | 2.0 | 820 | 0.3931 | 0.5870 |
| 0.3843 | 3.0 | 1230 | 0.3935 | 0.5863 |
| 0.3766 | 4.0 | 1640 | 0.3980 | 0.5858 |
| 0.3699 | 5.0 | 2050 | 0.3996 | 0.5781 |
| 0.3636 | 6.0 | 2460 | 0.4112 | 0.5795 |
| 0.3572 | 7.0 | 2870 | 0.4269 | 0.5667 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_cola_96 | gokuls | 2023-01-29T22:12:13Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:04:44Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_cola_96
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0437601222642778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_cola_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6770
- Matthews Correlation: 0.0438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.915 | 1.0 | 34 | 0.7697 | 0.0 |
| 0.8596 | 2.0 | 68 | 0.7301 | 0.0 |
| 0.826 | 3.0 | 102 | 0.7022 | 0.0 |
| 0.8072 | 4.0 | 136 | 0.6883 | 0.0 |
| 0.7996 | 5.0 | 170 | 0.6846 | 0.0 |
| 0.7958 | 6.0 | 204 | 0.6840 | 0.0 |
| 0.7977 | 7.0 | 238 | 0.6840 | 0.0 |
| 0.7973 | 8.0 | 272 | 0.6840 | 0.0 |
| 0.7954 | 9.0 | 306 | 0.6839 | 0.0 |
| 0.7963 | 10.0 | 340 | 0.6837 | 0.0 |
| 0.795 | 11.0 | 374 | 0.6817 | 0.0 |
| 0.7664 | 12.0 | 408 | 0.6770 | 0.0438 |
| 0.7144 | 13.0 | 442 | 0.6875 | 0.1060 |
| 0.6788 | 14.0 | 476 | 0.6928 | 0.0970 |
| 0.648 | 15.0 | 510 | 0.7124 | 0.1017 |
| 0.6288 | 16.0 | 544 | 0.7151 | 0.1005 |
| 0.613 | 17.0 | 578 | 0.7161 | 0.0812 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_mrpc_192 | gokuls | 2023-01-29T22:10:40Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:07:20Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_mrpc_192
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.3382352941176471
- name: F1
type: f1
value: 0.08163265306122451
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_mrpc_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5189
- Accuracy: 0.3382
- F1: 0.0816
- Combined Score: 0.2099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5329 | 1.0 | 15 | 0.5292 | 0.3162 | 0.0 | 0.1581 |
| 0.5309 | 2.0 | 30 | 0.5294 | 0.3162 | 0.0 | 0.1581 |
| 0.5291 | 3.0 | 45 | 0.5292 | 0.3162 | 0.0 | 0.1581 |
| 0.5286 | 4.0 | 60 | 0.5288 | 0.3162 | 0.0 | 0.1581 |
| 0.5269 | 5.0 | 75 | 0.5277 | 0.3162 | 0.0 | 0.1581 |
| 0.5255 | 6.0 | 90 | 0.5246 | 0.3162 | 0.0 | 0.1581 |
| 0.5157 | 7.0 | 105 | 0.5189 | 0.3382 | 0.0816 | 0.2099 |
| 0.5037 | 8.0 | 120 | 0.5221 | 0.3284 | 0.0486 | 0.1885 |
| 0.4859 | 9.0 | 135 | 0.5277 | 0.4681 | 0.4151 | 0.4416 |
| 0.4683 | 10.0 | 150 | 0.5407 | 0.5882 | 0.6364 | 0.6123 |
| 0.4558 | 11.0 | 165 | 0.5487 | 0.4951 | 0.4772 | 0.4861 |
| 0.4439 | 12.0 | 180 | 0.5611 | 0.5319 | 0.5527 | 0.5423 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_cola_256 | gokuls | 2023-01-29T22:10:17Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:05:45Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_cola_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_cola_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6808
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.8053 | 1.0 | 34 | 0.6856 | 0.0 |
| 0.7977 | 2.0 | 68 | 0.6837 | 0.0 |
| 0.7952 | 3.0 | 102 | 0.6832 | 0.0 |
| 0.7934 | 4.0 | 136 | 0.6852 | 0.0 |
| 0.7703 | 5.0 | 170 | 0.6808 | 0.0 |
| 0.7008 | 6.0 | 204 | 0.6885 | 0.0675 |
| 0.6386 | 7.0 | 238 | 0.7263 | 0.1037 |
| 0.6059 | 8.0 | 272 | 0.7450 | 0.0825 |
| 0.577 | 9.0 | 306 | 0.7559 | 0.1071 |
| 0.5531 | 10.0 | 340 | 0.7794 | 0.1048 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_cola | gokuls | 2023-01-29T22:09:32Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T21:57:50Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_cola
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6788
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.8105 | 1.0 | 67 | 0.6861 | 0.0 |
| 0.7967 | 2.0 | 134 | 0.6866 | 0.0 |
| 0.7956 | 3.0 | 201 | 0.6836 | 0.0 |
| 0.791 | 4.0 | 268 | 0.6788 | 0.0 |
| 0.7253 | 5.0 | 335 | 0.7158 | 0.0821 |
| 0.6322 | 6.0 | 402 | 0.6942 | 0.0650 |
| 0.5874 | 7.0 | 469 | 0.7295 | 0.0803 |
| 0.556 | 8.0 | 536 | 0.7735 | 0.0833 |
| 0.5308 | 9.0 | 603 | 0.7791 | 0.0970 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_mrpc | gokuls | 2023-01-29T22:08:01Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T22:03:43Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.33088235294117646
- name: F1
type: f1
value: 0.068259385665529
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5187
- Accuracy: 0.3309
- F1: 0.0683
- Combined Score: 0.1996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.58 | 1.0 | 15 | 0.5281 | 0.3162 | 0.0 | 0.1581 |
| 0.5287 | 2.0 | 30 | 0.5289 | 0.3162 | 0.0 | 0.1581 |
| 0.521 | 3.0 | 45 | 0.5320 | 0.4681 | 0.4274 | 0.4478 |
| 0.5132 | 4.0 | 60 | 0.5187 | 0.3309 | 0.0683 | 0.1996 |
| 0.4907 | 5.0 | 75 | 0.5305 | 0.3578 | 0.1603 | 0.2590 |
| 0.463 | 6.0 | 90 | 0.5478 | 0.3456 | 0.1130 | 0.2293 |
| 0.4338 | 7.0 | 105 | 0.5700 | 0.4877 | 0.4736 | 0.4806 |
| 0.4246 | 8.0 | 120 | 0.6097 | 0.4902 | 0.4927 | 0.4914 |
| 0.4162 | 9.0 | 135 | 0.5776 | 0.5515 | 0.6030 | 0.5773 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_cola_128 | gokuls | 2023-01-29T22:06:41Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T21:58:28Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_cola_128
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_cola_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6807
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.8228 | 1.0 | 67 | 0.6863 | 0.0 |
| 0.7969 | 2.0 | 134 | 0.6870 | 0.0 |
| 0.7965 | 3.0 | 201 | 0.6834 | 0.0 |
| 0.795 | 4.0 | 268 | 0.6835 | 0.0 |
| 0.7939 | 5.0 | 335 | 0.6807 | 0.0 |
| 0.7451 | 6.0 | 402 | 0.6986 | 0.0672 |
| 0.6395 | 7.0 | 469 | 0.7051 | 0.0875 |
| 0.6042 | 8.0 | 536 | 0.7293 | 0.1094 |
| 0.5756 | 9.0 | 603 | 0.7376 | 0.1173 |
| 0.5558 | 10.0 | 670 | 0.7879 | 0.1123 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_cola | gokuls | 2023-01-29T22:02:50Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T21:57:18Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: -0.020702674026557004
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6741
- Matthews Correlation: -0.0207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.814 | 1.0 | 34 | 0.6851 | 0.0 |
| 0.7923 | 2.0 | 68 | 0.6741 | -0.0207 |
| 0.7521 | 3.0 | 102 | 0.7281 | 0.0931 |
| 0.6713 | 4.0 | 136 | 0.6815 | 0.0434 |
| 0.6052 | 5.0 | 170 | 0.7829 | 0.1374 |
| 0.5654 | 6.0 | 204 | 0.7213 | 0.1027 |
| 0.5296 | 7.0 | 238 | 0.8135 | 0.0702 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| gokuls/distilbert_sa_GLUE_Experiment_logit_kd_cola_384 | gokuls | 2023-01-29T22:02:12Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-01-29T21:59:12Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert_sa_GLUE_Experiment_logit_kd_cola_384
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_cola_384
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6825
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.8051 | 1.0 | 34 | 0.6842 | 0.0 |
| 0.7956 | 2.0 | 68 | 0.6825 | 0.0 |
| 0.7849 | 3.0 | 102 | 0.6839 | 0.0 |
| 0.7297 | 4.0 | 136 | 0.6828 | 0.0729 |
| 0.6561 | 5.0 | 170 | 0.7238 | 0.1064 |
| 0.6039 | 6.0 | 204 | 0.7332 | 0.0768 |
| 0.5683 | 7.0 | 238 | 0.7744 | 0.0881 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| lotek93/a2c-AntBulletEnv-v0 | lotek93 | 2023-01-29T21:38:35Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-01-29T21:37:33Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1399.61 +/- 491.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
| gokuls/mobilebert_sa_pre-training-complete | gokuls | 2023-01-29T21:20:14Z | 29 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mobilebert", "fill-mask", "generated_from_trainer", "dataset:wikitext", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2023-01-21T12:23:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikitext
metrics:
- accuracy
model-index:
- name: mobilebert_sa_pre-training-complete
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: wikitext wikitext-103-raw-v1
type: wikitext
config: wikitext-103-raw-v1
split: validation
args: wikitext-103-raw-v1
metrics:
- name: Accuracy
type: accuracy
value: 0.7161816392520737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_pre-training-complete
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the wikitext wikitext-103-raw-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3239
- Accuracy: 0.7162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.6028 | 1.0 | 7145 | 1.4525 | 0.6935 |
| 1.5524 | 2.0 | 14290 | 1.4375 | 0.6993 |
| 1.5323 | 3.0 | 21435 | 1.4194 | 0.6993 |
| 1.5191 | 4.0 | 28580 | 1.4110 | 0.7027 |
| 1.5025 | 5.0 | 35725 | 1.4168 | 0.7014 |
| 1.4902 | 6.0 | 42870 | 1.3931 | 0.7012 |
| 1.4813 | 7.0 | 50015 | 1.3738 | 0.7057 |
| 1.4751 | 8.0 | 57160 | 1.4237 | 0.6996 |
| 1.4689 | 9.0 | 64305 | 1.3969 | 0.7047 |
| 1.4626 | 10.0 | 71450 | 1.3916 | 0.7068 |
| 1.4566 | 11.0 | 78595 | 1.3686 | 0.7072 |
| 1.451 | 12.0 | 85740 | 1.3811 | 0.7060 |
| 1.4478 | 13.0 | 92885 | 1.3598 | 0.7092 |
| 1.4441 | 14.0 | 100030 | 1.3790 | 0.7054 |
| 1.4379 | 15.0 | 107175 | 1.3794 | 0.7066 |
| 1.4353 | 16.0 | 114320 | 1.3609 | 0.7102 |
| 1.43 | 17.0 | 121465 | 1.3685 | 0.7083 |
| 1.4278 | 18.0 | 128610 | 1.3953 | 0.7036 |
| 1.4219 | 19.0 | 135755 | 1.3756 | 0.7085 |
| 1.4197 | 20.0 | 142900 | 1.3597 | 0.7090 |
| 1.4169 | 21.0 | 150045 | 1.3673 | 0.7061 |
| 1.4146 | 22.0 | 157190 | 1.3753 | 0.7073 |
| 1.4109 | 23.0 | 164335 | 1.3696 | 0.7082 |
| 1.4073 | 24.0 | 171480 | 1.3563 | 0.7092 |
| 1.4054 | 25.0 | 178625 | 1.3712 | 0.7103 |
| 1.402 | 26.0 | 185770 | 1.3528 | 0.7113 |
| 1.4001 | 27.0 | 192915 | 1.3367 | 0.7123 |
| 1.397 | 28.0 | 200060 | 1.3508 | 0.7118 |
| 1.3955 | 29.0 | 207205 | 1.3572 | 0.7117 |
| 1.3937 | 30.0 | 214350 | 1.3566 | 0.7095 |
| 1.3901 | 31.0 | 221495 | 1.3515 | 0.7117 |
| 1.3874 | 32.0 | 228640 | 1.3445 | 0.7118 |
| 1.386 | 33.0 | 235785 | 1.3611 | 0.7097 |
| 1.3833 | 34.0 | 242930 | 1.3502 | 0.7087 |
| 1.3822 | 35.0 | 250075 | 1.3657 | 0.7108 |
| 1.3797 | 36.0 | 257220 | 1.3576 | 0.7108 |
| 1.3793 | 37.0 | 264365 | 1.3472 | 0.7106 |
| 1.3763 | 38.0 | 271510 | 1.3323 | 0.7156 |
| 1.3762 | 39.0 | 278655 | 1.3325 | 0.7145 |
| 1.3748 | 40.0 | 285800 | 1.3243 | 0.7138 |
| 1.3733 | 41.0 | 292945 | 1.3218 | 0.7170 |
| 1.3722 | 41.99 | 300000 | 1.3074 | 0.7186 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
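## Usage
A minimal fill-mask sketch with this checkpoint (the example sentence is illustrative):
```python
from transformers import pipeline

# Masked-language-model inference with this checkpoint
unmasker = pipeline("fill-mask", model="gokuls/mobilebert_sa_pre-training-complete")
print(unmasker("Paris is the [MASK] of France."))
```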
|
cfisicaro/ppo-LunarLander-v2
|
cfisicaro
| 2023-01-29T21:15:47Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T21:15:23Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.90 +/- 12.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
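Until the author fills in the snippet above, a minimal loading and evaluation sketch could look like this (the checkpoint filename is an assumption — check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption - check the repository's file list
checkpoint = load_from_hub("cfisicaro/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```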
|
slomek/dqn-SpaceInvadersNoFrameskip-v4
|
slomek
| 2023-01-29T20:53:13Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-28T08:35:05Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 854.00 +/- 253.18
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga slomek -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga slomek -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga slomek
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
jchhabra/distilbert-base-uncased-finetuned-imdb
|
jchhabra
| 2023-01-29T20:29:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-29T20:20:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
khaled5321/a2c-AntBulletEnv-v0
|
khaled5321
| 2023-01-29T20:27:15Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T20:26:08Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1222.82 +/- 161.79
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
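Until the author fills in the snippet above, loading the checkpoint follows the usual `huggingface_sb3` pattern (filename assumed; `VecNormalize` statistics, if any, must be loaded separately):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption - check the repository's file list
model = A2C.load(load_from_hub("khaled5321/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip"))
```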
|
CLARA-MeD/mt5-simplification-spanish
|
CLARA-MeD
| 2023-01-29T19:26:16Z | 12 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"simplification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-11T16:21:45Z |
---
license: cc-by-nc-sa-4.0
tags:
- simplification
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-simplification-spanish-clara-med
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-simplification-spanish-clara-med
This model is a fine-tuned version of [oskrmiguel/mt5-simplification-spanish](https://huggingface.co/oskrmiguel/mt5-simplification-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9610
- Rouge1: 33.7922
- Rouge2: 19.5758
- Rougel: 31.3737
- Rougelsum: 31.3428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 190 | 2.6876 | 32.236 | 18.2352 | 29.7852 | 29.7539 |
| No log | 2.0 | 380 | 2.4617 | 32.8521 | 18.9712 | 30.4958 | 30.4635 |
| 3.3018 | 3.0 | 570 | 2.3487 | 33.2554 | 19.3441 | 30.9036 | 30.8525 |
| 3.3018 | 4.0 | 760 | 2.2711 | 33.0105 | 19.01 | 30.6851 | 30.5767 |
| 2.7431 | 5.0 | 950 | 2.2254 | 33.1301 | 18.9618 | 30.6744 | 30.6284 |
| 2.7431 | 6.0 | 1140 | 2.1847 | 33.3701 | 19.1884 | 30.9138 | 30.8611 |
| 2.7431 | 7.0 | 1330 | 2.1443 | 33.3158 | 19.101 | 30.8317 | 30.7747 |
| 2.5154 | 8.0 | 1520 | 2.1072 | 33.1638 | 19.0139 | 30.7295 | 30.7162 |
| 2.5154 | 9.0 | 1710 | 2.0989 | 33.4925 | 19.2107 | 31.0253 | 30.9908 |
| 2.3763 | 10.0 | 1900 | 2.0709 | 33.3007 | 18.9519 | 30.847 | 30.8018 |
| 2.3763 | 11.0 | 2090 | 2.0631 | 33.4689 | 19.1995 | 31.0712 | 31.0327 |
| 2.3763 | 12.0 | 2280 | 2.0418 | 33.2536 | 19.027 | 30.898 | 30.8695 |
| 2.2811 | 13.0 | 2470 | 2.0345 | 33.5097 | 19.2219 | 31.1057 | 31.0683 |
| 2.2811 | 14.0 | 2660 | 2.0185 | 33.3544 | 19.1241 | 30.913 | 30.8873 |
| 2.2173 | 15.0 | 2850 | 2.0138 | 33.3856 | 19.2065 | 31.0173 | 30.9447 |
| 2.2173 | 16.0 | 3040 | 2.0019 | 33.4035 | 19.1803 | 31.0154 | 30.981 |
| 2.2173 | 17.0 | 3230 | 1.9977 | 33.4059 | 19.3078 | 31.1196 | 31.0692 |
| 2.1612 | 18.0 | 3420 | 1.9883 | 33.5097 | 19.3637 | 31.0966 | 31.0554 |
| 2.1612 | 19.0 | 3610 | 1.9828 | 33.4965 | 19.2754 | 31.1267 | 31.1021 |
| 2.1115 | 20.0 | 3800 | 1.9834 | 33.7514 | 19.5325 | 31.2833 | 31.2418 |
| 2.1115 | 21.0 | 3990 | 1.9754 | 33.6193 | 19.429 | 31.2721 | 31.2267 |
| 2.1115 | 22.0 | 4180 | 1.9716 | 33.5212 | 19.3637 | 31.1326 | 31.1162 |
| 2.0824 | 23.0 | 4370 | 1.9667 | 33.5156 | 19.3223 | 31.1023 | 31.0709 |
| 2.0824 | 24.0 | 4560 | 1.9735 | 33.6089 | 19.3842 | 31.1539 | 31.1419 |
| 2.0657 | 25.0 | 4750 | 1.9674 | 33.6317 | 19.4044 | 31.2361 | 31.2222 |
| 2.0657 | 26.0 | 4940 | 1.9617 | 33.745 | 19.5099 | 31.3061 | 31.2643 |
| 2.0657 | 27.0 | 5130 | 1.9613 | 33.7798 | 19.5496 | 31.3761 | 31.3356 |
| 2.0511 | 28.0 | 5320 | 1.9635 | 33.8568 | 19.594 | 31.4454 | 31.4141 |
| 2.0511 | 29.0 | 5510 | 1.9609 | 33.805 | 19.5962 | 31.393 | 31.3493 |
| 2.0377 | 30.0 | 5700 | 1.9610 | 33.7922 | 19.5758 | 31.3737 | 31.3428 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.8.0
- Tokenizers 0.12.1
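## Usage
A minimal inference sketch with the fine-tuned checkpoint (the input sentence and generation settings are illustrative assumptions):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "CLARA-MeD/mt5-simplification-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Spanish sentence to be simplified
text = "La hipertensión arterial es una patología crónica que incrementa el riesgo cardiovascular."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```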
|
rvargas93/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
|
rvargas93
| 2023-01-29T19:17:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"es",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T19:17:48Z |
---
language: es
thumbnail: https://i.imgur.com/jgBdimh.png
license: apache-2.0
duplicated_from: mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es
---
# BETO (Spanish BERT) + Spanish SQuAD2.0 + distillation using 'bert-base-multilingual-cased' as teacher
This model is a version of [BETO](https://github.com/dccuchile/beto) fine-tuned on [SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve) and **distilled** for **Q&A**.
Distillation makes the model **smaller, faster, cheaper and lighter** than [bert-base-spanish-wwm-cased-finetuned-spa-squad2-es](https://github.com/huggingface/transformers/blob/master/model_cards/mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es/README.md)
This model was fine-tuned on the same dataset, but using **distillation** during the process as mentioned above (and one more training epoch).
The **teacher model** for the distillation was `bert-base-multilingual-cased`. It is the same teacher used for `distilbert-base-multilingual-cased` AKA [**DistilmBERT**](https://github.com/huggingface/transformers/tree/master/examples/distillation) (on average is twice as fast as **mBERT-base**).
## Details of the downstream task (Q&A) - Dataset
<details>
[SQuAD-es-v2.0](https://github.com/ccasimiro88/TranslateAlignRetrieve)
| Dataset | # Q&A |
| ----------------------- | ----- |
| SQuAD2.0 Train | 130 K |
| SQuAD2.0-es-v2.0 | 111 K |
| SQuAD2.0 Dev | 12 K |
| SQuAD-es-v2.0-small Dev | 69 K |
</details>
## Model training
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
!export SQUAD_DIR=/path/to/squad-v2_spanish \
&& python transformers/examples/distillation/run_squad_w_distillation.py \
--model_type bert \
--model_name_or_path dccuchile/bert-base-spanish-wwm-cased \
--teacher_type bert \
--teacher_name_or_path bert-base-multilingual-cased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.json \
--predict_file $SQUAD_DIR/dev-v2.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 5.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/model_output \
--save_steps 5000 \
--threads 4 \
--version_2_with_negative
```
## Results:
TBA
### Model in action
Fast usage with **pipelines**:
```python
from transformers import *
# Important: for now the QA pipeline is not compatible with the fast tokenizer, but they are working on it. So pass {"use_fast": False} to the tokenizer, as in the following example:
nlp = pipeline(
'question-answering',
model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
tokenizer=(
'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
{"use_fast": False}
)
)
nlp(
{
'question': '¿Para qué lenguaje está trabajando?',
'context': 'Manuel Romero está colaborando activamente con huggingface/transformers ' +
'para traer el poder de las últimas técnicas de procesamiento de lenguaje natural al idioma español'
}
)
# Output: {'answer': 'español', 'end': 169, 'score': 0.67530957344621, 'start': 163}
```
Play with this model and `pipelines` in a Colab:
<a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Using_Spanish_BERT_fine_tuned_for_Q%26A_pipelines.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>
<details>
1. Set the context and ask some questions:

2. Run predictions:

</details>
More about `Huggingface pipelines`? Check this Colab out:
<a href="https://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Huggingface_pipelines_demo.ipynb" target="_parent"><img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667" alt="Open In Colab" data-canonical-src="https://colab.research.google.com/assets/colab-badge.svg"></a>
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
erkam/sd-clevr-scene-graph
|
erkam
| 2023-01-29T19:12:58Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-01-27T20:17:49Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/erkam/sd-clevr-scene-graph
These are LoRA adaptation weights for https://huggingface.co/erkam/sd-clevr-scene-graph. The weights were fine-tuned on the erkam/clevr-with-depth dataset. You can find some example images below.




|
huggingtweets/mobytism
|
huggingtweets
| 2023-01-29T19:03:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-12-01T11:30:24Z |
---
language: en
thumbnail: http://www.huggingtweets.com/mobytism/1675018962032/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1617195988317360129/c_KkReqH_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lydia</div>
<div style="text-align: center; font-size: 14px;">@mobytism</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lydia.
| Data | lydia |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 106 |
| Short tweets | 619 |
| Tweets kept | 2510 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/hxkf62u5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mobytism's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8apnmb37) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8apnmb37/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mobytism')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
owaiskha9654/Yolov7_Custom_Object_Detection
|
owaiskha9654
| 2023-01-29T19:02:03Z | 0 | 2 | null |
[
"V20",
"region:us"
] | null | 2022-08-13T23:50:29Z |
---
tags:
- V20
metrics:
- mAP_0.5:0.95
- mAP_0.5
---
# Custom Training with YOLOv7 🔥
## Some Important links
- [Model Inference🤖](https://huggingface.co/spaces/owaiskha9654/Custom_Yolov7)
- [**🚀Training Yolov7 on Kaggle**](https://www.kaggle.com/code/owaiskhan9654/training-yolov7-on-kaggle-on-custom-dataset)
- [Weight and Biases 🐝](https://wandb.ai/owaiskhan9515/YOLOR)
- [HuggingFace 🤗 Model Repo](https://huggingface.co/owaiskha9654/Yolov7_Custom_Object_Detection)
## Contact Information
- **Name** - Owais Ahmad
- **Phone** - +91-9515884381
- **Email** - [email protected]
- **Portfolio** - https://owaiskhan9654.github.io/
# Objective
## To showcase custom object detection on the given dataset by training and running inference with the newly launched YOLOv7.
# Data Acquisition
The goal of this task is to train a model that
can localize and classify each instance of **Person** and **Car** as accurately as possible.
- [Link to the Downloadable Dataset](https://www.kaggle.com/datasets/owaiskhan9654/car-person-v2-roboflow)
```python
from IPython.display import Markdown, display
display(Markdown(filename="../input/Car-Person-v2-Roboflow/README.roboflow.txt"))  # filename= renders the file's contents instead of the literal path string
```
# Custom Training with YOLOv7 🔥
In this notebook, I have processed the images with Roboflow, because the COCO-formatted dataset had images of different dimensions and was not split into the required train/validation/test splits.
To train a custom YOLOv7 model we need to recognize the objects in the dataset. To do so I have taken the following steps:
* Export the dataset to YOLOv7
* Train YOLOv7 to recognize the objects in our dataset
* Evaluate our YOLOv7 model's performance
* Run test inference to view performance of YOLOv7 model at work
# 📦 [YOLOv7](https://github.com/WongKinYiu/yolov7)
<div align=left><img src="https://raw.githubusercontent.com/WongKinYiu/yolov7/main/figure/performance.png" width=800>
**Image Credit** - [WongKinYiu](https://github.com/WongKinYiu/yolov7)
</div>
# Step 1: Install Requirements
```python
!git clone https://github.com/WongKinYiu/yolov7 # Downloading YOLOv7 repository and installing requirements
%cd yolov7
!pip install -qr requirements.txt
!pip install -q roboflow
```
# **Downloading YOLOV7 starting checkpoint**
```python
!wget "https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt"
```
```python
import os
import glob
import wandb
import torch
from roboflow import Roboflow
from kaggle_secrets import UserSecretsClient
from IPython.display import Image, clear_output, display # to display images
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
```
<img src="https://camo.githubusercontent.com/dd842f7b0be57140e68b2ab9cb007992acd131c48284eaf6b1aca758bfea358b/68747470733a2f2f692e696d6775722e636f6d2f52557469567a482e706e67">
> I will be integrating W&B for visualizations and logging artifacts and comparisons of different models!
>
> [YOLOv7-Car-Person-Custom](https://wandb.ai/owaiskhan9515/YOLOR)
```python
try:
user_secrets = UserSecretsClient()
wandb_api_key = user_secrets.get_secret("wandb_api")
wandb.login(key=wandb_api_key)
anonymous = None
except:
wandb.login(anonymous='must')
print('To use your W&B account,\nGo to Add-ons -> Secrets and provide your W&B access token. Use the Label name as WANDB. \nGet your W&B access token from here: https://wandb.ai/authorize')
wandb.init(project="YOLOv7",name=f"7. YOLOv7-Car-Person-Custom-Run-7")
```
# Step 2: Assemble Our Dataset

In order to train our custom model, we need to assemble a dataset of representative images with bounding box annotations around the objects that we want to detect. And we need our dataset to be in YOLOv7 format.
In Roboflow, we can choose between two paths:
* Convert an existing COCO dataset to YOLOv7 format. Roboflow supports over [30 object detection formats](https://roboflow.com/formats) for conversion.
* Upload only the raw images and annotate them in Roboflow with [Roboflow Annotate](https://docs.roboflow.com/annotate).
# Version v7 (Jan 30, 2023) looks like this.

### Since paid credits are required to train the model on Roboflow, I have used Kaggle's free resources to train it here
### Note: you can import data from other sources. Just remember to keep it in the YOLOv7 PyTorch format shown below.

```python
user_secrets = UserSecretsClient()
roboflow_api_key = user_secrets.get_secret("roboflow_api")
```
```python
rf = Roboflow(api_key=roboflow_api_key)
project = rf.workspace("owais-ahmad").project("custom-yolov7-on-kaggle-on-custom-dataset-rakiq")
dataset = project.version(2).download("yolov7")
```
# Step 3: Training Custom pretrained YOLOv7 model
Here, I am able to pass a number of arguments:
- **img:** define input image size
- **batch:** determine batch size
- **epochs:** define the number of training epochs. (Note: often, 3000+ epochs are common here, but since I am using free compute I will only be setting it to 30!)
- **data:** Our dataset location is saved in the `./yolov7/Custom-Yolov7-on-Kaggle-on-Custom-Dataset-2` folder.
- **weights:** specify a path to weights to start transfer learning from. Here I have chosen a generic COCO pretrained checkpoint.
- **cache:** caching images for faster training
```python
!python train.py --batch 16 --cfg cfg/training/yolov7.yaml --epochs 30 --data {dataset.location}/data.yaml --weights 'yolov7.pt' --device 0
```
# Run Inference With Trained Weights
Testing inference with a pretrained checkpoint on contents of `./Custom-Yolov7-on-Kaggle-on-Custom-Dataset-2/test/images` folder downloaded from Roboflow.
```python
!python detect.py --weights runs/train/exp/weights/best.pt --img 416 --conf 0.75 --source ./Custom-Yolov7-on-Kaggle-on-Custom-Dataset-2/test/images
```
# Display inference on ALL test images
```python
for images in glob.glob('runs/detect/exp/*.jpg')[0:10]:
display(Image(filename=images))
```
```python
model = torch.load('runs/train/exp/weights/best.pt')
```
# Conclusion and Next Steps
Now this trained custom YOLOv7 model can be used to recognize **Person** and **Car** instances in any given images.
To improve the model's performance, I might iterate further on dataset coverage, proper annotations, and image quality. The original authors of **YOLOv7** provide this guide for [model performance improvement](https://github.com/WongKinYiu/yolov7).
The model can be deployed to an application by [exporting it to deployment destinations](https://github.com/WongKinYiu/yolov7/issues).
Once the model is in production, I will continually iterate on and improve the dataset and model via [active learning](https://blog.roboflow.com/what-is-active-learning/).
|
eldraco/q-FrozenLake-v1-8x8-NoSlippery
|
eldraco
| 2023-01-29T18:52:16Z | 0 | 0 | null |
[
"FrozenLake8x8-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T18:52:12Z |
---
tags:
- FrozenLake8x8-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-NoSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake8x8-v1-8x8-no_slippery
type: FrozenLake8x8-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake8x8-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake8x8-v1**.
## Usage
```python
model = load_from_hub(repo_id="eldraco/q-FrozenLake-v1-8x8-NoSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
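Note that `load_from_hub` and `gym` are not imported above; `load_from_hub` is usually defined by hand in the course notebooks rather than taken from a package. A minimal sketch of that helper, assuming the pickle holds a dict with keys such as `qtable` and `env_id` (as the snippet above implies):
```python
import pickle

import gym  # needed for gym.make in the snippet above
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```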
|
LarryAIDraw/stuffycocoa7thheaven_10
|
LarryAIDraw
| 2023-01-29T18:25:25Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-28T16:58:13Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5474/stuffycocoa7thheavenmix
|
antoooooine/dqn-SpaceInvadersNoFrameskip-v4
|
antoooooine
| 2023-01-29T18:21:04Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T18:20:27Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 529.50 +/- 167.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga antoooooine -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga antoooooine -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga antoooooine
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
BeardedJohn/bert-finetuned-ner-per-v3
|
BeardedJohn
| 2023-01-29T18:20:57Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-27T16:32:51Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-finetuned-ner-per-v3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-per-v3
This model is a fine-tuned version of [BeardedJohn/bert-finetuned-ner-per-v3](https://huggingface.co/BeardedJohn/bert-finetuned-ner-per-v3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1787
- Validation Loss: 0.3198
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1875, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3935 | 0.3222 | 0 |
| 0.2585 | 0.3025 | 1 |
| 0.1787 | 0.3198 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Short-Answer-Feedback/mbart-score-finetuned-saf-legal-domain
|
Short-Answer-Feedback
| 2023-01-29T18:02:32Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"de",
"dataset:Short-Answer-Feedback/saf_legal_domain_german",
"arxiv:2001.08210",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-29T14:32:10Z |
---
language: de
datasets:
- Short-Answer-Feedback/saf_legal_domain_german
tags:
- generated_from_trainer
widget:
- text: "Antwort: Wird sich nicht an die Auflagen gehalten (unzureichende Eigenbemühung), droht eine Sperrzeit von 1-2 Wochen. Dadurch wird für die genannte zeit keine Leistung gezahlt, die Anspruchsdauer vermindert sich insgesamt. Bei wichtigen Gründen wird die Sperrzeit nicht verordnet. Lösung: Merkblatt 1 für Arbeitslose, S. 22: Erbringen Sie die Pflichten im Zusammenhang mit den Eigenbemühungen nicht, nicht rechtzeitig oder nicht vollständig, tritt eine Sperrzeit (0,75 p) ein. Merkblatt 1 für Arbeitslose, S. 55: Die Dauer einer Sperrzeit bei unzureichenden Eigenbemühungen beträgt zwei Wochen. (0,25 p). Frage: Mit welcher Folge und welcher Dauer müssen Sie rechnen, wenn Sie Ihre notwendigen Eigenbemühungen nicht rechtzeitig oder nicht vollständig erfüllen?"
---
# mbart-score-finetuned-saf-legal-domain
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the [saf_legal_domain_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_legal_domain_german) dataset for Short Answer Feedback (SAF).
## Model description
This model was built on top of [mBART](https://arxiv.org/abs/2001.08210), which is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages.
It expects inputs in the following format:
```
Antwort: [answer] Lösung: [reference_answer] Frage: [question]
```
In the example above, `[answer]`, `[reference_answer]` and `[question]` should be replaced by the provided answer, the reference answer and the question to which they refer, respectively.
The outputs are formatted as follows:
```
[score] Feedback: [feedback]
```
Hence, `[score]` will be a numeric value between 0 and 1, while `[feedback]` will be the textual feedback generated by the model according to the given answer.
## Intended uses & limitations
This model is intended to be used for Short Answer Feedback generation in the domain of the German social law. Thus, it is not expected to have particularly good performance on sets of questions and answers out of this scope.
It is important to acknowledge that the model underperforms when a question that was not seen during training is given as input for inference. In particular, it tends to classify most answers as being correct and does not provide relevant feedback in such cases. Nevertheless, this limitation could be partially overcome by extending the dataset with the desired question (and associated answers) and fine-tuning it for a few epochs on the new data.
## Training and evaluation data
As mentioned previously, the model was trained on the [saf_legal_domain_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_legal_domain_german) dataset, which is divided into the following splits.
| Split | Number of examples |
| --------------------- | ------------------ |
| train | 1596 |
| validation | 400 |
| test_unseen_answers | 221 |
| test_unseen_questions | 275 |
Evaluation was performed on the `test_unseen_answers` and `test_unseen_questions` splits.
## Training procedure
The [Trainer API](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainer) was used to fine-tune the model. The code utilized for pre-processing and training was mostly adapted from the [summarization script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) made available by HuggingFace.
Training was completed in a little over 1 hour on a GPU on Google Colab.
### Training hyperparameters
The following hyperparameters were used during training:
- num_epochs: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- learning_rate: 6e-05
- lr_scheduler_type: linear
- train_batch_size: 1
- gradient_accumulation_steps: 4
- eval_batch_size: 4
- mixed_precision_training: Native AMP
- seed: 42
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
## Evaluation results
The generated feedback was evaluated through means of the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu), [ROUGE-2](https://huggingface.co/spaces/evaluate-metric/rouge), [METEOR](https://huggingface.co/spaces/evaluate-metric/meteor), [BERTScore](https://huggingface.co/spaces/evaluate-metric/bertscore) metrics from HuggingFace, while the [Root Mean Squared Error](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error) loss from scikit-learn was used for evaluation of the predicted scores in relation to the golden label scores.
The following results were achieved.
| Split | SacreBLEU | ROUGE-2 | METEOR | BERTScore | RMSE |
| --------------------- | :-------: | :-----: | :----: | :-------: | :---: |
| test_unseen_answers | 39.4 | 42.3 | 54.3 | 52.6 | 0.190 |
| test_unseen_questions | 2.8 | 5.0 | 17.9 | 10.7 | 0.317 |
The script used to compute these metrics and perform evaluation can be found in the `evaluation.py` file in this repository.
## Usage
The example below shows how the model can be applied to generate feedback to a given answer.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('Short-Answer-Feedback/mbart-score-finetuned-saf-legal-domain')
tokenizer = AutoTokenizer.from_pretrained('Short-Answer-Feedback/mbart-score-finetuned-saf-legal-domain')
example_input = 'Antwort: Wird sich nicht an die Auflagen gehalten (unzureichende Eigenbemühung), droht eine Sperrzeit von 1-2 Wochen. Dadurch wird für die genannte zeit keine Leistung gezahlt, die Anspruchsdauer vermindert sich insgesamt. Bei wichtigen Gründen wird die Sperrzeit nicht verordnet. Lösung: Merkblatt 1 für Arbeitslose, S. 22: Erbringen Sie die Pflichten im Zusammenhang mit den Eigenbemühungen nicht, nicht rechtzeitig oder nicht vollständig, tritt eine Sperrzeit (0,75 p) ein. Merkblatt 1 für Arbeitslose, S. 55: Die Dauer einer Sperrzeit bei unzureichenden Eigenbemühungen beträgt zwei Wochen. (0,25 p). Frage: Mit welcher Folge und welcher Dauer müssen Sie rechnen, wenn Sie Ihre notwendigen Eigenbemühungen nicht rechtzeitig oder nicht vollständig erfüllen?'
inputs = tokenizer(example_input, max_length=256, padding='max_length', truncation=True, return_tensors='pt')
generated_tokens = model.generate(
inputs['input_ids'],
attention_mask=inputs['attention_mask'],
max_length=128
)
output = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
```
The output produced by the model then looks as follows:
```
0.75 Feedback: Es ist richtig, dass Sie mit einer Sperrzeit rechnen müssen, in der Sie keine Leistung bekommen. Die gesetzlich vorgesehene Sperrzeit bei unzureichenden Eigenbemühungen beträgt jedoch zwei Wochen.
```
|
eldraco/q-FrozenLake-v1-4x4-Slippery
|
eldraco
| 2023-01-29T17:57:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T17:53:24Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.73 +/- 0.44
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="eldraco/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
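Continuing from the snippet above, the Q-table can then be rolled out greedily; a minimal sketch (the `qtable` key is an assumption about the pickled dict, and the classic `gym` step API is assumed):
```python
import numpy as np

qtable = np.array(model["qtable"])  # "qtable" key is an assumption about the pickled dict

state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for the current state
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```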
|
lora-library/tekakutli-dinosaurs
|
lora-library
| 2023-01-29T17:20:52Z | 0 | 2 | null |
[
"stable-diffusion",
"lora",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-01-29T16:58:07Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-1-5
tags:
- stable-diffusion
- lora
inference: true
---
Tested with: 1.5 and dreamlikeDiffusion.
Share with me your best ones so I can train it further: twitter.com/tekakutli
|
heziyevv/aze-bert-tokenizer-middle
|
heziyevv
| 2023-01-29T17:06:39Z | 0 | 0 | null |
[
"wikipedia",
"books",
"social-media",
"az",
"license:mit",
"region:us"
] | null | 2023-01-29T16:53:39Z |
---
license: mit
language:
- az
tags:
- wikipedia
- books
- social-media
vocab-size: 16378
---
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Farid Haziyev
- **Model type:** Tokenizer
- **Language(s) (NLP):** Azerbaijani
- **License:** MIT
- **Finetuned from model [optional]:** bert-base-uncased
# Uses
Can be used in any project aimed at improving Azerbaijani language models.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("heziyevv/aze-bert-tokenizer-middle")
```
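Continuing from the snippet above, the tokenizer can be exercised on a short Azerbaijani sentence (the example text is illustrative):
```python
text = "Salam, necəsən?"  # illustrative example sentence
print(tokenizer.tokenize(text))  # subword tokens
print(tokenizer.encode(text))    # corresponding token ids
```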
|
zendiode69/electra-base-squad2-finetuned-squad-12-trainedfor-3
|
zendiode69
| 2023-01-29T16:58:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-28T13:55:50Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: electra-base-squad2-finetuned-squad-12-trainedfor-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-squad2-finetuned-squad-12-trainedfor-3
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6128 | 1.0 | 578 | 0.3142 |
| 0.4583 | 2.0 | 1156 | 0.3072 |
| 0.415 | 3.0 | 1734 | 0.3064 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
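## Usage
A minimal extractive question-answering sketch with this checkpoint (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="zendiode69/electra-base-squad2-finetuned-squad-12-trainedfor-3")
result = qa(
    question="What was the model fine-tuned from?",
    context="This checkpoint was fine-tuned from deepset/electra-base-squad2 for three epochs.",
)
print(result)
```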
|
eingrid/ppo-LunarLander-v22
|
eingrid
| 2023-01-29T16:57:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T16:56:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PRO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.46 +/- 19.81
name: mean_reward
verified: false
---
# **PRO** Agent playing **LunarLander-v2**
This is a trained model of a **PRO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Malisha/layoutlm-funsd-tf
|
Malisha
| 2023-01-29T16:47:48Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-29T14:51:31Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2451
- Validation Loss: 0.7339
- Train Overall Precision: 0.7247
- Train Overall Recall: 0.8058
- Train Overall F1: 0.7631
- Train Overall Accuracy: 0.7976
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.6758 | 1.4035 | 0.2734 | 0.3191 | 0.2945 | 0.5113 | 0 |
| 1.1350 | 0.8802 | 0.5626 | 0.6538 | 0.6048 | 0.7313 | 1 |
| 0.7417 | 0.6927 | 0.6604 | 0.7602 | 0.7068 | 0.7805 | 2 |
| 0.5568 | 0.6715 | 0.7039 | 0.7501 | 0.7263 | 0.7823 | 3 |
| 0.4493 | 0.6464 | 0.7073 | 0.7782 | 0.7410 | 0.7980 | 4 |
| 0.3732 | 0.6112 | 0.7108 | 0.7858 | 0.7464 | 0.8182 | 5 |
| 0.2949 | 0.6429 | 0.7123 | 0.7988 | 0.7531 | 0.8070 | 6 |
| 0.2451 | 0.7339 | 0.7247 | 0.8058 | 0.7631 | 0.7976 | 7 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
eldraco/ppo-Huggy
|
eldraco
| 2023-01-29T16:37:57Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-01-29T16:37:50Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: eldraco/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bnowak1831/ppo-LunarLander-v2
|
bnowak1831
| 2023-01-29T16:19:32Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T15:53:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.21 +/- 23.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
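Until the author fills in the snippet above, a minimal sketch for loading the checkpoint and watching one episode locally (the filename is an assumption — check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption - check the repository's file list
model = PPO.load(load_from_hub("bnowak1831/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip"))

env = gym.make("LunarLander-v2")
obs, done = env.reset(), False
while not done:
    env.render()
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```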
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_mnli
|
gokuls
| 2023-01-29T16:13:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T09:53:46Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3295362082994304
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_mnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7834
- Accuracy: 0.3295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.8866 | 1.0 | 3068 | 1.7941 | 0.3274 |
| 1.8864 | 2.0 | 6136 | 1.7939 | 0.3274 |
| 1.8864 | 3.0 | 9204 | 1.7944 | 0.3274 |
| 1.8864 | 4.0 | 12272 | 1.7940 | 0.3274 |
| 1.8864 | 5.0 | 15340 | 1.7938 | 0.3274 |
| 1.8864 | 6.0 | 18408 | 1.7940 | 0.3274 |
| 1.8864 | 7.0 | 21476 | 1.7944 | 0.3274 |
| 1.8864 | 8.0 | 24544 | 1.7939 | 0.3274 |
| 1.8864 | 9.0 | 27612 | 1.7939 | 0.3274 |
| 1.8864 | 10.0 | 30680 | 1.7940 | 0.3274 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jggrandio/ppo-LunarLander-v2
|
jggrandio
| 2023-01-29T15:56:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T15:56:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.72 +/- 21.70
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jondister/JD_Pyramids
|
jondister
| 2023-01-29T15:40:50Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-01-29T15:40:44Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: jondister/JD_Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_mnli_256
|
gokuls
| 2023-01-29T15:06:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T08:46:21Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_mnli_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.3295362082994304
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_mnli_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7834
- Accuracy: 0.3295
## Model description
More information needed
## Intended uses & limitations
More information needed
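A rough usage sketch for NLI-style inference (the premise/hypothesis pair is illustrative, and the label mapping should be checked against the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "gokuls/mobilebert_add_GLUE_Experiment_logit_kd_mnli_256"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
# Encode a premise/hypothesis pair and take the highest-scoring label
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(model.config.id2label[pred])
```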
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.8865 | 1.0 | 3068 | 1.7940 | 0.3274 |
| 1.8864 | 2.0 | 6136 | 1.7940 | 0.3274 |
| 1.8864 | 3.0 | 9204 | 1.7944 | 0.3274 |
| 1.8864 | 4.0 | 12272 | 1.7940 | 0.3274 |
| 1.8864 | 5.0 | 15340 | 1.7938 | 0.3274 |
| 1.8864 | 6.0 | 18408 | 1.7940 | 0.3274 |
| 1.8864 | 7.0 | 21476 | 1.7944 | 0.3274 |
| 1.8864 | 8.0 | 24544 | 1.7939 | 0.3274 |
| 1.8864 | 9.0 | 27612 | 1.7939 | 0.3274 |
| 1.8863 | 10.0 | 30680 | 1.7940 | 0.3274 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BachNgoH/ppo-Huggy
|
BachNgoH
| 2023-01-29T15:00:24Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-01-29T15:00:16Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: BachNgoH/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
vicfeuga/CartPole-v1
|
vicfeuga
| 2023-01-29T14:46:15Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T14:46:06Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jianleo/lora_ruhua_sd_1k
|
jianleo
| 2023-01-29T14:42:56Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-01-29T14:31:50Z |
---
license: creativeml-openrail-m
base_model: /root/autodl-tmp/sd_weights/models--runwayml--stable-diffusion-v1-5/snapshots/889b629140e71758e1e0006e355c331a5744b4bf
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - jianleo/lora_ruhua_sd_1k
These are LoRA adaptation weights for /root/autodl-tmp/sd_weights/models--runwayml--stable-diffusion-v1-5/snapshots/889b629140e71758e1e0006e355c331a5744b4bf. The weights were trained on a photo of rha woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




|
Porridge9243/a2c-AntBulletEnv-v0
|
Porridge9243
| 2023-01-29T14:26:35Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T14:25:33Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1721.92 +/- 403.54
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (assumed filename; adjust to the actual .zip in the repo)
checkpoint = load_from_hub(repo_id="Porridge9243/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
brouthen/q-Taxi-v3
|
brouthen
| 2023-01-29T14:18:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T14:18:08Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="brouthen/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Musha-the-Yusha/a2c-AntBulletEnv-v0
|
Musha-the-Yusha
| 2023-01-29T14:12:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T13:42:12Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1107.79 +/- 78.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (assumed filename; adjust to the actual .zip in the repo)
checkpoint = load_from_hub(repo_id="Musha-the-Yusha/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
santiviquez/noisy_human_cnn
|
santiviquez
| 2023-01-29T13:38:42Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-01-29T13:16:13Z |
---
license: mit
metrics:
- accuracy
---
# Model Card for noisy_human_cnn
<!-- Provide a quick summary of what the model is/does. -->
CNN with 2 input channels (Melspectrograms and deltas) of 5-second audio signals.
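A rough sketch of how such 2-channel inputs can be built with `librosa` (the sample rate and mel settings here are assumptions, not the model's training configuration):
```python
import librosa
import numpy as np

# Load a 5-second clip, compute a log-mel spectrogram and its deltas
y, sr = librosa.load("clip.wav", sr=22050, duration=5.0)
mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
delta = librosa.feature.delta(mel)
features = np.stack([mel, delta])  # shape (2, n_mels, frames): the two CNN input channels
```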
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Santiago Viquez, Ivan Padezhki
- **Model type:** CNN for audio classification
- **License:** MIT
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/santiviquez/noisy-human-recognition/
- **Demo [optional]:** [More Information Needed]
|
AlekseyCalvin/asoon-dreambooth-sd-model
|
AlekseyCalvin
| 2023-01-29T12:39:28Z | 17 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"doi:10.57967/hf/0193",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-08T16:31:26Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: asoon
---
### Asoon Dreambooth SD Model Dreambooth model trained by AlekseyCalvin with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-768 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of the concept:
To generate custom images of my primary public self – one known as A.C.T. SOON® – use "asoon" or "asoon person" in your Stable Diffusion prompt (implemented via this model only).
Checkpoints herein trained based on SD 2.1.
![asoon](https://huggingface.co/AlekseyCalvin/asoon-dreambooth-sd-model/resolve/main/concept_images/asoon_%2812%29.jpg)
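A minimal `diffusers` inference sketch (the prompt and step count are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-trained checkpoint from this repository
pipe = StableDiffusionPipeline.from_pretrained("AlekseyCalvin/asoon-dreambooth-sd-model", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait photo of asoon person", num_inference_steps=30).images[0]
image.save("asoon.png")
```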
|
research-backup/mbart-large-cc25-koquad-qg-ae
|
research-backup
| 2023-01-29T12:38:47Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question generation",
"answer extraction",
"ko",
"dataset:lmqg/qg_koquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-29T12:24:20Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ko
datasets:
- lmqg/qg_koquad
pipeline_tag: text2text-generation
tags:
- question generation
- answer extraction
widget:
- text: "generate question: 1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다."
example_title: "Question Generation Example 1"
- text: "generate question: 백신이 없기때문에 예방책은 <hl> 살충제 <hl> 를 사용하면서 서식 장소(찻찬 받침, 배수로, 고인 물의 열린 저장소, 버려진 타이어 등)의 수를 줄임으로써 매개체를 통제할 수 있다."
example_title: "Question Generation Example 2"
- text: "generate question: <hl> 원테이크 촬영 <hl> 이기 때문에 한 사람이 실수를 하면 처음부터 다시 찍어야 하는 상황이 발생한다."
example_title: "Question Generation Example 3"
- text: "extract answers: 또한 스피어스는 많은 새로운 여성 아티스트들에게 영향을 끼쳤는데, 대표적으로 데미 로바토, 케이티 페리, 크리스티니아 드바지, 레이디 가가, 리틀 부츠, 셀레나 고메즈 & 더씬, 픽시 로트 이 있다. 2007년 비욘세 놀스는 Total Request Live와의 인터뷰에서 '나는 브리트니를 사랑하고 팬이에요. 특히 새 앨범 Blackout을 좋아해요'라고 말했다. 린제이 로한은 '언제나 브리트니 스피어스에게 영감을 받는다. 학창시절 그녀처럼 타블로이드에 오르기를 꿈꿔왔다'고 말하며 롤 모델로 꼽았다. 스피어스는 현대 음악가들에게 음악적 영감으로 언급되기도 했다. <hl> 마일리 사이러스는 자신의 히트곡 Party in the U.S.A. 가 브리트니에게 영감과 영향을 받은 곡이라고 밝혔다. <hl> 베리 매닐로우의 앨범 15 Minutes 역시 브리트니에게 영감을 얻었다고 언급되었다."
example_title: "Answer Extraction Example 1"
- text: "extract answers: 지난 22일 아프리카TV는 BJ 철구가 서비스 정지 처분을 받았음을 밝혔다. 서비스 정지 처분을 사유는 철구가 10대 청소년에게 유해한 장면을 방송으로 내보냈기 때문이었다. 문제가 된 장면은 BJ 철구가 미성년자는 시청할 수 없게 하는 19세 시청 가능 설정을 하지 않은 채 흡연하는 모습을 여과 없이 드러낸 장면이다. 아프리카TV는 청소년 보호 정책의 '청소년들이 해로운 환경으로부터 보호받을 수 있도록 조치한다'라고 조항을 근거로 철구에게 서비스 정지 처분을 내렸다. 흡연 이외에 음주 방송 등도 19세 시청 가능 설정을 해야만 방송할 수 있다. <hl> 게다가 철구의 방송 정지 처분은 이번에 처음이 아니라 16번 째기 때문에 더욱더 논란이 되고 있다. <hl>"
example_title: "Answer Extraction Example 2"
model-index:
- name: lmqg/mbart-large-cc25-koquad-qg-ae
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_koquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 10.7
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 27.02
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 29.73
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 83.52
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 82.79
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer
value: 80.81
- name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer
value: 84.32
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer
value: 77.64
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer
value: 83.42
- name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer
value: 88.44
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))
type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer
value: 79.08
- name: BLEU4 (Answer Extraction)
type: bleu4_answer_extraction
value: 24.34
- name: ROUGE-L (Answer Extraction)
type: rouge_l_answer_extraction
value: 82.78
- name: METEOR (Answer Extraction)
type: meteor_answer_extraction
value: 59.82
- name: BERTScore (Answer Extraction)
type: bertscore_answer_extraction
value: 95.53
- name: MoverScore (Answer Extraction)
type: moverscore_answer_extraction
value: 94.69
- name: AnswerF1Score (Answer Extraction)
type: answer_f1_score__answer_extraction
value: 88.2
- name: AnswerExactMatch (Answer Extraction)
type: answer_exact_match_answer_extraction
value: 82.17
---
# Model Card of `lmqg/mbart-large-cc25-koquad-qg-ae`
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question generation and answer extraction jointly on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- **Language:** ko
- **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ko", model="lmqg/mbart-large-cc25-koquad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-koquad-qg-ae")
# question generation
question = pipe("generate question: 1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
# answer extraction
answer = pipe("extract answers: 또한 스피어스는 많은 새로운 여성 아티스트들에게 영향을 끼쳤는데, 대표적으로 데미 로바토, 케이티 페리, 크리스티니아 드바지, 레이디 가가, 리틀 부츠, 셀레나 고메즈 & 더씬, 픽시 로트 이 있다. 2007년 비욘세 놀스는 Total Request Live와의 인터뷰에서 '나는 브리트니를 사랑하고 팬이에요. 특히 새 앨범 Blackout을 좋아해요'라고 말했다. 린제이 로한은 '언제나 브리트니 스피어스에게 영감을 받는다. 학창시절 그녀처럼 타블로이드에 오르기를 꿈꿔왔다'고 말하며 롤 모델로 꼽았다. 스피어스는 현대 음악가들에게 음악적 영감으로 언급되기도 했다. <hl> 마일리 사이러스는 자신의 히트곡 Party in the U.S.A. 가 브리트니에게 영감과 영향을 받은 곡이라고 밝혔다. <hl> 베리 매닐로우의 앨범 15 Minutes 역시 브리트니에게 영감을 얻었다고 언급되었다.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 83.52 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1 | 26.03 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2 | 18.93 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3 | 14.14 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4 | 10.7 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR | 29.73 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore | 82.79 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L | 27.02 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_koquad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 80.81 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedF1Score (MoverScore) | 83.42 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (BERTScore) | 77.64 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedPrecision (MoverScore) | 79.08 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (BERTScore) | 84.32 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| QAAlignedRecall (MoverScore) | 88.44 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_koquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 82.17 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| AnswerF1Score | 88.2 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| BERTScore | 95.53 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_1 | 68.81 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_2 | 56.84 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_3 | 40.49 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| Bleu_4 | 24.34 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| METEOR | 59.82 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| MoverScore | 94.69 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
| ROUGE_L | 82.78 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_koquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 2
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 32
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-koquad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05-finetuned-SFEW-7e-05
|
lixiqi
| 2023-01-29T12:24:46Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-29T12:04:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05-finetuned-SFEW-7e-05
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.49596309111880044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05-finetuned-SFEW-7e-05
This model is a fine-tuned version of [Celal11/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05](https://huggingface.co/Celal11/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5629
- Accuracy: 0.4960
## Model description
More information needed
## Intended uses & limitations
More information needed
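A rough usage sketch (the input image path is illustrative):
```python
from transformers import pipeline

# Facial-expression classification on a single image
classifier = pipeline(
    "image-classification",
    model="lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013CKPlus-7e-05-finetuned-SFEW-7e-05",
)
print(classifier("face.jpg"))
```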
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1509 | 0.97 | 14 | 1.6920 | 0.3725 |
| 1.6764 | 1.97 | 28 | 1.5035 | 0.4694 |
| 1.2723 | 2.97 | 42 | 1.5061 | 0.4694 |
| 1.1746 | 3.97 | 56 | 1.5421 | 0.4729 |
| 0.9954 | 4.97 | 70 | 1.5657 | 0.4787 |
| 1.0029 | 5.97 | 84 | 1.5867 | 0.4844 |
| 0.9139 | 6.97 | 98 | 1.5943 | 0.4879 |
| 0.8335 | 7.97 | 112 | 1.6003 | 0.4890 |
| 0.8382 | 8.97 | 126 | 1.5629 | 0.4960 |
| 0.7169 | 9.97 | 140 | 1.5772 | 0.4856 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
nandysoham/19-clustered
|
nandysoham
| 2023-01-29T12:18:56Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T12:16:48Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/19-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/19-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7685
- Train End Logits Accuracy: 0.7826
- Train Start Logits Accuracy: 0.75
- Validation Loss: 0.9786
- Validation End Logits Accuracy: 0.6912
- Validation Start Logits Accuracy: 0.6838
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 134, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0803 | 0.6931 | 0.6922 | 0.9561 | 0.6838 | 0.6875 | 0 |
| 0.7685 | 0.7826 | 0.75 | 0.9786 | 0.6912 | 0.6838 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
chang26/distilbert-base-uncased-finetuned-emotion
|
chang26
| 2023-01-29T12:18:08Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T12:01:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9256588984500898
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2036
- Accuracy: 0.9255
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
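A rough usage sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# Emotion classification on a short text
classifier = pipeline("text-classification", model="chang26/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you this weekend!"))
```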
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.788 | 1.0 | 250 | 0.2847 | 0.9135 | 0.9117 |
| 0.2345 | 2.0 | 500 | 0.2036 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Porridge9243/PPO-Pyramids
|
Porridge9243
| 2023-01-29T12:16:40Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-01-29T12:16:35Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: Porridge9243/PPO-Pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
RajMoodley/a2c-PandaReachDense-v2z
|
RajMoodley
| 2023-01-29T12:13:54Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T12:11:35Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.19 +/- 0.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (assumed filename; adjust to the actual .zip in the repo)
checkpoint = load_from_hub(repo_id="RajMoodley/a2c-PandaReachDense-v2z", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
nandysoham/15-clustered
|
nandysoham
| 2023-01-29T11:52:46Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T11:44:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/15-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/15-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6484
- Train End Logits Accuracy: 0.8084
- Train Start Logits Accuracy: 0.7994
- Validation Loss: 0.9490
- Validation End Logits Accuracy: 0.7555
- Validation Start Logits Accuracy: 0.7246
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 612, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.9647 | 0.7261 | 0.7081 | 0.9607 | 0.7482 | 0.7165 | 0 |
| 0.6484 | 0.8084 | 0.7994 | 0.9490 | 0.7555 | 0.7246 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nandysoham/14-clustered
|
nandysoham
| 2023-01-29T11:43:14Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T11:40:59Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/14-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/14-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6609
- Train End Logits Accuracy: 0.8090
- Train Start Logits Accuracy: 0.7691
- Validation Loss: 0.8873
- Validation End Logits Accuracy: 0.7612
- Validation Start Logits Accuracy: 0.6955
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 144, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0023 | 0.7214 | 0.6780 | 0.8817 | 0.7612 | 0.6817 | 0 |
| 0.6609 | 0.8090 | 0.7691 | 0.8873 | 0.7612 | 0.6955 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nandysoham/12-clustered
|
nandysoham
| 2023-01-29T11:32:26Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T11:24:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/12-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/12-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6856
- Train End Logits Accuracy: 0.8145
- Train Start Logits Accuracy: 0.7542
- Validation Loss: 0.8791
- Validation End Logits Accuracy: 0.7585
- Validation Start Logits Accuracy: 0.7096
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 632, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.9975 | 0.7354 | 0.6632 | 0.8689 | 0.7719 | 0.7048 | 0 |
| 0.6856 | 0.8145 | 0.7542 | 0.8791 | 0.7585 | 0.7096 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nandysoham/10-clustered
|
nandysoham
| 2023-01-29T11:13:03Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T11:08:46Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/10-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/10-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5872
- Train End Logits Accuracy: 0.8242
- Train Start Logits Accuracy: 0.7907
- Validation Loss: 0.7005
- Validation End Logits Accuracy: 0.8237
- Validation Start Logits Accuracy: 0.75
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 310, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.8666 | 0.7496 | 0.7282 | 0.7017 | 0.8173 | 0.7372 | 0 |
| 0.5872 | 0.8242 | 0.7907 | 0.7005 | 0.8237 | 0.75 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
codeslord/LunarLander-v2-PPO
|
codeslord
| 2023-01-29T11:10:16Z | 2 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T11:09:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.48 +/- 25.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (assumed filename; adjust to the actual .zip in the repo)
checkpoint = load_from_hub(repo_id="codeslord/LunarLander-v2-PPO", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nandysoham/9-clustered
|
nandysoham
| 2023-01-29T11:07:07Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T10:54:03Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/9-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/9-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6059
- Train End Logits Accuracy: 0.8198
- Train Start Logits Accuracy: 0.7982
- Validation Loss: 0.7823
- Validation End Logits Accuracy: 0.7846
- Validation Start Logits Accuracy: 0.7483
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1004, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.8783 | 0.7495 | 0.7205 | 0.7823 | 0.7806 | 0.7463 | 0 |
| 0.6059 | 0.8198 | 0.7982 | 0.7823 | 0.7846 | 0.7483 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nandysoham/7-clustered
|
nandysoham
| 2023-01-29T10:48:32Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T10:44:29Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/7-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/7-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5661
- Train End Logits Accuracy: 0.8387
- Train Start Logits Accuracy: 0.8208
- Validation Loss: 0.8415
- Validation End Logits Accuracy: 0.7506
- Validation Start Logits Accuracy: 0.7506
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.8846 | 0.7393 | 0.7298 | 0.8233 | 0.7506 | 0.7458 | 0 |
| 0.5661 | 0.8387 | 0.8208 | 0.8415 | 0.7506 | 0.7506 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
timtaotao/q-Taxi-v3
|
timtaotao
| 2023-01-29T10:43:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T10:43:27Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is a helper function defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="timtaotao/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
timtaotao/q-FrozenLake-v1-4x4-noSlippery
|
timtaotao
| 2023-01-29T10:40:42Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-01-29T10:40:39Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is a helper function defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="timtaotao/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ikuseiso/Personal_Lora_collections
|
ikuseiso
| 2023-01-29T10:32:37Z | 0 | 50 | null |
[
"text-to-image",
"region:us"
] |
text-to-image
| 2023-01-17T13:15:19Z |
---
pipeline_tag: text-to-image
---
The new display image is generated using ACertainModel.
# latest update - 1/29 (Optimized the dataset and captions: a single tag such as "vampy" is now enough to reproduce the character, and it no longer interferes with changing the character's features and actions. The display model has been changed, and other display images will gradually be replaced to better reflect the characteristics of each LoRA.)
- [vampy V3](#vampy_V3)
# latest update - 1/22
- [vergil_devil_may_cry](#vergil_devil_may_cry)
- [dante_devil_may_cry1e-4](#dante_devil_may_cry1e-4)
- [sky_striker_ace_-_raye](#sky_striker_ace_-_raye)
- [sky_striker_ace_-_roze](#sky_striker_ace_-_roze)
# NOTICE
My LoRAs tend to be slightly overfitted, so I suggest adjusting the weights to the range 0.6-0.8 and adding some prompts such as hair color or eye color to better control the character's actions.
Using a weight of 1 will give a more accurate likeness.
For example, the left image uses weight 1 and the right uses 0.8 (the result may be disappointing; the model is unable to recognize the plastic chair).
<a href="https://imgloc.com/i/OMXfz"><img src="https://i.328888.xyz/2023/01/23/OMXfz.md.png" alt="OMXfz.png" border="0" /></a>
All LoRAs use Danbooru tags, for example:
Miorine_Rembran.safetensors → https://danbooru.donmai.us/wiki_pages/miorine_rembran (tag: miorine_rembran)
<s>To use them in your WebUI, please install the extension linked below, following the guide:
https://github.com/kohya-ss/sd-webui-additional-networks</s> (This message is now out of date, as the WebUI now supports LoRA natively.)
# Index
- [Miorine_Rembran](#Miorine_Rembran)
- [suletta_mercury](#suletta_mercury)
- [chouzetsusaikawa_tenshi-chan](#chouzetsusaikawa_tenshi-chan)
- [ame-chan_needy_girl_overdose](#ame-chan_needy_girl_overdose)
- [grea_shingeki_no_bahamut](#grea_shingeki_no_bahamut)
- [iono_pokemon](#iono_pokemon)
- [kisara_engage_kiss](#kisara_engage_kiss)
- [laundry_dragonmaid](#laundry_dragonmaid)
- [sky_striker_ace_-_raye](#sky_striker_ace_-_raye)
- [sky_striker_ace_-_roze](#sky_striker_ace_-_roze)
- [lovely_labrynth_of_the_silver_castle](#lovely_labrynth_of_the_silver_castle)
- [lishenna_omen_of_destruction](#lishenna_omen_of_destruction)
- [ralmia_sonic_racer](#ralmia_sonic_racer)
- [seulbi_lee](#seulbi_lee)
- [vampy](#vampy)
- [lucy_cyberpunk](#lucy_cyberpunk)
- [dante_devil_may_cry1e-4](#dante_devil_may_cry1e-4)
- [vergil_devil_may_cry](#vergil_devil_may_cry)
# Concept
- [inverted_nipple](#inverted_nipple)
Sample prompt:
<pre>
masterpiece, best quality,1girl,solo,cowboy shot,arms behind back,indoors,(SFW),<lora:Miorine_Rembran:1>,Miorine_Rembran
Negative prompt: lowres,text,error,extra digit,low quality,jpeg artifacts,signature,blurry,normal quality,cropped,worst quality,deformity,(bad_prompt_version2:0.8),disfigured,long neck,ugly,black and white,monochrome,greyscale,
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 3434176, Size: 512x768, Model hash: 8ec3e63ea8, Model: AbyssOrangeMix-fp32-no-ema, ENSD: 31337
</pre>
# Miorine_Rembran
<a href="https://imgloc.com/i/OPWdw"><img src="https://i.328888.xyz/2023/01/23/OPWdw.md.png" alt="OPWdw.md.png" border="0"></a>
# suletta_mercury
<a href="https://imgloc.com/i/OP74P"><img src="https://i.328888.xyz/2023/01/23/OP74P.md.png" alt="OP74P.md.png" border="0"></a>
# chouzetsusaikawa_tenshi-chan
<a href="https://imgloc.com/i/OPtIz"><img src="https://i.328888.xyz/2023/01/23/OPtIz.md.png" alt="OPtIz.md.png" border="0"></a>
# ame-chan_needy_girl_overdose
<a href="https://imgloc.com/i/OPENX"><img src="https://i.328888.xyz/2023/01/23/OPENX.md.png" alt="OPENX.md.png" border="0"></a>
# grea_shingeki_no_bahamut
<a href="https://imgloc.com/i/OPnyq"><img src="https://i.328888.xyz/2023/01/23/OPnyq.md.png" alt="OPnyq.md.png" border="0"></a>
# iono_pokemon
<a href="https://imgloc.com/i/OPDvb"><img src="https://i.328888.xyz/2023/01/23/OPDvb.md.png" alt="OPDvb.md.png" border="0"></a>
# kisara_engage_kiss
<a href="https://imgloc.com/i/OPeht"><img src="https://i.328888.xyz/2023/01/23/OPeht.md.png" alt="OPeht.md.png" border="0"></a>
# laundry_dragonmaid
<a href="https://imgloc.com/i/OPAmd"><img src="https://i.328888.xyz/2023/01/23/OPAmd.md.png" alt="OPAmd.md.png" border="0"></a>
# sky_striker_ace_-_raye
<a href="https://imgloc.com/i/OPmTH"><img src="https://i.328888.xyz/2023/01/23/OPmTH.md.png" alt="OPmTH.md.png" border="0"></a>
# sky_striker_ace_-_roze
<a href="https://imgloc.com/i/OPcEF"><img src="https://i.328888.xyz/2023/01/23/OPcEF.md.png" alt="OPcEF.md.png" border="0"></a>
# lovely_labrynth_of_the_silver_castle
<a href="https://imgloc.com/i/OPRHZ"><img src="https://i.328888.xyz/2023/01/23/OPRHZ.md.png" alt="OPRHZ.md.png" border="0"></a>
# lishenna_omen_of_destruction
<a href="https://imgloc.com/i/OPru8"><img src="https://i.328888.xyz/2023/01/23/OPru8.md.png" alt="OPru8.md.png" border="0"></a>
# ralmia_sonic_racer(shadowverse)
May need to be updated.
# seulbi_lee (Closers)
May need to be updated.
# vampy_V3
<a href="https://imgloc.com/i/jPfWH"><img src="https://i.328888.xyz/2023/01/29/jPfWH.png" alt="jPfWH.png" border="0" /></a>
<a href="https://imgloc.com/i/OPNd5"><img src="https://i.328888.xyz/2023/01/23/OPNd5.md.png" alt="OPNd5.md.png" border="0"></a>
# lucy_cyberpunk
<a href="https://imgloc.com/i/OPINy"><img src="https://i.328888.xyz/2023/01/23/OPINy.md.png" alt="OPINy.md.png" border="0"></a>
# dante_devil_may_cry1e-4
(You can also use prompts like "stubble" and "50 years old" to get DMC5's Dante.)
<a href="https://imgloc.com/i/OPJhz"><img src="https://i.328888.xyz/2023/01/23/OPJhz.md.png" alt="OPJhz.md.png" border="0"></a>
# vergil_devil_may_cry
<a href="https://imgloc.com/i/OPLZw"><img src="https://i.328888.xyz/2023/01/23/OPLZw.md.png" alt="OPLZw.md.png" border="0"></a>
# inverted_nipple
My suggestion is to use img2img. The preview image is NSFW, so I cannot provide it, but it is really effective.
|
nandysoham/5-clustered
|
nandysoham
| 2023-01-29T10:30:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T10:28:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/5-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/5-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5941
- Train End Logits Accuracy: 0.8333
- Train Start Logits Accuracy: 0.7955
- Validation Loss: 0.8305
- Validation End Logits Accuracy: 0.7820
- Validation Start Logits Accuracy: 0.7556
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 132, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.9118 | 0.7405 | 0.7093 | 0.8196 | 0.7744 | 0.7556 | 0 |
| 0.5941 | 0.8333 | 0.7955 | 0.8305 | 0.7820 | 0.7556 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
ivensamdh/swinv2
|
ivensamdh
| 2023-01-29T10:10:52Z | 37 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-01-29T13:31:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: swinv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2
This model is a fine-tuned version of [microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-base-patch4-window12to16-192to256-22kto1k-ft) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
nandysoham/1-clustered
|
nandysoham
| 2023-01-29T10:06:07Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T10:03:34Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/1-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/1-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7785
- Train End Logits Accuracy: 0.7917
- Train Start Logits Accuracy: 0.7264
- Validation Loss: 0.9514
- Validation End Logits Accuracy: 0.7734
- Validation Start Logits Accuracy: 0.7014
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 138, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.1245 | 0.6957 | 0.6322 | 0.9694 | 0.7590 | 0.6906 | 0 |
| 0.7785 | 0.7917 | 0.7264 | 0.9514 | 0.7734 | 0.7014 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nandysoham/0-clustered
|
nandysoham
| 2023-01-29T10:02:36Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-29T09:57:55Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nandysoham/0-clustered
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/0-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7128
- Train End Logits Accuracy: 0.8102
- Train Start Logits Accuracy: 0.7412
- Validation Loss: 0.9487
- Validation End Logits Accuracy: 0.7328
- Validation Start Logits Accuracy: 0.6397
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 326, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0078 | 0.7312 | 0.6503 | 0.9262 | 0.7481 | 0.6382 | 0 |
| 0.7128 | 0.8102 | 0.7412 | 0.9487 | 0.7328 | 0.6397 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
KoRiF/ppo-PyramidsTraining
|
KoRiF
| 2023-01-29T09:58:13Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-01-29T09:58:07Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: KoRiF/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_wnli
|
gokuls
| 2023-01-29T09:49:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T09:48:03Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_wnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3448
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3478 | 1.0 | 5 | 0.3460 | 0.5634 |
| 0.3477 | 2.0 | 10 | 0.3480 | 0.4366 |
| 0.3466 | 3.0 | 15 | 0.3459 | 0.5634 |
| 0.3466 | 4.0 | 20 | 0.3448 | 0.5634 |
| 0.3468 | 5.0 | 25 | 0.3451 | 0.5634 |
| 0.3467 | 6.0 | 30 | 0.3461 | 0.5634 |
| 0.3465 | 7.0 | 35 | 0.3465 | 0.5634 |
| 0.3466 | 8.0 | 40 | 0.3466 | 0.5634 |
| 0.3468 | 9.0 | 45 | 0.3457 | 0.5634 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_stsb
|
gokuls
| 2023-01-29T09:47:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T09:39:00Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.04810618310275214
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_stsb
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1407
- Pearson: 0.0533
- Spearmanr: 0.0481
- Combined Score: 0.0507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 1.7607 | 1.0 | 45 | 1.2881 | 0.0340 | 0.0258 | 0.0299 |
| 1.0763 | 2.0 | 90 | 1.1761 | 0.0478 | 0.0438 | 0.0458 |
| 1.0466 | 3.0 | 135 | 1.1550 | 0.0509 | 0.0390 | 0.0450 |
| 1.0685 | 4.0 | 180 | 1.1407 | 0.0533 | 0.0481 | 0.0507 |
| 1.0449 | 5.0 | 225 | 1.1527 | 0.0562 | 0.0478 | 0.0520 |
| 1.0303 | 6.0 | 270 | 1.2257 | 0.0580 | 0.0606 | 0.0593 |
| 1.0006 | 7.0 | 315 | 1.2018 | 0.0711 | 0.0736 | 0.0724 |
| 0.9661 | 8.0 | 360 | 1.2391 | 0.0716 | 0.0848 | 0.0782 |
| 0.9524 | 9.0 | 405 | 1.2005 | 0.0795 | 0.0749 | 0.0772 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
90DPyo/distilbert-base-uncased-finetuned-clinc
|
90DPyo
| 2023-01-29T09:38:52Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T09:32:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
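No usage example is included; since this is a sequence-classification checkpoint trained on clinc_oos intents, the text-classification pipeline is a natural starting point (a sketch only; the utterance below is illustrative):

```python
from transformers import pipeline

# Intent classification with the clinc_oos fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="90DPyo/distilbert-base-uncased-finetuned-clinc",
)

# Illustrative utterance; the model predicts one of the clinc_oos intent labels.
print(classifier("Can you help me book a flight to Berlin next Monday?"))
```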
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 |
| 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 |
| 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_sst2
|
gokuls
| 2023-01-29T09:38:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T08:10:59Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.801605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_sst2
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7778
- Accuracy: 0.8016
## Model description
More information needed
## Intended uses & limitations
More information needed
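The card omits a usage snippet; as an SST-2 fine-tune the model can presumably be queried through the text-classification pipeline (a minimal sketch; the example sentences are illustrative and the label names depend on the checkpoint's id2label mapping):

```python
from transformers import pipeline

# Sentiment classification with the SST-2 fine-tuned MobileBERT.
classifier = pipeline(
    "text-classification",
    model="gokuls/mobilebert_add_GLUE_Experiment_logit_kd_sst2",
)

# Illustrative inputs; the output is a (label, score) pair per sentence.
print(classifier(["a gripping, beautifully shot film", "flat characters and a dull plot"]))
```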
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5405 | 1.0 | 527 | 1.4225 | 0.5539 |
| 1.3567 | 2.0 | 1054 | 1.4707 | 0.5482 |
| 1.2859 | 3.0 | 1581 | 1.4661 | 0.5677 |
| 1.2563 | 4.0 | 2108 | 1.4136 | 0.5665 |
| 1.2414 | 5.0 | 2635 | 1.4239 | 0.5940 |
| 1.2288 | 6.0 | 3162 | 1.4443 | 0.5745 |
| 0.7679 | 7.0 | 3689 | 0.7870 | 0.7878 |
| 0.4135 | 8.0 | 4216 | 0.7778 | 0.8016 |
| 0.3376 | 9.0 | 4743 | 0.8673 | 0.7993 |
| 0.2972 | 10.0 | 5270 | 0.8790 | 0.7901 |
| 0.2734 | 11.0 | 5797 | 0.9525 | 0.7913 |
| 0.2569 | 12.0 | 6324 | 0.9557 | 0.7936 |
| 0.2431 | 13.0 | 6851 | 0.9595 | 0.7878 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
almuallim/gpt2-turkish-poem-generation
|
almuallim
| 2023-01-29T09:34:40Z | 58 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-29T08:20:24Z |
---
license: openrail
---
GPT-2 model fine-tuned on the Turkish Poems dataset from [Kaggle](https://www.kaggle.com/datasets/bilalelebi/turkish-poems). Big thanks to [gorkemgoknar](https://huggingface.co/gorkemgoknar) for the Turkish GPT-2 [version](https://huggingface.co/gorkemgoknar/gpt2-small-turkish).
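As a rough usage sketch (not part of the original card), the model can be loaded with the standard text-generation pipeline; the Turkish prompt below is only an illustration:

```python
from transformers import pipeline

# Text-generation pipeline over the fine-tuned Turkish poem model.
generator = pipeline("text-generation", model="almuallim/gpt2-turkish-poem-generation")

# Illustrative Turkish prompt ("the evening descends slowly"); sampling enabled
# so several distinct continuations can be returned.
outputs = generator(
    "Akşam yavaş yavaş iniyor",
    max_new_tokens=60,
    do_sample=True,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
    print("---")
```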
|
KoRiF/ppo-SnowballTarget
|
KoRiF
| 2023-01-29T09:08:38Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-01-29T09:08:32Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: KoRiF/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
philosucker/xlm-roberta-base-finetuned-panx-en
|
philosucker
| 2023-01-29T08:55:01Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-29T08:50:40Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: train
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7092760180995475
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4990
- F1: 0.7093
## Model description
More information needed
## Intended uses & limitations
More information needed
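The card does not show inference. A token-classification (NER) sketch, using `aggregation_strategy="simple"` to merge sub-word pieces into entity spans (the sentence is illustrative):

```python
from transformers import pipeline

# NER pipeline; aggregation_strategy="simple" groups word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="philosucker/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)

# Illustrative sentence with person, organisation and location mentions.
for entity in ner("Jeff Dean works at Google in Mountain View."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```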
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8727 | 1.0 | 295 | 0.5063 | 0.6186 |
| 0.4633 | 2.0 | 590 | 0.5089 | 0.6561 |
| 0.3075 | 3.0 | 885 | 0.4990 | 0.7093 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.1
- Datasets 1.18.4
- Tokenizers 0.13.2
|
philosucker/xlm-roberta-base-finetuned-panx-it
|
philosucker
| 2023-01-29T08:50:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-29T08:45:15Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: train
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.846884028064383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3252
- F1: 0.8469
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6217 | 1.0 | 420 | 0.3396 | 0.7677 |
| 0.3206 | 2.0 | 840 | 0.3433 | 0.8114 |
| 0.1871 | 3.0 | 1260 | 0.3252 | 0.8469 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.1
- Datasets 1.18.4
- Tokenizers 0.13.2
|
philosucker/xlm-roberta-base-finetuned-panx-fr
|
philosucker
| 2023-01-29T08:44:56Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-29T08:34:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: train
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.9410517733387689
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1258
- F1: 0.9411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5718 | 1.0 | 1145 | 0.2821 | 0.8392 |
| 0.3285 | 2.0 | 2290 | 0.2115 | 0.8946 |
| 0.2087 | 3.0 | 3435 | 0.1258 | 0.9411 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.1
- Datasets 1.18.4
- Tokenizers 0.13.2
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_sst2_256
|
gokuls
| 2023-01-29T08:33:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T07:20:51Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_sst2_256
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.7075688073394495
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_sst2_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2641
- Accuracy: 0.7076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5438 | 1.0 | 527 | 1.4012 | 0.5814 |
| 1.364 | 2.0 | 1054 | 1.5474 | 0.5413 |
| 1.2907 | 3.0 | 1581 | 1.5138 | 0.5642 |
| 1.257 | 4.0 | 2108 | 1.4409 | 0.5665 |
| 1.2417 | 5.0 | 2635 | 1.4473 | 0.5929 |
| 1.2056 | 6.0 | 3162 | 1.2641 | 0.7076 |
| 0.6274 | 7.0 | 3689 | nan | 0.4908 |
| 0.0 | 8.0 | 4216 | nan | 0.4908 |
| 0.0 | 9.0 | 4743 | nan | 0.4908 |
| 0.0 | 10.0 | 5270 | nan | 0.4908 |
| 0.0 | 11.0 | 5797 | nan | 0.4908 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kjmann/WormPPO1
|
kjmann
| 2023-01-29T07:54:35Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
] |
reinforcement-learning
| 2023-01-29T07:54:28Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Step 1: Write your model_id: kjmann/WormPPO1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
huggingtweets/sama
|
huggingtweets
| 2023-01-29T07:33:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-06T00:07:39Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/804990434455887872/BG0Xh7Oa_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sam Altman</div>
<div style="text-align: center; font-size: 14px;">@sama</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sam Altman.
| Data | Sam Altman |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 388 |
| Short tweets | 153 |
| Tweets kept | 2705 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6cl7ldqq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sama's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hi9mhdy4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hi9mhdy4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/sama')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gokuls/distilbert_add_pre-training-complete
|
gokuls
| 2023-01-29T07:22:55Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:wikitext",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-28T15:57:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikitext
metrics:
- accuracy
model-index:
- name: distilbert_add_pre-training-complete
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: wikitext wikitext-103-raw-v1
type: wikitext
config: wikitext-103-raw-v1
split: validation
args: wikitext-103-raw-v1
metrics:
- name: Accuracy
type: accuracy
value: 0.23073914743840437
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_pre-training-complete
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikitext wikitext-103-raw-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0239
- Accuracy: 0.2307
## Model description
More information needed
## Intended uses & limitations
More information needed
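Intended usage is not spelled out here; since this is a masked-language-model checkpoint, the fill-mask pipeline can be used to inspect its predictions (a minimal sketch; the sentence is illustrative):

```python
from transformers import pipeline

# Fill-mask pipeline over the pre-trained checkpoint.
fill = pipeline("fill-mask", model="gokuls/distilbert_add_pre-training-complete")

# The DistilBERT tokenizer uses [MASK] as its mask token.
for prediction in fill("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```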
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.295 | 1.0 | 3573 | 6.0701 | 0.1522 |
| 6.0482 | 2.0 | 7146 | 5.9533 | 0.1565 |
| 5.9799 | 3.0 | 10719 | 5.9008 | 0.1584 |
| 5.9378 | 4.0 | 14292 | 5.8997 | 0.1545 |
| 5.9057 | 5.0 | 17865 | 5.8905 | 0.1536 |
| 5.8811 | 6.0 | 21438 | 5.8646 | 0.1550 |
| 5.8617 | 7.0 | 25011 | 5.8322 | 0.1534 |
| 5.844 | 8.0 | 28584 | 5.8563 | 0.1523 |
| 5.8297 | 9.0 | 32157 | 5.8352 | 0.1548 |
| 5.8175 | 10.0 | 35730 | 5.8136 | 0.1558 |
| 5.8056 | 11.0 | 39303 | 5.8147 | 0.1526 |
| 5.7921 | 12.0 | 42876 | 5.8020 | 0.1548 |
| 5.7777 | 13.0 | 46449 | 5.7891 | 0.1545 |
| 5.7596 | 14.0 | 50022 | 5.7370 | 0.1587 |
| 5.7414 | 15.0 | 53595 | 5.7396 | 0.1604 |
| 5.7243 | 16.0 | 57168 | 5.7490 | 0.1564 |
| 5.6997 | 17.0 | 60741 | 5.7135 | 0.1561 |
| 5.6698 | 18.0 | 64314 | 5.6858 | 0.1620 |
| 5.6398 | 19.0 | 67887 | 5.6735 | 0.1644 |
| 5.6135 | 20.0 | 71460 | 5.6174 | 0.1681 |
| 5.5899 | 21.0 | 75033 | 5.6191 | 0.1684 |
| 5.5699 | 22.0 | 78606 | 5.5977 | 0.1669 |
| 5.5487 | 23.0 | 82179 | 5.6139 | 0.1669 |
| 5.529 | 24.0 | 85752 | 5.5272 | 0.1741 |
| 5.512 | 25.0 | 89325 | 5.5271 | 0.1727 |
| 5.4939 | 26.0 | 92898 | 5.5190 | 0.1721 |
| 5.4765 | 27.0 | 96471 | 5.4824 | 0.1770 |
| 5.4604 | 28.0 | 100044 | 5.5159 | 0.1747 |
| 5.4422 | 29.0 | 103617 | 5.4577 | 0.1807 |
| 5.4243 | 30.0 | 107190 | 5.4546 | 0.1772 |
| 5.408 | 31.0 | 110763 | 5.4297 | 0.1837 |
| 5.3915 | 32.0 | 114336 | 5.4089 | 0.1866 |
| 5.3766 | 33.0 | 117909 | 5.3996 | 0.1848 |
| 5.3594 | 34.0 | 121482 | 5.3974 | 0.1841 |
| 5.3451 | 35.0 | 125055 | 5.3718 | 0.1908 |
| 5.3294 | 36.0 | 128628 | 5.3706 | 0.1878 |
| 5.3155 | 37.0 | 132201 | 5.3677 | 0.1903 |
| 5.2996 | 38.0 | 135774 | 5.2970 | 0.1994 |
| 5.287 | 39.0 | 139347 | 5.3127 | 0.1977 |
| 5.2735 | 40.0 | 142920 | 5.3145 | 0.1955 |
| 5.26 | 41.0 | 146493 | 5.2985 | 0.2017 |
| 5.2487 | 42.0 | 150066 | 5.2661 | 0.2025 |
| 5.2362 | 43.0 | 153639 | 5.2712 | 0.2031 |
| 5.2248 | 44.0 | 157212 | 5.2452 | 0.2049 |
| 5.2115 | 45.0 | 160785 | 5.2325 | 0.2054 |
| 5.1998 | 46.0 | 164358 | 5.2233 | 0.2075 |
| 5.188 | 47.0 | 167931 | 5.1994 | 0.2118 |
| 5.1779 | 48.0 | 171504 | 5.2436 | 0.2069 |
| 5.1664 | 49.0 | 175077 | 5.2203 | 0.2129 |
| 5.1546 | 50.0 | 178650 | 5.1820 | 0.2134 |
| 5.1431 | 51.0 | 182223 | 5.2029 | 0.2122 |
| 5.133 | 52.0 | 185796 | 5.1458 | 0.2140 |
| 5.1226 | 53.0 | 189369 | 5.1757 | 0.2163 |
| 5.1138 | 54.0 | 192942 | 5.1380 | 0.2193 |
| 5.1046 | 55.0 | 196515 | 5.1498 | 0.2178 |
| 5.0984 | 56.0 | 200088 | 5.1094 | 0.2194 |
| 5.0907 | 57.0 | 203661 | 5.1354 | 0.2202 |
| 5.0812 | 58.0 | 207234 | 5.0662 | 0.2256 |
| 5.0748 | 59.0 | 210807 | 5.1163 | 0.2181 |
| 5.067 | 60.0 | 214380 | 5.1193 | 0.2199 |
| 5.0609 | 61.0 | 217953 | 5.0919 | 0.2224 |
| 5.0536 | 62.0 | 221526 | 5.0899 | 0.2239 |
| 5.0491 | 63.0 | 225099 | 5.1125 | 0.2224 |
| 5.0433 | 64.0 | 228672 | 5.0892 | 0.2226 |
| 5.0373 | 65.0 | 232245 | 5.0644 | 0.2260 |
| 5.032 | 66.0 | 235818 | 5.0623 | 0.2253 |
| 5.0283 | 67.0 | 239391 | 5.1004 | 0.2213 |
| 5.0223 | 68.0 | 242964 | 5.0573 | 0.2279 |
| 5.0184 | 69.0 | 246537 | 5.0488 | 0.2271 |
| 5.014 | 70.0 | 250110 | 5.0482 | 0.2280 |
| 5.0102 | 71.0 | 253683 | 5.0600 | 0.2269 |
| 5.0079 | 72.0 | 257256 | 5.0271 | 0.2279 |
| 5.0029 | 73.0 | 260829 | 5.0629 | 0.2267 |
| 4.9994 | 74.0 | 264402 | 5.0304 | 0.2297 |
| 4.9978 | 75.0 | 267975 | 5.0485 | 0.2269 |
| 4.9945 | 76.0 | 271548 | 5.0380 | 0.2306 |
| 4.9917 | 77.0 | 275121 | 5.0590 | 0.2265 |
| 4.9913 | 78.0 | 278694 | 5.0585 | 0.2262 |
| 4.987 | 79.0 | 282267 | 5.0339 | 0.2278 |
| 4.9862 | 80.0 | 285840 | 5.0214 | 0.2305 |
| 4.9841 | 81.0 | 289413 | 5.0393 | 0.2271 |
| 4.983 | 82.0 | 292986 | 5.0200 | 0.2298 |
| 4.9816 | 83.0 | 296559 | 5.0289 | 0.2300 |
| 4.9801 | 83.96 | 300000 | 4.9972 | 0.2332 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
syjung/whisper-small-tuning
|
syjung
| 2023-01-29T07:11:45Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"en",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-01-29T06:01:35Z |
---
language:
- en
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 7.5773
- eval_wer: 99.3995
- eval_runtime: 561.7733
- eval_samples_per_second: 1.107
- eval_steps_per_second: 1.107
- epoch: 0.01
- step: 20
## Model description
More information needed
## Intended uses & limitations
More information needed
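Usage is not documented in the card; as a Whisper fine-tune it should load with the automatic-speech-recognition pipeline (a minimal sketch; the audio file name is a placeholder):

```python
from transformers import pipeline

# ASR pipeline over the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="syjung/whisper-small-tuning",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# Placeholder path; the pipeline accepts local audio files or raw numpy arrays.
transcription = asr("sample.wav")
print(transcription["text"])
```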
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_stsb_128
|
gokuls
| 2023-01-29T07:02:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T06:57:18Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: mobilebert_add_GLUE_Experiment_logit_kd_stsb_128
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.041438738522880283
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_stsb_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1505
- Pearson: 0.0470
- Spearmanr: 0.0414
- Combined Score: 0.0442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.524 | 1.0 | 45 | 1.3607 | -0.0066 | -0.0281 | -0.0174 |
| 1.0877 | 2.0 | 90 | 1.1729 | 0.0446 | 0.0497 | 0.0472 |
| 1.0648 | 3.0 | 135 | 1.1505 | 0.0470 | 0.0414 | 0.0442 |
| 1.0737 | 4.0 | 180 | 1.1564 | 0.0472 | 0.0464 | 0.0468 |
| 1.0445 | 5.0 | 225 | 1.1971 | 0.0529 | 0.0575 | 0.0552 |
| 1.0296 | 6.0 | 270 | 1.1723 | 0.0578 | 0.0727 | 0.0652 |
| 1.026 | 7.0 | 315 | 1.2735 | 0.0621 | 0.0606 | 0.0614 |
| 1.0216 | 8.0 | 360 | 1.2214 | 0.0666 | 0.0700 | 0.0683 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nanashisan/LoRa_pirotess
|
nanashisan
| 2023-01-29T07:00:12Z | 0 | 6 | null |
[
"ja",
"region:us"
] | null | 2023-01-28T09:45:11Z |
---
language:
- ja
---
Prompt keyword: pirotess
- pirotess, 1girl, solo, pointy ears, dark skin, dark-skinned female, elf, sword, weapon, breasts, long hair, dark elf, circlet, center opening, white hair

|
weikunt/finetuned-ner
|
weikunt
| 2023-01-29T06:47:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-01-28T07:54:47Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-ner
This model is a fine-tuned version of [deepset/deberta-v3-base-squad2](https://huggingface.co/deepset/deberta-v3-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4783
- Precision: 0.3264
- Recall: 0.3591
- F1: 0.3420
- Accuracy: 0.8925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.05
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 39.8167 | 1.0 | 760 | 0.3957 | 0.1844 | 0.2909 | 0.2257 | 0.8499 |
| 21.7333 | 2.0 | 1520 | 0.3853 | 0.2118 | 0.3273 | 0.2571 | 0.8546 |
| 13.8859 | 3.0 | 2280 | 0.3631 | 0.2443 | 0.2909 | 0.2656 | 0.8789 |
| 20.6586 | 4.0 | 3040 | 0.3961 | 0.2946 | 0.3455 | 0.3180 | 0.8753 |
| 13.8654 | 5.0 | 3800 | 0.3821 | 0.2791 | 0.3273 | 0.3013 | 0.8877 |
| 12.6942 | 6.0 | 4560 | 0.4393 | 0.3122 | 0.3364 | 0.3239 | 0.8909 |
| 25.0549 | 7.0 | 5320 | 0.4542 | 0.3106 | 0.3727 | 0.3388 | 0.8824 |
| 5.6816 | 8.0 | 6080 | 0.4432 | 0.2820 | 0.3409 | 0.3086 | 0.8774 |
| 13.1296 | 9.0 | 6840 | 0.4509 | 0.2884 | 0.35 | 0.3162 | 0.8824 |
| 7.7173 | 10.0 | 7600 | 0.4265 | 0.3170 | 0.3818 | 0.3464 | 0.8919 |
| 6.7922 | 11.0 | 8360 | 0.4749 | 0.3320 | 0.3818 | 0.3552 | 0.8892 |
| 5.4287 | 12.0 | 9120 | 0.4564 | 0.2917 | 0.3818 | 0.3307 | 0.8805 |
| 7.4153 | 13.0 | 9880 | 0.4735 | 0.2963 | 0.3273 | 0.3110 | 0.8871 |
| 9.1154 | 14.0 | 10640 | 0.4553 | 0.3416 | 0.3773 | 0.3585 | 0.8894 |
| 5.999 | 15.0 | 11400 | 0.4489 | 0.3203 | 0.4091 | 0.3593 | 0.8880 |
| 9.5128 | 16.0 | 12160 | 0.4947 | 0.3164 | 0.3682 | 0.3403 | 0.8883 |
| 5.6713 | 17.0 | 12920 | 0.4705 | 0.3527 | 0.3864 | 0.3688 | 0.8919 |
| 12.2119 | 18.0 | 13680 | 0.4617 | 0.3123 | 0.3591 | 0.3340 | 0.8857 |
| 8.5658 | 19.0 | 14440 | 0.4764 | 0.3092 | 0.35 | 0.3284 | 0.8944 |
| 11.0664 | 20.0 | 15200 | 0.4557 | 0.3187 | 0.3636 | 0.3397 | 0.8905 |
| 6.7161 | 21.0 | 15960 | 0.4468 | 0.3210 | 0.3955 | 0.3544 | 0.8956 |
| 9.0448 | 22.0 | 16720 | 0.5120 | 0.2872 | 0.3682 | 0.3227 | 0.8792 |
| 6.573 | 23.0 | 17480 | 0.4990 | 0.3307 | 0.3773 | 0.3524 | 0.8869 |
| 5.0543 | 24.0 | 18240 | 0.4763 | 0.3028 | 0.3455 | 0.3227 | 0.8899 |
| 6.8797 | 25.0 | 19000 | 0.4814 | 0.2780 | 0.3273 | 0.3006 | 0.8913 |
| 7.7544 | 26.0 | 19760 | 0.4695 | 0.3024 | 0.3409 | 0.3205 | 0.8946 |
| 4.8346 | 27.0 | 20520 | 0.4849 | 0.3154 | 0.3455 | 0.3297 | 0.8931 |
| 4.4766 | 28.0 | 21280 | 0.4809 | 0.2925 | 0.3364 | 0.3129 | 0.8913 |
| 7.9149 | 29.0 | 22040 | 0.4756 | 0.3238 | 0.3591 | 0.3405 | 0.8930 |
| 7.3033 | 30.0 | 22800 | 0.4783 | 0.3264 | 0.3591 | 0.3420 | 0.8925 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/distilbert_add_GLUE_Experiment_logit_kd_mnli_96
|
gokuls
| 2023-01-29T06:20:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-29T03:08:37Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_add_GLUE_Experiment_logit_kd_mnli_96
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
config: mnli
split: validation_matched
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5239015459723352
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_add_GLUE_Experiment_logit_kd_mnli_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5576
- Accuracy: 0.5239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.624 | 1.0 | 1534 | 0.6178 | 0.3605 |
| 0.6176 | 2.0 | 3068 | 0.6138 | 0.3767 |
| 0.6139 | 3.0 | 4602 | 0.6112 | 0.3822 |
| 0.6104 | 4.0 | 6136 | 0.6071 | 0.3977 |
| 0.6027 | 5.0 | 7670 | 0.5978 | 0.4091 |
| 0.5958 | 6.0 | 9204 | 0.6104 | 0.4151 |
| 0.5877 | 7.0 | 10738 | 0.5963 | 0.4517 |
| 0.5787 | 8.0 | 12272 | 0.6054 | 0.4627 |
| 0.5711 | 9.0 | 13806 | 0.5753 | 0.4905 |
| 0.5641 | 10.0 | 15340 | 0.5713 | 0.4987 |
| 0.5583 | 11.0 | 16874 | 0.5645 | 0.5115 |
| 0.5535 | 12.0 | 18408 | 0.5646 | 0.5117 |
| 0.549 | 13.0 | 19942 | 0.5692 | 0.5176 |
| 0.5456 | 14.0 | 21476 | 0.5613 | 0.5220 |
| 0.5425 | 15.0 | 23010 | 0.5584 | 0.5302 |
| 0.5399 | 16.0 | 24544 | 0.5641 | 0.5252 |
| 0.5375 | 17.0 | 26078 | 0.5628 | 0.5260 |
| 0.5353 | 18.0 | 27612 | 0.5659 | 0.5200 |
| 0.533 | 19.0 | 29146 | 0.5676 | 0.5310 |
| 0.5311 | 20.0 | 30680 | 0.5563 | 0.5323 |
| 0.5291 | 21.0 | 32214 | 0.5682 | 0.5250 |
| 0.5274 | 22.0 | 33748 | 0.5661 | 0.5282 |
| 0.5255 | 23.0 | 35282 | 0.5673 | 0.5325 |
| 0.5236 | 24.0 | 36816 | 0.5563 | 0.5416 |
| 0.5219 | 25.0 | 38350 | 0.5703 | 0.5290 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|