Dataset preview: one row per Hugging Face model repo. Column schema (dtype and observed range; ⌀ marks columns that contain nulls):

| column | dtype | observed values |
|---|---|---|
| repo_id | string | length 4–122 |
| author | string | length 2–38, ⌀ |
| model_type | string | length 2–33, ⌀ |
| files_per_repo | int64 | 2–39k |
| downloads_30d | int64 | 0–33.7M |
| library | string | length 2–37, ⌀ |
| likes | int64 | 0–4.87k |
| pipeline | string | length 5–30, ⌀ |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | length 2–33, ⌀ |
| languages | string | length 2–1.63k, ⌀ |
| datasets | string | length 2–2.58k, ⌀ |
| co2 | string | length 6–258, ⌀ |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–46 |
| prs_closed | int64 | 0–34 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | length 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201–598k |
| readme | string | length 0–598k |
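As a quick orientation aid, here is a minimal sketch of slicing such a preview with pandas; the two sample rows are copied from records below, and everything else about the export format is an assumption:

```python
import pandas as pd

# Hypothetical two-row sample mirroring the schema above.
df = pd.DataFrame([
    {"repo_id": "itsGanni/Cardinal__Catholicism_-clustered",
     "pipeline": "question-answering", "downloads_30d": 0},
    {"repo_id": "Zekunli/flan-t5-large-da-multiwoz_500",
     "pipeline": "text2text-generation", "downloads_30d": 61},
])

# Filter to one pipeline and rank by 30-day downloads.
qa = df[df["pipeline"] == "question-answering"]
print(qa.sort_values("downloads_30d", ascending=False).head())
```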

---

**`shivakanthsujit/basic-mbrl-Hopper-v2_colab_model`** · author: shivakanthsujit · model_type: null · files_per_repo: 6 · downloads_30d: 1 · library: mbrl-lib · likes: 0 · pipeline: reinforcement-learning · pytorch: false · tensorflow: false · jax: false · license: null · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['mbrl-Hopper-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'mbrl-lib'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 330

Readme:
# **OneDTransitionRewardModel w/ SACAgent** Agent playing **mbrl-Hopper-v2**
This is a trained model of a **OneDTransitionRewardModel w/ SACAgent** agent playing **mbrl-Hopper-v2**
using [MBRL-Lib](https://github.com/facebookresearch/mbrl-lib).
## Usage (with MBRL-Lib)
The card leaves usage as a stub; below, the placeholder imports are filled in with a minimal sketch, assuming only the standard top-level MBRL-Lib submodules (how to load this specific checkpoint is not documented in the card):
```python
# Stub from the original card, lightly filled in. These submodules exist in
# MBRL-Lib; loading this particular Hopper checkpoint is left unspecified.
import mbrl.models
import mbrl.planning
import mbrl.util.common
```

---

**`itsGanni/Cardinal__Catholicism_-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,860

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Cardinal__Catholicism_-clustered
This model is a fine-tuned version of [nandysoham/11-clustered](https://huggingface.co/nandysoham/11-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4076
- Train End Logits Accuracy: 0.8889
- Train Start Logits Accuracy: 0.9132
- Validation Loss: 0.6765
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 0.75
- Epoch: 0
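The usage sections of this card are placeholders, so here is a minimal inference sketch with the `transformers` pipeline; the question and context are invented, and the same pattern applies to the other `*-clustered` question-answering checkpoints in this preview:

```python
from transformers import pipeline

# The repo ships TensorFlow weights (tensorflow: true above); pipeline()
# picks the framework automatically.
qa = pipeline("question-answering", model="itsGanni/Cardinal__Catholicism_-clustered")

# Invented example input.
result = qa(question="Who appoints cardinals?",
            context="Cardinals are appointed by the Pope and advise him.")
print(result["answer"], result["score"])
```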
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
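The flattened optimizer dict above deserializes to ordinary Keras objects; here is a minimal reconstruction sketch, assuming the TensorFlow 2.9 API listed under framework versions (the identical config recurs in the other Keras-generated cards in this preview):

```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear ramp from 2e-5 to 0 over 18 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=18,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999,
    epsilon=1e-8, amsgrad=False,
)
```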
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4076 | 0.8889 | 0.9132 | 0.6765 | 0.75 | 0.75 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`Sushant45/Web_browser-clustered`** · author: Sushant45 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 24 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,863

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sushant45/Web_browser-clustered
This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1326
- Train End Logits Accuracy: 0.9792
- Train Start Logits Accuracy: 0.9444
- Validation Loss: 0.3331
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1326 | 0.9792 | 0.9444 | 0.3331 | 0.6667 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Human_Development_Index-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,857

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Human_Development_Index-clustered
This model is a fine-tuned version of [nandysoham/4-clustered](https://huggingface.co/nandysoham/4-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3853
- Train End Logits Accuracy: 0.8924
- Train Start Logits Accuracy: 0.9167
- Validation Loss: 0.0903
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3853 | 0.8924 | 0.9167 | 0.0903 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`Sushant45/Catalan_language-clustered`** · author: Sushant45 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 24 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,871

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sushant45/Catalan_language-clustered
This model is a fine-tuned version of [nandysoham16/13-clustered_aug](https://huggingface.co/nandysoham16/13-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5260
- Train End Logits Accuracy: 0.8611
- Train Start Logits Accuracy: 0.8576
- Validation Loss: 0.8536
- Validation End Logits Accuracy: 0.7273
- Validation Start Logits Accuracy: 0.9091
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.5260 | 0.8611 | 0.8576 | 0.8536 | 0.7273 | 0.9091 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`Sushant45/Paper-clustered`** · author: Sushant45 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 26 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,856

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sushant45/Paper-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2701
- Train End Logits Accuracy: 0.9236
- Train Start Logits Accuracy: 0.9306
- Validation Loss: 1.0319
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 0.75
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2701 | 0.9236 | 0.9306 | 1.0319 | 0.75 | 0.75 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Heresy-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,848

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Heresy-clustered
This model is a fine-tuned version of [nandysoham/11-clustered](https://huggingface.co/nandysoham/11-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2615
- Train End Logits Accuracy: 0.9410
- Train Start Logits Accuracy: 0.9167
- Validation Loss: 1.8742
- Validation End Logits Accuracy: 0.3333
- Validation Start Logits Accuracy: 0.3333
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2615 | 0.9410 | 0.9167 | 1.8742 | 0.3333 | 0.3333 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`Sushant45/Adult_contemporary_music-clustered`** · author: Sushant45 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 28 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,879

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sushant45/Adult_contemporary_music-clustered
This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2951
- Train End Logits Accuracy: 0.9375
- Train Start Logits Accuracy: 0.9028
- Validation Loss: 0.5855
- Validation End Logits Accuracy: 0.7143
- Validation Start Logits Accuracy: 0.8571
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2951 | 0.9375 | 0.9028 | 0.5855 | 0.7143 | 0.8571 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Warsaw_Pact-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,847

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Warsaw_Pact-clustered
This model is a fine-tuned version of [nandysoham/12-clustered](https://huggingface.co/nandysoham/12-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1248
- Train End Logits Accuracy: 0.9688
- Train Start Logits Accuracy: 0.9514
- Validation Loss: 0.0163
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1248 | 0.9688 | 0.9514 | 0.0163 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`nandysoham16/Canadian_Armed_Forces-clustered`** · author: nandysoham16 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 10 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,874

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Canadian_Armed_Forces-clustered
This model is a fine-tuned version of [nandysoham16/0-clustered_aug](https://huggingface.co/nandysoham16/0-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5493
- Train End Logits Accuracy: 0.8611
- Train Start Logits Accuracy: 0.7812
- Validation Loss: 0.3839
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.8000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.5493 | 0.8611 | 0.7812 | 0.3839 | 1.0 | 0.8000 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`Deep98/Wayback_Machine-clustered`** · author: Deep98 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,861

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Wayback_Machine-clustered
This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3449
- Train End Logits Accuracy: 0.9375
- Train Start Logits Accuracy: 0.8681
- Validation Loss: 0.0082
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3449 | 0.9375 | 0.8681 | 0.0082 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`thesunshine36/FineTune_Vit5_LR0_00001_time3`** · author: thesunshine36 · model_type: t5 · files_per_repo: 5 · downloads_30d: 10 · library: transformers · likes: 0 · pipeline: text2text-generation · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,556

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# FineTune_Vit5_LR0_00001_time3
This model is a fine-tuned version of [thesunshine36/FineTune_Vit5_LR0_00001_time2](https://huggingface.co/thesunshine36/FineTune_Vit5_LR0_00001_time2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6297
- Validation Loss: 0.5655
- Train Rouge1: 52.5683
- Train Rouge2: 31.3753
- Train Rougel: 44.4344
- Train Rougelsum: 44.4737
- Train Gen Len: 13.6985
- Epoch: 0
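The ROUGE figures above were logged by a Keras callback during training; here is a minimal sketch of computing comparable scores offline with the `evaluate` library (the prediction and reference strings are invented):

```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a short generated summary"],  # invented model output
    references=["a short reference summary"],   # invented gold summary
)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```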
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 0.6297 | 0.5655 | 52.5683 | 31.3753 | 44.4344 | 44.4737 | 13.6985 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Materialism-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,845

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Materialism-clustered
This model is a fine-tuned version of [nandysoham/7-clustered](https://huggingface.co/nandysoham/7-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1445
- Train End Logits Accuracy: 0.9792
- Train Start Logits Accuracy: 0.9618
- Validation Loss: 0.2256
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1445 | 0.9792 | 0.9618 | 0.2256 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`ishaankul67/Canadian_Armed_Forces-clustered`** · author: ishaankul67 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,873

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Canadian_Armed_Forces-clustered
This model is a fine-tuned version of [nandysoham16/0-clustered_aug](https://huggingface.co/nandysoham16/0-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6166
- Train End Logits Accuracy: 0.8333
- Train Start Logits Accuracy: 0.7361
- Validation Loss: 0.2280
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.8000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.6166 | 0.8333 | 0.7361 | 0.2280 | 1.0 | 0.8000 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Pub-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,842

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Pub-clustered
This model is a fine-tuned version of [nandysoham/16-clustered](https://huggingface.co/nandysoham/16-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4233
- Train End Logits Accuracy: 0.8819
- Train Start Logits Accuracy: 0.8507
- Validation Loss: 0.2006
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.9231
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4233 | 0.8819 | 0.8507 | 0.2006 | 1.0 | 0.9231 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Web_browser-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,850

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Web_browser-clustered
This model is a fine-tuned version of [nandysoham/20-clustered](https://huggingface.co/nandysoham/20-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2294
- Train End Logits Accuracy: 0.9618
- Train Start Logits Accuracy: 0.9062
- Validation Loss: 0.2672
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2294 | 0.9618 | 0.9062 | 0.2672 | 0.6667 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Catalan_language-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,858

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Catalan_language-clustered
This model is a fine-tuned version of [nandysoham/13-clustered](https://huggingface.co/nandysoham/13-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7867
- Train End Logits Accuracy: 0.8125
- Train Start Logits Accuracy: 0.7639
- Validation Loss: 0.4452
- Validation End Logits Accuracy: 0.8182
- Validation Start Logits Accuracy: 0.8182
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.7867 | 0.8125 | 0.7639 | 0.4452 | 0.8182 | 0.8182 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Paper-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,840

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Paper-clustered
This model is a fine-tuned version of [nandysoham/16-clustered](https://huggingface.co/nandysoham/16-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4880
- Train End Logits Accuracy: 0.8715
- Train Start Logits Accuracy: 0.875
- Validation Loss: 0.4717
- Validation End Logits Accuracy: 0.5
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4880 | 0.8715 | 0.875 | 0.4717 | 0.5 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`itsGanni/Adult_contemporary_music-clustered`** · author: itsGanni · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,866

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# itsGanni/Adult_contemporary_music-clustered
This model is a fine-tuned version of [nandysoham/15-clustered](https://huggingface.co/nandysoham/15-clustered) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4347
- Train End Logits Accuracy: 0.9062
- Train Start Logits Accuracy: 0.8715
- Validation Loss: 0.5431
- Validation End Logits Accuracy: 0.7143
- Validation Start Logits Accuracy: 0.7143
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4347 | 0.9062 | 0.8715 | 0.5431 | 0.7143 | 0.7143 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`eshanck/my_awesome_wnut_model`** · author: eshanck · model_type: distilbert · files_per_repo: 10 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: token-classification · pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['wnut_17'] · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,445

Readme:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3349
- Precision: 0.2540
- Recall: 0.0297
- F1: 0.0531
- Accuracy: 0.9283
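A minimal inference sketch for this checkpoint with the `transformers` pipeline (the input sentence is invented; given the low recall reported above, expect few entities back):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="eshanck/my_awesome_wnut_model",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Elon Musk opened a new office in Berlin last week."))  # invented input
```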
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
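Those settings map onto `transformers.TrainingArguments` roughly as follows; a sketch that assumes defaults for everything not listed (the `output_dir` is hypothetical):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_awesome_wnut_model",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",          # linear schedule, as listed above
    num_train_epochs=2,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the Trainer default.
```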
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 54 | 0.3619 | 0.0 | 0.0 | 0.0 | 0.9256 |
| No log | 2.0 | 108 | 0.3349 | 0.2540 | 0.0297 | 0.0531 | 0.9283 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`nandysoham16/Cardinal__Catholicism_-clustered`** · author: nandysoham16 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 10 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,875

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Cardinal__Catholicism_-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3017
- Train End Logits Accuracy: 0.9167
- Train Start Logits Accuracy: 0.9306
- Validation Loss: 0.2812
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3017 | 0.9167 | 0.9306 | 0.2812 | 0.75 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`ishaankul67/Cardinal__Catholicism_-clustered`** · author: ishaankul67 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,873

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Cardinal__Catholicism_-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3039
- Train End Logits Accuracy: 0.9167
- Train Start Logits Accuracy: 0.9444
- Validation Loss: 0.2086
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3039 | 0.9167 | 0.9444 | 0.2086 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`Deep98/Canadian_Armed_Forces-clustered`** · author: Deep98 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,871

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Canadian_Armed_Forces-clustered
This model is a fine-tuned version of [nandysoham16/0-clustered_aug](https://huggingface.co/nandysoham16/0-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4369
- Train End Logits Accuracy: 0.8785
- Train Start Logits Accuracy: 0.8507
- Validation Loss: 1.3005
- Validation End Logits Accuracy: 0.6000
- Validation Start Logits Accuracy: 0.4000
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4369 | 0.8785 | 0.8507 | 1.3005 | 0.6000 | 0.4000 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`grullborg/g_yuusukeStyle`** · author: grullborg · model_type: null · files_per_repo: 3 · downloads_30d: 0 · library: null · likes: 2 · pipeline: text-to-image · pytorch: false · tensorflow: false · jax: false · license: creativeml-openrail-m · languages: ['en'] · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['stable-diffusion', 'text-to-image', 'lora'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,784

Readme:
# G Yuusuke Style LoRA
## Usage
To use this LoRA, download the file and drop it into the `\stable-diffusion-webui\models\Lora` folder.
To use it in a prompt, please refer to the extra networks panel in your Automatic1111 webui.
I highly recommend using it at around 0.8 strength for the best results.
It seems to struggle a lot with hands, but I haven't run many tests on it, so maybe my data is biased.
If you'd like to support the amazing artist on whose work this LoRA was trained, I'd highly recommend you check out [Gユウスケ](https://gyuusuke.exblog.jp/).
Have fun :)
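Outside the Automatic1111 UI, a rough `diffusers` equivalent might look like the sketch below. Assumptions are flagged in the comments: the card does not name a base checkpoint, `load_lora_weights` requires a reasonably recent `diffusers`, and the weight file layout in the repo is guessed:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model is an assumption; the card does not say which SD checkpoint to use.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumes the repo's LoRA file is auto-discoverable; pass weight_name=... otherwise.
pipe.load_lora_weights("grullborg/g_yuusukeStyle")

image = pipe(
    "a portrait in g_yuusuke style",        # invented prompt
    cross_attention_kwargs={"scale": 0.8},  # ~0.8 strength, as recommended above
).images[0]
image.save("sample.png")
```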
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/eKYZIhY.png width=50% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/Q1T08xG.png width=50% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/TFscR9S.png width=50% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all of your users (please read the license entirely and carefully).
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

---

**`nandysoham16/Human_Development_Index-clustered`** · author: nandysoham16 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 10 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,873

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Human_Development_Index-clustered
This model is a fine-tuned version of [nandysoham16/4-clustered_aug](https://huggingface.co/nandysoham16/4-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1912
- Train End Logits Accuracy: 0.9722
- Train Start Logits Accuracy: 0.9653
- Validation Loss: 0.0747
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1912 | 0.9722 | 0.9653 | 0.0747 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`ishaankul67/Human_Development_Index-clustered`** · author: ishaankul67 · model_type: distilbert · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: question-answering · pytorch: false · tensorflow: true · jax: false · license: mit · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,875

Readme:
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Human_Development_Index-clustered
This model is a fine-tuned version of [nandysoham16/4-clustered_aug](https://huggingface.co/nandysoham16/4-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1775
- Train End Logits Accuracy: 0.9722
- Train Start Logits Accuracy: 0.9340
- Validation Loss: 0.7431
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1775 | 0.9722 | 0.9340 | 0.7431 | 0.6667 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2

---

**`Zekunli/flan-t5-large-extraction-cnndm_2000-all`** · author: Zekunli · model_type: t5 · files_per_repo: 10 · downloads_30d: 29 · library: transformers · likes: 0 · pipeline: text2text-generation · pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,142

Readme:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-extraction-cnndm_2000-all
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7621
- Rouge1: 34.9258
- Rouge2: 15.2218
- Rougel: 29.9813
- Rougelsum: 29.9443
- Gen Len: 18.986
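A minimal generation sketch for this checkpoint (the input text is invented, and the prompt format is an assumption, since the card does not document how extraction inputs were framed):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Zekunli/flan-t5-large-extraction-cnndm_2000-all"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Invented input; the "summarize:" prefix is an assumption.
inputs = tokenizer("summarize: The quick brown fox jumped over the lazy dog.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```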
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1649 | 0.8 | 200 | 1.8161 | 34.9143 | 14.9085 | 29.8629 | 29.811 | 19.0 |
| 1.9114 | 1.6 | 400 | 1.7713 | 34.8733 | 14.6521 | 29.8186 | 29.7829 | 18.986 |
| 1.7997 | 2.4 | 600 | 1.7917 | 34.1481 | 14.7443 | 29.7078 | 29.6144 | 18.99 |
| 1.7477 | 3.2 | 800 | 1.7771 | 35.0882 | 15.3186 | 29.9749 | 29.9643 | 18.99 |
| 1.6821 | 4.0 | 1000 | 1.7621 | 34.9258 | 15.2218 | 29.9813 | 29.9443 | 18.986 |
| 1.6301 | 4.8 | 1200 | 1.7796 | 34.3705 | 14.8013 | 29.6128 | 29.5457 | 18.99 |
| 1.597 | 5.6 | 1400 | 1.7669 | 35.4342 | 15.7045 | 30.4953 | 30.4293 | 18.99 |
| 1.5543 | 6.4 | 1600 | 1.7857 | 34.5322 | 15.0244 | 29.8476 | 29.7596 | 18.99 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1

---

**`Zekunli/flan-t5-large-extraction-cnndm_4000-all`** · author: Zekunli · model_type: t5 · files_per_repo: 10 · downloads_30d: 27 · library: transformers · likes: 0 · pipeline: text2text-generation · pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,140

Readme:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-extraction-cnndm_4000-all
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7290
- Rouge1: 35.0775
- Rouge2: 15.2209
- Rougel: 30.1796
- Rougelsum: 30.1599
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1464 | 0.4 | 200 | 1.8323 | 35.2242 | 15.3495 | 30.142 | 30.1331 | 19.0 |
| 1.9817 | 0.8 | 400 | 1.7729 | 34.3798 | 14.7287 | 29.5447 | 29.6052 | 18.986 |
| 1.8842 | 1.2 | 600 | 1.7602 | 34.5807 | 15.1707 | 29.7768 | 29.8081 | 18.986 |
| 1.8129 | 1.6 | 800 | 1.7629 | 34.5103 | 15.231 | 29.9182 | 29.9333 | 19.0 |
| 1.8238 | 2.0 | 1000 | 1.7290 | 35.0775 | 15.2209 | 30.1796 | 30.1599 | 19.0 |
| 1.7199 | 2.4 | 1200 | 1.7354 | 34.6552 | 15.7256 | 30.1894 | 30.2207 | 18.998 |
| 1.7128 | 2.8 | 1400 | 1.7407 | 34.7198 | 15.5771 | 30.0585 | 30.0442 | 19.0 |
| 1.6816 | 3.2 | 1600 | 1.7508 | 34.9611 | 15.5792 | 30.3518 | 30.3638 | 19.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1

---

**`Zekunli/flan-t5-large-da-multiwoz_500`** · author: Zekunli · model_type: t5 · files_per_repo: 10 · downloads_30d: 61 · library: transformers · likes: 0 · pipeline: text2text-generation · pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null · prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,007

Readme:
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz_500
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3826
- Accuracy: 37.4297
- Num: 3689
- Gen Len: 16.4142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 1.3527 | 0.47 | 200 | 0.5645 | 25.0872 | 3689 | 12.6606 |
| 0.6276 | 0.93 | 400 | 0.4722 | 31.0261 | 3689 | 15.3814 |
| 0.539 | 1.4 | 600 | 0.4367 | 34.1584 | 3689 | 15.8056 |
| 0.5087 | 1.86 | 800 | 0.4164 | 35.1677 | 3689 | 15.6544 |
| 0.4633 | 2.33 | 1000 | 0.4112 | 34.1615 | 3689 | 15.7842 |
| 0.463 | 2.79 | 1200 | 0.3961 | 36.5992 | 3689 | 16.4803 |
| 0.4437 | 3.26 | 1400 | 0.3895 | 36.7915 | 3689 | 16.5259 |
| 0.4328 | 3.72 | 1600 | 0.3874 | 36.7043 | 3689 | 16.2385 |
| 0.4189 | 4.19 | 1800 | 0.3826 | 37.4297 | 3689 | 16.4142 |
| 0.4239 | 4.65 | 2000 | 0.3804 | 37.2685 | 3689 | 16.2329 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
| Zekunli/flan-t5-large-da-multiwoz_1000 | Zekunli | t5 | 10 | 0 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,561 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz_1000
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3538
- Accuracy: 41.3747
- Num: 3689
- Gen Len: 15.5115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 1.3315 | 0.24 | 200 | 0.5697 | 25.9543 | 3689 | 14.556 |
| 0.6418 | 0.48 | 400 | 0.4645 | 30.0503 | 3689 | 14.9314 |
| 0.5433 | 0.72 | 600 | 0.4307 | 31.9506 | 3689 | 16.1515 |
| 0.4909 | 0.95 | 800 | 0.4177 | 34.7593 | 3689 | 15.418 |
| 0.4769 | 1.19 | 1000 | 0.3996 | 35.0943 | 3689 | 14.9607 |
| 0.4491 | 1.43 | 1200 | 0.3881 | 36.2741 | 3689 | 15.543 |
| 0.4531 | 1.67 | 1400 | 0.3820 | 35.7704 | 3689 | 14.1583 |
| 0.4322 | 1.91 | 1600 | 0.3726 | 37.4853 | 3689 | 15.961 |
| 0.4188 | 2.15 | 1800 | 0.3699 | 38.4117 | 3689 | 15.0773 |
| 0.4085 | 2.38 | 2000 | 0.3674 | 38.5353 | 3689 | 15.4012 |
| 0.4063 | 2.62 | 2200 | 0.3606 | 40.0046 | 3689 | 15.3546 |
| 0.3977 | 2.86 | 2400 | 0.3570 | 40.6543 | 3689 | 15.704 |
| 0.3992 | 3.1 | 2600 | 0.3549 | 40.4284 | 3689 | 15.7446 |
| 0.3828 | 3.34 | 2800 | 0.3538 | 41.3747 | 3689 | 15.5115 |
| 0.3792 | 3.58 | 3000 | 0.3539 | 39.8513 | 3689 | 14.7951 |
| 0.3914 | 3.81 | 3200 | 0.3498 | 41.0388 | 3689 | 15.4153 |
| 0.3707 | 4.05 | 3400 | 0.3498 | 40.9596 | 3689 | 16.3136 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
| Deep98/Cardinal__Catholicism_-clustered | Deep98 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,869 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Cardinal__Catholicism_-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3075
- Train End Logits Accuracy: 0.8958
- Train Start Logits Accuracy: 0.9306
- Validation Loss: 1.3105
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 0.5
- Epoch: 0
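For reference, a minimal sketch of loading this card's TensorFlow weights for extractive QA; the question and context are illustrative, not drawn from the evaluation data. The same pattern applies to the other `*-clustered` checkpoints in this dump with the model id swapped.
```python
from transformers import pipeline

# framework="tf" because this repo ships TensorFlow (Keras) weights.
qa = pipeline(
    "question-answering",
    model="Deep98/Cardinal__Catholicism_-clustered",
    framework="tf",
)
result = qa(
    question="Who appoints cardinals?",  # illustrative
    context="Cardinals are senior clergy of the Catholic Church appointed by the Pope.",
)
print(result["answer"], round(result["score"], 3))
```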
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3075 | 0.8958 | 0.9306 | 1.3105 | 0.75 | 0.5 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| zbenmo/poca-SoccerTwos | zbenmo | null | 20 | 727 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos'] | false | true | true | 840 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: zbenmo/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
| Zekunli/flan-t5-large-intent-dailydialog_500 | Zekunli | t5 | 10 | 27 | transformers | 1 | text2text-generation | true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,469 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-intent-dailydialog_500
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3390
- Accuracy: 47.6586
- Num: 1687
- Gen Len: 4.5691
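A minimal sketch of querying the model through the text2text pipeline. The prompt template used during fine-tuning is not documented on this card, so passing the raw utterance below is an assumption.
```python
from transformers import pipeline

intent_tagger = pipeline(
    "text2text-generation",
    model="Zekunli/flan-t5-large-intent-dailydialog_500",
)
# Raw utterance (illustrative); the fine-tuning input format may differ.
utterance = "Could you tell me how to get to the train station?"
# Gen Len above is ~4.6 tokens, so predicted intent labels are short.
print(intent_tagger(utterance, max_length=8)[0]["generated_text"])
```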
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 0.2512 | 0.49 | 200 | 0.3390 | 47.6586 | 1687 | 4.5691 |
| 0.2108 | 0.97 | 400 | 0.4662 | 43.2128 | 1687 | 4.5762 |
| 0.1785 | 1.46 | 600 | 0.4677 | 44.5169 | 1687 | 4.5975 |
| 0.1644 | 1.95 | 800 | 0.4556 | 45.2875 | 1687 | 4.5987 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
| nandysoham16/Heresy-clustered | nandysoham16 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,864 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Heresy-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1609
- Train End Logits Accuracy: 0.9583
- Train Start Logits Accuracy: 0.9549
- Validation Loss: 0.7543
- Validation End Logits Accuracy: 0.3333
- Validation Start Logits Accuracy: 0.6667
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1609 | 0.9583 | 0.9549 | 0.7543 | 0.3333 | 0.6667 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| ishaankul67/Heresy-clustered | ishaankul67 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,860 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Heresy-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1730
- Train End Logits Accuracy: 0.9618
- Train Start Logits Accuracy: 0.9514
- Validation Loss: 0.4811
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.6667
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1730 | 0.9618 | 0.9514 | 0.4811 | 1.0 | 0.6667 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| Tune-A-Video-library/birdgif-test | Tune-A-Video-library | null | 3 | 0 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'text-to-video', 'tune-a-video'] | false | true | true | 462 |
# Tune-A-Video - birdgif-test
## Model description
- Base model: [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
- Training prompt: a bird is flapping its wing
## Related papers:
- [Tune-A-Video](https://arxiv.org/abs/2212.11565): One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
- [Stable-Diffusion](https://arxiv.org/abs/2112.10752): High-Resolution Image Synthesis with Latent Diffusion Models
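This card ships no inference snippet; the sketch below follows the usage pattern published on other Tune-A-Video-library cards and assumes the `tuneavideo` package from the official repository is installed. The sampler settings and output path are illustrative.
```python
import torch
from tuneavideo.models.unet import UNet3DConditionModel
from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
from tuneavideo.util import save_videos_grid

# Fine-tuned 3D UNet from this repo, mounted on the SD 1.4 base weights.
unet = UNet3DConditionModel.from_pretrained(
    "Tune-A-Video-library/birdgif-test", subfolder="unet", torch_dtype=torch.float16
).to("cuda")
pipe = TuneAVideoPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", unet=unet, torch_dtype=torch.float16
).to("cuda")

prompt = "a bird is flapping its wing"  # the training prompt; edited prompts also work
video = pipe(prompt, video_length=8, height=512, width=512,
             num_inference_steps=50, guidance_scale=7.5).videos
save_videos_grid(video, f"./{prompt}.gif")
```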
|
| arnonl/poca-SoccerTwos | arnonl | null | 30 | 669 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos'] | false | true | true | 840 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: arnonl/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
| Deep98/Human_Development_Index-clustered | Deep98 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,870 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Human_Development_Index-clustered
This model is a fine-tuned version of [nandysoham16/4-clustered_aug](https://huggingface.co/nandysoham16/4-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1876
- Train End Logits Accuracy: 0.9757
- Train Start Logits Accuracy: 0.9271
- Validation Loss: 0.6587
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1876 | 0.9757 | 0.9271 | 0.6587 | 0.6667 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| KoichiYasuoka/deberta-base-japanese-juman-ud-goeswith | KoichiYasuoka | deberta-v2 | 11 | 43 | transformers | 0 | token-classification | true | false | false | cc-by-sa-4.0 | ['ja'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['japanese', 'wikipedia', 'cc100', 'oscar', 'pos', 'dependency-parsing'] | false | true | true | 630 |
# deberta-base-japanese-juman-ud-goeswith
## Model Description
This is a DeBERTa(V2) model pretrained on Japanese Wikipedia, CC-100, and OSCAR texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [deberta-v2-base-japanese](https://huggingface.co/ku-nlp/deberta-v2-base-japanese).
## How to Use
```python
from transformers import pipeline

nlp = pipeline(
    "universal-dependencies",
    "KoichiYasuoka/deberta-base-japanese-juman-ud-goeswith",
    trust_remote_code=True,
    aggregation_strategy="simple",
)
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
[fugashi](https://pypi.org/project/fugashi) is required.
|
| ishaankul67/Warsaw_Pact-clustered | ishaankul67 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,862 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Warsaw_Pact-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1010
- Train End Logits Accuracy: 0.9653
- Train Start Logits Accuracy: 0.9826
- Validation Loss: 0.0420
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1010 | 0.9653 | 0.9826 | 0.0420 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| nandysoham16/Warsaw_Pact-clustered | nandysoham16 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,863 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Warsaw_Pact-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0828
- Train End Logits Accuracy: 0.9792
- Train Start Logits Accuracy: 0.9826
- Validation Loss: 2.2175
- Validation End Logits Accuracy: 0.0
- Validation Start Logits Accuracy: 0.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.0828 | 0.9792 | 0.9826 | 2.2175 | 0.0 | 0.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| Zekunli/flan-t5-large-intent-dailydialog_1000 | Zekunli | t5 | 10 | 4 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,619 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-intent-dailydialog_1000
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3254
- Accuracy: 47.0658
- Num: 1687
- Gen Len: 4.6793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 24
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:|
| 1.0256 | 0.24 | 200 | 0.4561 | 30.7054 | 1687 | 4.7184 |
| 0.3036 | 0.47 | 400 | 0.3254 | 47.0658 | 1687 | 4.6793 |
| 0.2725 | 0.71 | 600 | 0.3622 | 44.3391 | 1687 | 4.6414 |
| 0.2595 | 0.95 | 800 | 0.3436 | 45.4653 | 1687 | 4.6959 |
| 0.2303 | 1.18 | 1000 | 0.3742 | 44.1612 | 1687 | 4.4434 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
| mr-desilva/finetuned-multidial | mr-desilva | bart | 12 | 0 | transformers | 0 | text2text-generation | true | false | false | mit | null | ['samsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 970 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-multidial
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the samsum dataset.
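In the absence of a usage example, here is a minimal sketch of summarizing a samsum-style dialogue with this checkpoint; the exchange below is illustrative.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mr-desilva/finetuned-multidial")

# samsum-style input: speaker-prefixed turns separated by newlines (illustrative).
dialogue = (
    "Amanda: Are we still on for lunch tomorrow?\n"
    "Jim: Yes, 12:30 at the usual place.\n"
    "Amanda: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```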
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| NoNameFound/pocaresnet-SoccerTwos | NoNameFound | null | 20 | 722 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos'] | false | true | true | 851 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: NoNameFound/pocaresnet-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
| Deep98/Heresy-clustered | Deep98 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,855 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Heresy-clustered
This model is a fine-tuned version of [nandysoham16/11-clustered_aug](https://huggingface.co/nandysoham16/11-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2244
- Train End Logits Accuracy: 0.9479
- Train Start Logits Accuracy: 0.9062
- Validation Loss: 0.4860
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2244 | 0.9479 | 0.9062 | 0.4860 | 0.6667 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| ishaankul67/Materialism-clustered | ishaankul67 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,860 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Materialism-clustered
This model is a fine-tuned version of [nandysoham16/7-clustered_aug](https://huggingface.co/nandysoham16/7-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0534
- Train End Logits Accuracy: 0.9965
- Train Start Logits Accuracy: 0.9896
- Validation Loss: 0.3066
- Validation End Logits Accuracy: 0.5
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.0534 | 0.9965 | 0.9896 | 0.3066 | 0.5 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| nandysoham16/Materialism-clustered | nandysoham16 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,861 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Materialism-clustered
This model is a fine-tuned version of [nandysoham16/7-clustered_aug](https://huggingface.co/nandysoham16/7-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0800
- Train End Logits Accuracy: 0.9931
- Train Start Logits Accuracy: 0.9792
- Validation Loss: 0.0644
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.0800 | 0.9931 | 0.9792 | 0.0644 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| ishaankul67/Pub-clustered | ishaankul67 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,860 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Pub-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3332
- Train End Logits Accuracy: 0.9028
- Train Start Logits Accuracy: 0.9062
- Validation Loss: 0.5714
- Validation End Logits Accuracy: 0.7692
- Validation Start Logits Accuracy: 0.7692
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3332 | 0.9028 | 0.9062 | 0.5714 | 0.7692 | 0.7692 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| Deep98/Warsaw_Pact-clustered | Deep98 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,857 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Warsaw_Pact-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1156
- Train End Logits Accuracy: 0.9688
- Train Start Logits Accuracy: 0.9757
- Validation Loss: 0.1900
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1156 | 0.9688 | 0.9757 | 0.1900 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| nandysoham16/Pub-clustered | nandysoham16 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,861 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Pub-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3450
- Train End Logits Accuracy: 0.9028
- Train Start Logits Accuracy: 0.8854
- Validation Loss: 0.2950
- Validation End Logits Accuracy: 0.8462
- Validation Start Logits Accuracy: 0.9231
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3450 | 0.9028 | 0.8854 | 0.2950 | 0.8462 | 0.9231 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| s8n29/finetuned_model_1 | s8n29 | deberta-v2 | 11 | 14 | transformers | 0 | question-answering | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,109 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_model_1
This model is a fine-tuned version of [deepset/deberta-v3-base-squad2](https://huggingface.co/deepset/deberta-v3-base-squad2) on a subset of the SQuAD 2.0 dataset.
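For completeness, a minimal sketch of running the fine-tuned checkpoint for extractive QA; the question and context below are illustrative.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="s8n29/finetuned_model_1")
result = qa(
    question="What does SQuAD 2.0 add over SQuAD 1.1?",  # illustrative
    context=(
        "SQuAD 2.0 combines the SQuAD 1.1 questions with unanswerable "
        "questions written adversarially by crowdworkers."
    ),
)
print(result)
```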
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 308 | 0.0850 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| Laurie/billsum_t5_model | Laurie | t5 | 14 | 9 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['billsum'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,692 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_t5_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5045
- Rouge1: 0.1393
- Rouge2: 0.0511
- Rougel: 0.117
- Rougelsum: 0.1171
- Gen Len: 19.0
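A minimal sketch of querying the checkpoint directly. t5-small derivatives are conventionally prompted with the `summarize: ` task prefix, which is assumed here; the bill text is illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Laurie/billsum_t5_model"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "summarize: " prefix assumed, as for the t5-small base model.
text = "summarize: " + (
    "This Act amends title XVIII of the Social Security Act to expand "
    "coverage of preventive services for rural beneficiaries."
)
ids = tok(text, return_tensors="pt", truncation=True).input_ids
out = model.generate(ids, max_new_tokens=20)  # Gen Len above is 19.0
print(tok.decode(out[0], skip_special_tokens=True))
```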
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8011 | 0.1314 | 0.0398 | 0.111 | 0.1107 | 19.0 |
| No log | 2.0 | 124 | 2.5850 | 0.1371 | 0.049 | 0.1157 | 0.1158 | 19.0 |
| No log | 3.0 | 186 | 2.5221 | 0.1407 | 0.0531 | 0.1184 | 0.1186 | 19.0 |
| No log | 4.0 | 248 | 2.5045 | 0.1393 | 0.0511 | 0.117 | 0.1171 | 19.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
| ishaankul67/Web_browser-clustered | ishaankul67 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,865 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Web_browser-clustered
This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1934
- Train End Logits Accuracy: 0.9861
- Train Start Logits Accuracy: 0.9167
- Validation Loss: 0.2436
- Validation End Logits Accuracy: 0.6667
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1934 | 0.9861 | 0.9167 | 0.2436 | 0.6667 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| nandysoham16/Web_browser-clustered | nandysoham16 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,863 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Web_browser-clustered
This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1876
- Train End Logits Accuracy: 0.9792
- Train Start Logits Accuracy: 0.9375
- Validation Loss: 0.0125
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1876 | 0.9792 | 0.9375 | 0.0125 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| MrDivakaruni/a2c-AntBulletEnv-v0 | MrDivakaruni | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename; replace with the .zip actually stored in the repo.
checkpoint = load_from_hub("MrDivakaruni/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
| Deep98/Materialism-clustered | Deep98 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,855 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Materialism-clustered
This model is a fine-tuned version of [nandysoham16/7-clustered_aug](https://huggingface.co/nandysoham16/7-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0705
- Train End Logits Accuracy: 0.9896
- Train Start Logits Accuracy: 0.9722
- Validation Loss: 0.2530
- Validation End Logits Accuracy: 0.5
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.0705 | 0.9896 | 0.9722 | 0.2530 | 0.5 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| ishaankul67/Catalan_language-clustered | ishaankul67 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,873 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Catalan_language-clustered
This model is a fine-tuned version of [nandysoham16/13-clustered_aug](https://huggingface.co/nandysoham16/13-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5041
- Train End Logits Accuracy: 0.8611
- Train Start Logits Accuracy: 0.8368
- Validation Loss: 1.9634
- Validation End Logits Accuracy: 0.6364
- Validation Start Logits Accuracy: 0.8182
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.5041 | 0.8611 | 0.8368 | 1.9634 | 0.6364 | 0.8182 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| nandysoham16/Catalan_language-clustered | nandysoham16 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,871 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Catalan_language-clustered
This model is a fine-tuned version of [nandysoham16/13-clustered_aug](https://huggingface.co/nandysoham16/13-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6225
- Train End Logits Accuracy: 0.8646
- Train Start Logits Accuracy: 0.8472
- Validation Loss: 0.2992
- Validation End Logits Accuracy: 0.9091
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.6225 | 0.8646 | 0.8472 | 0.2992 | 0.9091 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| Deep98/Pub-clustered | Deep98 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,855 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Pub-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3841
- Train End Logits Accuracy: 0.8993
- Train Start Logits Accuracy: 0.8576
- Validation Loss: 0.2110
- Validation End Logits Accuracy: 0.9231
- Validation Start Logits Accuracy: 0.8462
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3841 | 0.8993 | 0.8576 | 0.2110 | 0.9231 | 0.8462 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| ishaankul67/Paper-clustered | ishaankul67 | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,858 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Paper-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2903
- Train End Logits Accuracy: 0.9167
- Train Start Logits Accuracy: 0.9271
- Validation Loss: 0.7340
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 0.75
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.2903 | 0.9167 | 0.9271 | 0.7340 | 0.75 | 0.75 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| nandysoham16/Paper-clustered | nandysoham16 | distilbert | 8 | 10 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,858 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Paper-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3349
- Train End Logits Accuracy: 0.8854
- Train Start Logits Accuracy: 0.9132
- Validation Loss: 0.4416
- Validation End Logits Accuracy: 0.75
- Validation Start Logits Accuracy: 0.5
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3349 | 0.8854 | 0.9132 | 0.4416 | 0.75 | 0.5 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
| thesunshine36/FineTune_Vit5_LR0_00001_time4 | thesunshine36 | t5 | 5 | 10 | transformers | 0 | text2text-generation | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,556 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# FineTune_Vit5_LR0_00001_time4
This model is a fine-tuned version of [thesunshine36/FineTune_Vit5_LR0_00001_time2](https://huggingface.co/thesunshine36/FineTune_Vit5_LR0_00001_time2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5291
- Validation Loss: 0.5800
- Train Rouge1: 52.3493
- Train Rouge2: 30.7526
- Train Rougel: 43.8269
- Train Rougelsum: 43.8511
- Train Gen Len: 14.3808
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 0.5291 | 0.5800 | 52.3493 | 30.7526 | 43.8269 | 43.8511 | 14.3808 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Deep98/Web_browser-clustered
|
Deep98
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,857 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Web_browser-clustered
This model is a fine-tuned version of [nandysoham16/20-clustered_aug](https://huggingface.co/nandysoham16/20-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1604
- Train End Logits Accuracy: 0.9826
- Train Start Logits Accuracy: 0.9375
- Validation Loss: 0.0757
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.1604 | 0.9826 | 0.9375 | 0.0757 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
antoooooine/poca-SoccerTwos
|
antoooooine
| null | 21 | 722 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 845 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: antoooooine/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ishaankul67/Adult_contemporary_music-clustered
|
ishaankul67
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,878 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ishaankul67/Adult_contemporary_music-clustered
This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3734
- Train End Logits Accuracy: 0.9167
- Train Start Logits Accuracy: 0.8889
- Validation Loss: 0.1582
- Validation End Logits Accuracy: 0.8571
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3734 | 0.9167 | 0.8889 | 0.1582 | 0.8571 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
nandysoham16/Adult_contemporary_music-clustered
|
nandysoham16
|
distilbert
| 8 | 10 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,876 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham16/Adult_contemporary_music-clustered
This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3351
- Train End Logits Accuracy: 0.8993
- Train Start Logits Accuracy: 0.8854
- Validation Loss: 0.1132
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3351 | 0.8993 | 0.8854 | 0.1132 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Deep98/Catalan_language-clustered
|
Deep98
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,865 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Catalan_language-clustered
This model is a fine-tuned version of [nandysoham16/13-clustered_aug](https://huggingface.co/nandysoham16/13-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6678
- Train End Logits Accuracy: 0.8229
- Train Start Logits Accuracy: 0.8333
- Validation Loss: 0.3377
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.9091
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.6678 | 0.8229 | 0.8333 | 0.3377 | 1.0 | 0.9091 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Deep98/Paper-clustered
|
Deep98
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,851 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Paper-clustered
This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4183
- Train End Logits Accuracy: 0.8611
- Train Start Logits Accuracy: 0.8785
- Validation Loss: 0.2040
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.4183 | 0.8611 | 0.8785 | 0.2040 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
neithangurthang/q-FrozenLake-v1-4x4-noSlippery
|
neithangurthang
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 404 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook;
# it downloads and unpickles the model dictionary from the Hub
model = load_from_hub(repo_id="neithangurthang/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
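A greedy rollout with the downloaded Q-table might look like this — a sketch assuming the course's dictionary layout (a `"qtable"` key holding the NumPy array) and the `is_slippery=False` setting suggested by the comment above:
```python
import gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()
done = False
total_reward = 0
while not done:
    # always take the best known action for the current state
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```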
|
MrDivakaruni/a2c-PandaReachDense-v2
|
MrDivakaruni
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
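In the meantime, a minimal loading-and-rollout sketch (the checkpoint filename and the `panda-gym` dependency are assumptions; check the repo's file list):
```python
import gym
import panda_gym  # registers PandaReachDense-v2  # noqa: F401
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(
    repo_id="MrDivakaruni/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```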
|
Deep98/Adult_contemporary_music-clustered
|
Deep98
|
distilbert
| 8 | 0 |
transformers
| 0 |
question-answering
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,870 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Deep98/Adult_contemporary_music-clustered
This model is a fine-tuned version of [nandysoham16/15-clustered_aug](https://huggingface.co/nandysoham16/15-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3270
- Train End Logits Accuracy: 0.8993
- Train Start Logits Accuracy: 0.8958
- Validation Loss: 0.0751
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.3270 | 0.8993 | 0.8958 | 0.0751 | 1.0 | 1.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
xuancaiqisehua/icefall_asr_tal-csasr_conv_emformer_transducer_stateless2
|
xuancaiqisehua
| null | 22 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 2,135 |
Note: This recipe is trained with the code from this PR https://github.com/k2-fsa/icefall/pull/874.
### Pre-trained conv_emformer_transducer_stateless2 models for the TAL_CSASR dataset with icefall.
The model was trained on the far-field data of TAL_CSASR with the scripts in icefall, based on the latest version of k2.
You can export the trained model to ncnn and run it with sherpa-ncnn.
### Training procedure
- Install k2 : https://k2.readthedocs.io/en/latest/installation/index.html
- Install lhotse : https://lhotse.readthedocs.io/en/latest/getting-started.html#installation
- Clone icefall : https://github.com/k2-fsa/icefall
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
- Prepare the data.
```
cd egs/tal_csasr_conv_emformer/ASR
bash ./prepare.sh
```
- Training
```
bash run.sh
```
- Evaluation results
The decoding results (CER%) on TAL_CSASR (dev and test) are listed below:
| decoding method | epoch (iter) | avg | dev | test |
|----|---|---|---|---|
| fast_beam_search | 6 | 3 | 11.36 | 11.37 |
- Export model to ncnn
reference : https://k2-fsa.github.io/icefall/model-export/export-ncnn.html
```
./conv_emformer_transducer_stateless2/export-for-ncnn.py \
--exp-dir exp_conv_emformer \
--lang_dir data/lang_char \
--epoch 5 \
--iter 8000 \
--avg 3 \
--use-averaged-model 1 \
--num-encoder-layers 12 \
--chunk-length 32 \
--cnn-module-kernel 31 \
--left-context-length 32 \
--right-context-length 8 \
--memory-size 32
```
- Export torchscript model via pnnx
```
pnnx ./encoder_jit_trace-pnnx.pt
pnnx ./decoder_jit_trace-pnnx.pt
pnnx ./joiner_jit_trace-pnnx.pt
```
- Modify the following two lines in your encoder_jit_trace-pnnx.ncnn.param file.

- Then you can use the following code to test the converted models.
```
# run decoding with the sherpa-ncnn binary
# (built from https://github.com/k2-fsa/sherpa-ncnn; the binary path is an assumption)
./build/bin/sherpa-ncnn \
  model/tokens.txt \
  model/encoder_jit_trace-pnnx.ncnn.param \
  model/encoder_jit_trace-pnnx.ncnn.bin \
  model/decoder_jit_trace-pnnx.ncnn.param \
  model/decoder_jit_trace-pnnx.ncnn.bin \
  model/joiner_jit_trace-pnnx.ncnn.param \
  model/joiner_jit_trace-pnnx.ncnn.bin \
  test_wavs/0.wav
```
|
evanarlian/wav2vec2-xls-r-164m-id
|
evanarlian
|
wav2vec2
| 11 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false | null | null |
['evanarlian/common_voice_11_0_id_filtered']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,335 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-164m-id
This model is a fine-tuned version of [evanarlian/distil-wav2vec2-xls-r-164m-id](https://huggingface.co/evanarlian/distil-wav2vec2-xls-r-164m-id) on the evanarlian/common_voice_11_0_id_filtered dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2865
- Wer: 0.2923
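For inference, a minimal sketch with the `pipeline` API (the audio path is a placeholder; input should be 16 kHz mono):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="evanarlian/wav2vec2-xls-r-164m-id")
# transcribe a local audio file (placeholder path)
print(asr("sample_id.wav")["text"])
```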
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 80.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4047 | 4.59 | 5000 | 1.0167 | 0.9138 |
| 0.587 | 9.18 | 10000 | 0.4639 | 0.5615 |
| 0.3782 | 13.77 | 15000 | 0.3375 | 0.4496 |
| 0.2867 | 18.37 | 20000 | 0.2881 | 0.4022 |
| 0.2519 | 22.96 | 25000 | 0.2775 | 0.3700 |
| 0.1941 | 27.55 | 30000 | 0.2701 | 0.3516 |
| 0.1727 | 32.14 | 35000 | 0.2795 | 0.3486 |
| 0.1448 | 36.73 | 40000 | 0.2878 | 0.3364 |
| 0.1251 | 41.32 | 45000 | 0.2649 | 0.3275 |
| 0.113 | 45.91 | 50000 | 0.2862 | 0.3168 |
| 0.0994 | 50.51 | 55000 | 0.2798 | 0.3091 |
| 0.0938 | 55.1 | 60000 | 0.2864 | 0.3070 |
| 0.0853 | 59.69 | 65000 | 0.2860 | 0.3069 |
| 0.0724 | 64.28 | 70000 | 0.2994 | 0.3003 |
| 0.0723 | 68.87 | 75000 | 0.2951 | 0.2983 |
| 0.0666 | 73.46 | 80000 | 0.2886 | 0.2941 |
| 0.0659 | 78.05 | 85000 | 0.2865 | 0.2923 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
Kaludi/Quick-Summarization
|
Kaludi
|
pegasus
| 8 | 7 |
transformers
| 0 |
summarization
| true | false | false | null |
['en']
|
['Kaludi/data-quick-summarization']
|
{'emissions': 460.6785690944488}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization']
| false | true | true | 1,143 |
# Quick Summarization
This is a Text Summarization Model that has been trained by [Kaludi](https://huggingface.co/Kaludi) to transform long and complex texts into concise and meaningful summaries. Get a quick and accurate overview of any document in seconds, saving you time and effort.
### Gradio
This model supports a [Gradio](https://github.com/gradio-app/gradio) Web UI to run the Quick-Summarization model:
[](https://huggingface.co/spaces/Kaludi/Quick-Summarizer_App)
## Validation Metrics
- Loss: 1.629
- Rouge1: 41.066
- Rouge2: 19.231
- RougeL: 28.295
- RougeLsum: 37.746
- Gen Len: 98.873
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Kaludi/autotrain-quik-sum-3280991391
```
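The same call from Python, as a sketch (this assumes the standard `/models/` Inference API endpoint; replace the key placeholder with your own token):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Kaludi/Quick-Summarization"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

def summarize(text: str) -> str:
    # POST the raw text and return the generated summary
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    response.raise_for_status()
    return response.json()[0]["summary_text"]

print(summarize("Long and complex text goes here..."))
```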
|
apatidar0/anil_bert-finetuned-ner
|
apatidar0
|
bert
| 12 | 12 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,523 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# anil_bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
- Precision: 0.9352
- Recall: 0.9517
- F1: 0.9434
- Accuracy: 0.9862
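A minimal inference sketch with the `pipeline` API (the aggregation strategy shown is one reasonable choice, not necessarily how the model was evaluated):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="apatidar0/anil_bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))
```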
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0897 | 1.0 | 1756 | 0.0690 | 0.9246 | 0.9325 | 0.9285 | 0.9820 |
| 0.0329 | 2.0 | 3512 | 0.0629 | 0.9301 | 0.9492 | 0.9395 | 0.9862 |
| 0.0172 | 3.0 | 5268 | 0.0610 | 0.9352 | 0.9517 | 0.9434 | 0.9862 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BachNgoH/Reinforce-CartPol-v1
|
BachNgoH
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MikkelGodsk/Reinforce-Cartpole-v1
|
MikkelGodsk
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Abdou/vit-swin-base-224-gpt2-image-captioning
|
Abdou
|
vision-encoder-decoder
| 16 | 27 |
transformers
| 0 |
image-to-text
| true | false | false |
mit
|
['en']
|
['coco']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,880 |
# vit-swin-base-224-gpt2-image-captioning
This model is a fine-tuned [VisionEncoderDecoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) model on 60% of the [COCO2014](https://huggingface.co/datasets/HuggingFaceM4/COCO) dataset.
It achieves the following results on the testing set:
- Loss: 0.7989
- Rouge1: 53.1153
- Rouge2: 24.2307
- Rougel: 51.5002
- Rougelsum: 51.4983
- Bleu: 17.7765
## Model description
The model was initialized with [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) as the vision encoder and [gpt2](https://huggingface.co/gpt2) as the decoder.
## Intended uses & limitations
You can use this model for image captioning only.
## How to use
You can either use the simple pipeline API:
```python
from transformers import pipeline
image_captioner = pipeline("image-to-text", model="Abdou/vit-swin-base-224-gpt2-image-captioning")
# infer the caption
caption = image_captioner("http://images.cocodataset.org/test-stuff2017/000000000019.jpg")[0]['generated_text']
print(f"caption: {caption}")
```
Or initialize everything for more flexibility:
```python
from transformers import VisionEncoderDecoderModel, GPT2TokenizerFast, ViTImageProcessor
from PIL import Image
import requests
import torch

# load an image from a local path or a URL
def load_image(image_path):
    if image_path.startswith("http"):
        return Image.open(requests.get(image_path, stream=True).raw).convert("RGB")
    return Image.open(image_path).convert("RGB")

# a function to perform inference
def get_caption(model, image_processor, tokenizer, image_path):
    image = load_image(image_path)
    # preprocess the image
    img = image_processor(image, return_tensors="pt").to(device)
    # generate the caption (using greedy decoding by default)
    output = model.generate(**img)
    # decode the output
    caption = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
    return caption
device = "cuda" if torch.cuda.is_available() else "cpu"
# load the fine-tuned image captioning model and corresponding tokenizer and image processor
model = VisionEncoderDecoderModel.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning").to(device)
tokenizer = GPT2TokenizerFast.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning")
image_processor = ViTImageProcessor.from_pretrained("Abdou/vit-swin-base-224-gpt2-image-captioning")
# target image
url = "http://images.cocodataset.org/test-stuff2017/000000000019.jpg"
# get the caption
caption = get_caption(model, image_processor, tokenizer, url)
print(f"caption: {caption}")
```
Output:
```
Two cows laying in a field with a sky background.
```
## Training procedure
You can check [this guide](https://www.thepythoncode.com/article/image-captioning-with-pytorch-and-transformers-in-python) to learn how this model was fine-tuned.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|
| 1.0018 | 0.38 | 2000 | 0.8860 | 38.6537 | 13.8145 | 35.3932 | 35.3935 | 8.2448 | 11.2946 |
| 0.8827 | 0.75 | 4000 | 0.8395 | 40.0458 | 14.8829 | 36.5321 | 36.5366 | 9.1169 | 11.2946 |
| 0.8378 | 1.13 | 6000 | 0.8140 | 41.2736 | 15.9576 | 37.5504 | 37.5512 | 9.871 | 11.2946 |
| 0.7913 | 1.51 | 8000 | 0.8012 | 41.6642 | 16.1987 | 37.8786 | 37.8891 | 10.0786 | 11.2946 |
| 0.7794 | 1.89 | 10000 | 0.7933 | 41.9119 | 16.3738 | 38.1062 | 38.1292 | 10.288 | 11.2946 |
Total training time: ~5 hours on NVIDIA A100 GPU.
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
shahidul034/chatGPT_classifier
|
shahidul034
|
distilbert
| 8 | 2 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 2,247 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# chatGPT_classifier
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0721
- Train Accuracy: 0.9805
- Validation Loss: 0.2447
- Validation Accuracy: 0.9114
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1929, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4382 | 0.7957 | 0.3053 | 0.8745 | 0 |
| 0.2381 | 0.9120 | 0.2671 | 0.8967 | 1 |
| 0.1154 | 0.9617 | 0.2447 | 0.9114 | 2 |
| 0.0701 | 0.9817 | 0.2447 | 0.9114 | 3 |
| 0.0723 | 0.9796 | 0.2447 | 0.9114 | 4 |
| 0.0730 | 0.9802 | 0.2447 | 0.9114 | 5 |
| 0.0706 | 0.9813 | 0.2447 | 0.9114 | 6 |
| 0.0711 | 0.9811 | 0.2447 | 0.9114 | 7 |
| 0.0697 | 0.9817 | 0.2447 | 0.9114 | 8 |
| 0.0721 | 0.9805 | 0.2447 | 0.9114 | 9 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bhpardo/clasificador-amazonproducts
|
bhpardo
|
bert
| 10 | 9 |
transformers
| 0 |
text-classification
| true | false | false | null | null |
['amazon_reviews_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 1,388 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-amazonproducts
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2425
- Accuracy: 0.5775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7097 | 1.0 | 658 | 1.1479 | 0.5704 |
| 0.4787 | 2.0 | 1316 | 1.2425 | 0.5775 |
| 0.3708 | 3.0 | 1974 | 1.2425 | 0.5775 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
thesunshine36/FineTune_Vit5_LR0_000001_time1
|
thesunshine36
|
t5
| 5 | 10 |
transformers
| 0 |
text2text-generation
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,503 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# FineTune_Vit5_LR0_000001_time1
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8667
- Validation Loss: 0.8405
- Train Rouge1: 45.0606
- Train Rouge2: 20.4988
- Train Rougel: 34.5672
- Train Rougelsum: 34.6023
- Train Gen Len: 13.7325
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 1.8667 | 0.8405 | 45.0606 | 20.4988 | 34.5672 | 34.6023 | 13.7325 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
vyacharin/ppo-Huggy
|
vyacharin
| null | 32 | 0 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 820 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: vyacharin/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chaewonlee/xlm-roberta-base-finetuned-panx-de
|
chaewonlee
|
xlm-roberta
| 16 | 0 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1: 0.8638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 |
| 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 |
| 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Scrwed/poca-SoccerTwos-long
|
Scrwed
| null | 20 | 692 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 845 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Scrwed/poca-SoccerTwos-long
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lora-library/simbatheog
|
lora-library
| null | 41 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 509 |
# LoRA DreamBooth - simbatheog
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "simbatheog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: disney lion cub




|
brand25/q-FrozenLake-v1-4x4-noSlippery
|
brand25
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 396 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook;
# it downloads and unpickles the model dictionary from the Hub
model = load_from_hub(repo_id="brand25/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LowGI/STT_Model_3
|
LowGI
|
wav2vec2
| 9 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 954 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STT_Model_3
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
keonju/korean_disease_ner
|
keonju
|
bert
| 24 | 9 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-sa-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,182 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean_disease_ner
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0855
- Precision: 0.9424
- Recall: 0.9475
- F1: 0.9449
- Accuracy: 0.9801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0663 | 1.0 | 15954 | 0.0599 | 0.9417 | 0.9246 | 0.9331 | 0.9763 |
| 0.0471 | 2.0 | 31908 | 0.0514 | 0.9408 | 0.9442 | 0.9425 | 0.9795 |
| 0.0384 | 3.0 | 47862 | 0.0511 | 0.9419 | 0.9471 | 0.9445 | 0.9802 |
| 0.0292 | 4.0 | 63816 | 0.0558 | 0.9456 | 0.9449 | 0.9453 | 0.9804 |
| 0.0253 | 5.0 | 79770 | 0.0572 | 0.9421 | 0.9507 | 0.9464 | 0.9807 |
| 0.0225 | 6.0 | 95724 | 0.0649 | 0.9474 | 0.9435 | 0.9454 | 0.9805 |
| 0.0209 | 7.0 | 111678 | 0.0695 | 0.9409 | 0.9504 | 0.9456 | 0.9805 |
| 0.019 | 8.0 | 127632 | 0.0742 | 0.9431 | 0.9469 | 0.9450 | 0.9802 |
| 0.0178 | 9.0 | 143586 | 0.0799 | 0.9425 | 0.9477 | 0.9451 | 0.9802 |
| 0.016 | 10.0 | 159540 | 0.0855 | 0.9424 | 0.9475 | 0.9449 | 0.9801 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
KKHyun/distilbert-base-uncased-finetuned-squad
|
KKHyun
|
distilbert
| 12 | 0 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,284 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2096 | 1.0 | 5533 | 1.1505 |
| 0.952 | 2.0 | 11066 | 1.1238 |
| 0.7347 | 3.0 | 16599 | 1.1664 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
LowGI/STT_Model_4
|
LowGI
|
wav2vec2
| 9 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,182 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# STT_Model_4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2311
- Wer: 0.1373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4196 | 5.68 | 500 | 0.9866 | 0.6983 |
| 0.3696 | 11.36 | 1000 | 0.8788 | 0.4010 |
| 0.1182 | 17.05 | 1500 | 0.2187 | 0.1947 |
| 0.0658 | 22.73 | 2000 | 0.2578 | 0.1757 |
| 0.0421 | 28.41 | 2500 | 0.2178 | 0.1609 |
| 0.0346 | 34.09 | 3000 | 0.2038 | 0.1584 |
| 0.0285 | 39.77 | 3500 | 0.2187 | 0.1594 |
| 0.0228 | 45.45 | 4000 | 0.2114 | 0.1445 |
| 0.0262 | 51.14 | 4500 | 0.2201 | 0.1631 |
| 0.0162 | 56.82 | 5000 | 0.2078 | 0.1424 |
| 0.0135 | 62.5 | 5500 | 0.1989 | 0.1393 |
| 0.0128 | 68.18 | 6000 | 0.2118 | 0.1410 |
| 0.0104 | 73.86 | 6500 | 0.2158 | 0.1361 |
| 0.0081 | 79.55 | 7000 | 0.2154 | 0.1348 |
| 0.0067 | 85.23 | 7500 | 0.2107 | 0.1358 |
| 0.0067 | 90.91 | 8000 | 0.2161 | 0.1373 |
| 0.0056 | 96.59 | 8500 | 0.2311 | 0.1373 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BachNgoH/Reinforce-Pixelcopter-PLE-v0
|
BachNgoH
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
prajdabre/CreoleM2M
|
prajdabre
|
mbart
| 9 | 48 |
transformers
| 0 |
text2text-generation
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 2,737 |
This is the CreoleM2M model. If you know, you know!
Usage:
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("prajdabre/CreoleM2M", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("prajdabre/CreoleM2M", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("prajdabre/CreoleM2M")
# Or use model = MBartForConditionalGeneration.from_pretrained("prajdabre/CreoleM2M")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ["<s>", "</s>", "<2acf>", "<2eng>", "<2bis>", "<2bzj>", "<2cbk>", "<2crs>", "<2djk>", "<2gul>", "<2hat>", "<2hwc>", "<2icr>", "<2jam>", "<2kri>", "<2ktu>", "<2mbf>", "<2mfe>", "<2mkn>", "<2pap>", "<2pcm>", "<2pis>", "<2rop>", "<2sag>", "<2srm>", "<2srn>", "<2tcs>", "<2tdt>", "<2tpi>"]
# First tokenize the input and outputs. The format below is how CreoleM2M was trained so the input should be "Sentence </s> <2xxx>" where xxx is the language code. Similarly, the output should be "<2yyy> Sentence </s>".
inp = tokenizer("Wen dey wen stretch him out fo whip him real hard , Paul wen tell da captain dat stay dea , “ Dis okay in da rules fo da Rome peopo ? fo you fo whip one guy dat get da same rights jalike da Rome peopo ? even one guy dat neva do notting wrong ? </s> <2hwc>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=60, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2eng>"))  # use the <2xxx> tag of the target language
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output)
```
Notes:
1. This is compatible with the latest version of transformers but was developed with version 4.3.2 so consider using 4.3.2 if possible.
2. While I have only shown how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do, as described in https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I use the AlbertTokenizer class and not the MBartTokenizer class.
|
gabriellabollici/modelo-muchocine
|
gabriellabollici
|
electra
| 10 | 2 |
transformers
| 0 |
text-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 1,361 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelo-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3273
- Accuracy: 0.4181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.5467 | 0.3355 |
| 1.5099 | 2.0 | 776 | 1.2819 | 0.4065 |
| 1.2196 | 3.0 | 1164 | 1.3273 | 0.4181 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
MikkelGodsk/Reinforce-PixelCopter-PLE-v0
|
MikkelGodsk
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
RamonAnkersmit/poca-SoccerTwos-Gpu
|
RamonAnkersmit
| null | 24 | 689 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 852 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: RamonAnkersmit/poca-SoccerTwos-Gpu
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Phantom-Artist/phantom-diffusion
|
Phantom-Artist
| null | 69 | 0 | null | 17 | null | false | false | false |
creativeml-openrail-m
|
['en', 'ja']
|
['Phantom-Artist/phantom-diffusion-dataset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['a']
| false | true | true | 5,188 |
These are 20 models trained on top of [hakurei's Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion).
Each model was trained with DreamBooth on a notable Japanese AI artist's works, using 30 works per artist gathered mainly from Twitter (except for aibanousagi, for which I could only find 23).
Training each model took 1,000 steps at a learning rate of 5e-6. I have shared the notebook [here](https://huggingface.co/Phantom-Artist/phantom-diffusion/blob/main/DreamBooth_Stable_Diffusion_works_with_SD_2.ipynb) in case you want to try out additional training.
Some are good at backgrounds, while others can generate a semi-realistic style as seen in many SD 1.5 anime/waifu models (and some might be a bit overfitted).
The dataset is published [here](https://huggingface.co/datasets/Phantom-Artist/phantom-diffusion-dataset).
# For those who are against generative AIs
You can see that now they are the target. Take our tool; now is the time for payback. Generate images in their styles, and take back the power for yourself.
# For those who support the development of generative AIs
Some AI artists, even though they take advantage of the open strategy of Stable Diffusion, now tend to hide their prompts, trying to monopolize their styles (to be clear, I'm not saying the AI artists I trained on are like that).
To keep protecting our values and beliefs in the open community, and to push back against attempts to create another set of pre-modern-style guilds, I will show you a new way.
You no longer need their prompts; just train on their images yourself to protect the open community. It's not only legal but also ethical, as they have been taking advantage of others' trained datasets.
# trained artist list
- 852wa
- aibanousagi
- aioeoekakino
- airhara
- alfredplpl
- callimiya
- citrus
- elessenar
- kiri
- korocon
- lakeside
- maccha
- natsuku
- nikaido
- plat
- roiyaruRIZ
- swingwings
- tuinositone
- yunyalula
- yuyuyu
# samples
The basic prompt is as follows, but some samples may use additional positive tags (such as "in the style of") to get the results below (yes, use ``aitop (ARTIST)_style`` to trigger the fine-tuned style).
```
POS: masterpiece, best quality, 1girl, aitop (ARTIST)_style
NEG: nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digits, fewer digits, cropped, jpeg artifacts, signature, watermark, username, blurry, simple background
```
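For generation, a rough sketch with diffusers (the per-artist file layout in this repo is an assumption; download one artist's checkpoint first and convert it to diffusers format if needed):
```python
import torch
from diffusers import StableDiffusionPipeline

# path to one artist's model in diffusers format (assumed layout)
model_path = "./phantom-diffusion/852wa"
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16).to("cuda")

prompt = "masterpiece, best quality, 1girl, aitop 852wa_style"
negative = "nsfw, worst quality, low quality, medium quality, deleted, lowres, bad anatomy"
image = pipe(prompt, negative_prompt=negative, num_inference_steps=28).images[0]
image.save("852wa_sample.png")
```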
## 852wa

## aibanousagi

## aioeoekakino

## airhara

## alfredplpl

## callimiya

## citrus

## elessenar

## kiri

## korocon

## lakeside

## maccha

## natsuku



## nikaido

## plat

## roiyaruRIZ

## swingwings

## tuinositone

## yunyalula


## yuyuyu

|
brand25/q-Taxi-v3
|
brand25
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 363 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook;
# it downloads and unpickles the model dictionary from the Hub
model = load_from_hub(repo_id="brand25/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
erniechiew/Reinforce-Pixelcopter-PLE-v0
|
erniechiew
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Beegbrain/Reinforce-model-cartpole1
|
Beegbrain
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
lora-library/a-photo-of-simbatheog
|
lora-library
| null | 29 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 538 |
# LoRA DreamBooth - a-photo-of-simbatheog
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "simbatheog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: A photo of simbatheog in a bucket




|