modelId | tags | pipeline_tag | config | downloads | first_commit | card
---|---|---|---|---|---|---
Ayham/robertagpt2_cnn | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kobart_64_3e-5_datav2_min30_lp5.0_temperature1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_64_3e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
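For reference, a minimal sketch of how these settings might be expressed as `Seq2SeqTrainingArguments`; the output directory is a placeholder, and the Adam betas/epsilon above match the library defaults so they need no explicit arguments:
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="kobart_64_3e-5_datav2_min30_lp5.0_temperature1.0",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```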
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Ayham/robertagpt2_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="maximerosano/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
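Note that `load_from_hub` here is a helper defined in the Deep RL course notebook rather than a function shipped with a package; a minimal sketch of such a helper (assuming the checkpoint is a plain pickled dict) could look like this, with `import gym` added for the `gym.make` call above:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning checkpoint from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```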
|
Ayham/robertagpt2_xsum4 | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: kjmann/SnowballTargetPPO
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
Ayham/xlnet_bert_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="maximerosano/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ayham/xlnet_distilgpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
---
A diffusion model for anime pictures, based on Acertainty.ckpt.
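Assuming the weights are also published in the diffusers format (the repo id below is a placeholder, not taken from this card), usage would follow the standard Stable Diffusion pipeline:
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id; substitute the repository hosting the converted weights.
pipe = StableDiffusionPipeline.from_pretrained("user/anime-diffusion-model", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("1girl, cherry blossoms, detailed background").images[0]
image.save("sample.png")
```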
|
Ayham/xlnet_gpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kobart_64_4e-5_datav2_min30_lp5.0_temperature1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_64_4e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Ayham/xlnet_gpt2_summarization_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kobart_64_5e-5_datav2_min30_lp5.0_temperature1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_64_5e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Ayham/xlnet_gpt_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
---
This repo just contains experimental training code for long-t5x. |
Ayham/xlnet_roberta_new_summarization_cnn_dailymail | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- caner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-v2.1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: caner
type: caner
config: default
split: train[5%:6%]
args: default
metrics:
- name: Precision
type: precision
value: 0.8599439775910365
- name: Recall
type: recall
value: 0.8611500701262272
- name: F1
type: f1
value: 0.8605466012613876
- name: Accuracy
type: accuracy
value: 0.948203842940685
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-v2.1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the caner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3598
- Precision: 0.8599
- Recall: 0.8612
- F1: 0.8605
- Accuracy: 0.9482
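A minimal inference sketch with the `transformers` token-classification pipeline; the repo id and the example sentence are placeholders (caner is an Arabic NER corpus, so real inputs would be Arabic text):
```python
from transformers import pipeline

# Placeholder repo id; point this at the repository hosting bert-finetuned-ner-v2.1.
ner = pipeline(
    "token-classification",
    model="your-username/bert-finetuned-ner-v2.1",
    aggregation_strategy="simple",
)

print(ner("ذهب محمد إلى مكة"))  # illustrative Arabic sentence with a person and a location
```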
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2352 | 1.0 | 3228 | 0.3782 | 0.8478 | 0.8359 | 0.8418 | 0.9348 |
| 0.1572 | 2.0 | 6456 | 0.3229 | 0.8696 | 0.8513 | 0.8604 | 0.9461 |
| 0.0994 | 3.0 | 9684 | 0.3598 | 0.8599 | 0.8612 | 0.8605 | 0.9482 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Ayjayo/DialoGPT-medium-AyjayoAI | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
task: reinforcement-learning
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- reinforcement-learning
--- |
Ayoola/cdial-yoruba-test | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"has_space"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="FnSK4R17s/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ayoola/pytorch_model | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
- vi
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is based on Protogenx3.4 and fine-tuned by UnD Style
# Model Details
This model is based on Protogenx3.4 and fine-tuned by UnD Style
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Dao Trung Thanh
- **Shared by [optional]:**
- **Model type:**
- **Language(s) (NLP):** [More Information Needed]
- **License:**
- **Finetuned from model [optional]:** Protogen x3.4
|
Ayoola/wav2vec2-large-xlsr-turkish-demo-colab | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-finetuned-gest-pred
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-gest-pred
This model is a fine-tuned version of [elastic/distilbert-base-cased-finetuned-conll03-english](https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8427
- Precision: 0.5633
- Recall: 0.69
- F1: 0.6202
- Accuracy: 0.8062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 147 | 1.1758 | 0.3705 | 0.515 | 0.4310 | 0.7100 |
| No log | 2.0 | 294 | 0.8818 | 0.4424 | 0.595 | 0.5075 | 0.7640 |
| No log | 3.0 | 441 | 0.7698 | 0.5192 | 0.675 | 0.5870 | 0.7965 |
| 1.1436 | 4.0 | 588 | 0.7188 | 0.5118 | 0.65 | 0.5727 | 0.8004 |
| 1.1436 | 5.0 | 735 | 0.7449 | 0.4869 | 0.65 | 0.5567 | 0.8101 |
| 1.1436 | 6.0 | 882 | 0.8018 | 0.5697 | 0.695 | 0.6261 | 0.8010 |
| 0.3837 | 7.0 | 1029 | 0.7854 | 0.5212 | 0.675 | 0.5882 | 0.7991 |
| 0.3837 | 8.0 | 1176 | 0.7992 | 0.5714 | 0.7 | 0.6292 | 0.8192 |
| 0.3837 | 9.0 | 1323 | 0.8413 | 0.5622 | 0.7 | 0.6236 | 0.8095 |
| 0.3837 | 10.0 | 1470 | 0.8427 | 0.5633 | 0.69 | 0.6202 | 0.8062 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Ayou/chinese_mobile_bert | [
"pytorch",
"mobilebert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"MobileBertForMaskedLM"
],
"model_type": "mobilebert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="FnSK4R17s/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
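Assuming the pickled dict also stores the learned Q-table under a `"qtable"` key (an assumption borrowed from the Deep RL course, not stated in this card), a greedy evaluation rollout continuing from the snippet above could look like:
```python
import numpy as np

# Continues from the snippet above; classic gym API (reset() returns the state).
state = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```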
|
Ayran/DialoGPT-medium-harry-potter-1-through-3 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: AkeyLegalBert6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AkeyLegalBert6
This model is a fine-tuned version of [hatemestinbejaia/AkeyLegalBert](https://huggingface.co/hatemestinbejaia/AkeyLegalBert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.3875 | 1.0 | 18422 | 3.5239 |
| 3.44 | 2.0 | 36844 | 3.4214 |
| 3.4738 | 3.0 | 55266 | 3.3597 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Ayran/DialoGPT-small-gandalf | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: bsd-3-clause
pipeline_tag: image-to-image
---
# RealESRGAN MtG
Fine-tuned RealESRGAN_x2plus model trained on MtG card art, intended for upscaling Scryfall art crops with built-in rosette/halftone artifact removal while preserving the art style.
<img src="https://huggingface.co/rullaf/RealESRGAN_MtG/resolve/main/examples/comparison.jpg" alt="Comparison between RealESRGAN_x2plus and RealESRGAN_x2plus_mtg_v1">
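A rough usage sketch with the reference Real-ESRGAN inference code; the weight filename is a guess based on the comparison image name above, and the RRDBNet configuration assumes the standard x2plus architecture:
```python
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from huggingface_hub import hf_hub_download
from realesrgan import RealESRGANer

# Filename is assumed; check the repository for the actual .pth file.
weights = hf_hub_download("rullaf/RealESRGAN_MtG", "RealESRGAN_x2plus_mtg_v1.pth")

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
upsampler = RealESRGANer(scale=2, model_path=weights, model=model)

img = cv2.imread("art_crop.jpg", cv2.IMREAD_COLOR)  # a Scryfall art crop
output, _ = upsampler.enhance(img, outscale=2)
cv2.imwrite("art_crop_2x.png", output)
```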
|
Ayu/Shiriro | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kobart_64x2_5e-5_datav2_min30_lp5.0_temperature1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_64x2_5e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AyushPJ/ai-club-inductions-21-nlp-ALBERT | [
"pytorch",
"albert",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.59 +/- 0.19
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
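A minimal sketch of what that code could look like, assuming the checkpoint is a standard SB3 zip (the repo id and filename below are guesses) and that `panda_gym` provides the environment:
```python
import gym
import panda_gym  # registers PandaReachDense-v2 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Repo id and filename are placeholders; check the model repository for the actual values.
checkpoint = load_from_hub(repo_id="your-username/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```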
|
AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad | [
"pytorch",
"electra",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"ElectraForQuestionAnswering"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2023-01-27T09:04:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- caner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-v2.2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: caner
type: caner
config: default
split: train[90%:91%]
args: default
metrics:
- name: Precision
type: precision
value: 0.8822751322751323
- name: Recall
type: recall
value: 0.8496815286624204
- name: F1
type: f1
value: 0.8656716417910448
- name: Accuracy
type: accuracy
value: 0.942741116751269
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-v2.2
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the caner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3595
- Precision: 0.8823
- Recall: 0.8497
- F1: 0.8657
- Accuracy: 0.9427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2726 | 1.0 | 3228 | 0.4504 | 0.7390 | 0.7287 | 0.7338 | 0.9107 |
| 0.2057 | 2.0 | 6456 | 0.3679 | 0.8633 | 0.8446 | 0.8538 | 0.9385 |
| 0.1481 | 3.0 | 9684 | 0.3595 | 0.8823 | 0.8497 | 0.8657 | 0.9427 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
AyushPJ/ai-club-inductions-21-nlp-XLNet | [
"pytorch",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"XLNetForQuestionAnsweringSimple"
],
"model_type": "xlnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 250
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: turkishReviews-ds-mini2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReviews-ds-mini2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 9.1817
- Validation Loss: 9.2699
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
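The serialized optimizer above is an `AdamWeightDecay` instance with a linear warmup/decay schedule, which is what `transformers.create_optimizer` produces; a hedged sketch of rebuilding it (the total step count depends on the dataset and is a placeholder here):
```python
import tensorflow as tf
from transformers import create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision above

# num_train_steps is a placeholder; warmup and weight decay are taken from the config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=2000,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```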
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2964 | 9.9957 | 0 |
| 9.6631 | 9.6437 | 1 |
| 9.1817 | 9.2699 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
AyushPJ/ai-club-inductions-21-nlp-roBERTa | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: kjmann/PyramidsPPO
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
Azuris/DialoGPT-medium-envy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
model-index:
- name: mbart-en-to-amr
results: []
language:
- en
inference: false
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MBART to "translate" English to AMR
To be used with the MBART text-to-AMR pipeline (not yet available). It won't work with the default MBartTokenizer, so you will have to be patient until the pipeline becomes available. :-)
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Accuracy: 0.9361
- Bleu: 73.0667
- Smatch Precision: 0.8451
- Smatch Recall: 0.9039
- Smatch Fscore: 0.8735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
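For context, the effective batch sizes above follow from per-device batch size × number of devices × gradient accumulation steps: 4 × 4 × 8 = 128 for training, and 4 × 4 = 16 for evaluation (no accumulation at evaluation time).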
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Bleu | Smatch Precision | Smatch Recall | Smatch Fscore | Ratio Invalid Amrs |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|:----------------:|:-------------:|:-------------:|:------------------:|
| 1.5295 | 0.23 | 100 | 1.2218 | 0.7536 | 55.6802 | 0.2551 | 0.6184 | 0.3612 | 86.6434 |
| 0.978 | 0.46 | 200 | 0.7306 | 0.8311 | 63.4299 | 0.3785 | 0.7182 | 0.4957 | 87.0499 |
| 0.7187 | 0.69 | 300 | 0.5882 | 0.8484 | 66.1354 | 0.5547 | 0.8481 | 0.6707 | 93.9605 |
| 0.599 | 0.92 | 400 | 0.4560 | 0.8820 | 68.3221 | 0.6594 | 0.8046 | 0.7248 | 89.7213 |
| 0.4842 | 1.15 | 500 | 0.3876 | 0.8904 | 69.5185 | 0.7491 | 0.8648 | 0.8028 | 92.2764 |
| 0.4311 | 1.38 | 600 | 0.3653 | 0.8913 | 69.7931 | 0.7589 | 0.8922 | 0.8202 | 92.6249 |
| 0.4171 | 1.61 | 700 | 0.3244 | 0.9084 | 70.8940 | 0.7759 | 0.8615 | 0.8165 | 90.0697 |
| 0.3769 | 1.84 | 800 | 0.2995 | 0.9128 | 71.0933 | 0.7824 | 0.8688 | 0.8234 | 89.6051 |
| 0.3121 | 2.07 | 900 | 0.2869 | 0.9177 | 71.3434 | 0.8014 | 0.8831 | 0.8403 | 89.8955 |
| 0.2843 | 2.3 | 1000 | 0.2755 | 0.9190 | 71.2255 | 0.8021 | 0.8766 | 0.8377 | 89.3148 |
| 0.3013 | 2.53 | 1100 | 0.2876 | 0.9140 | 70.8907 | 0.7391 | 0.8503 | 0.7908 | 86.0627 |
| 0.3192 | 2.76 | 1200 | 0.2674 | 0.9206 | 71.4303 | 0.8179 | 0.8991 | 0.8566 | 89.7793 |
| 0.3032 | 2.99 | 1300 | 0.2597 | 0.9230 | 71.6003 | 0.7791 | 0.8794 | 0.8262 | 89.1405 |
| 0.2367 | 3.23 | 1400 | 0.2933 | 0.9148 | 71.7204 | 0.8318 | 0.8935 | 0.8615 | 91.4634 |
| 0.247 | 3.46 | 1500 | 0.2505 | 0.9272 | 72.1396 | 0.8224 | 0.8889 | 0.8543 | 89.0244 |
| 0.2326 | 3.69 | 1600 | 0.2467 | 0.9284 | 72.0828 | 0.8257 | 0.8992 | 0.8609 | 88.8502 |
| 0.2622 | 3.92 | 1700 | 0.2590 | 0.9236 | 71.9205 | 0.8231 | 0.8902 | 0.8553 | 90.0697 |
| 0.1935 | 4.15 | 1800 | 0.2528 | 0.9281 | 72.0722 | 0.8523 | 0.9075 | 0.8790 | 88.7340 |
| 0.2067 | 4.38 | 1900 | 0.2480 | 0.9287 | 72.2628 | 0.8322 | 0.9062 | 0.8677 | 89.1405 |
| 0.2248 | 4.61 | 2000 | 0.2520 | 0.9273 | 72.4493 | 0.8474 | 0.9023 | 0.8740 | 89.5470 |
| 0.2049 | 4.84 | 2100 | 0.2403 | 0.9316 | 72.3463 | 0.8231 | 0.8998 | 0.8598 | 88.0952 |
| 0.1942 | 5.07 | 2200 | 0.2482 | 0.9314 | 72.4402 | 0.8291 | 0.8987 | 0.8625 | 89.6051 |
| 0.1796 | 5.3 | 2300 | 0.2587 | 0.9319 | 72.6028 | 0.8349 | 0.8955 | 0.8642 | 88.0952 |
| 0.1852 | 5.53 | 2400 | 0.2550 | 0.9316 | 72.4129 | 0.8435 | 0.9002 | 0.8710 | 88.4437 |
| 0.1898 | 5.76 | 2500 | 0.2493 | 0.9321 | 72.5551 | 0.8269 | 0.8957 | 0.8599 | 87.7468 |
| 0.1861 | 5.99 | 2600 | 0.2459 | 0.9314 | 72.4291 | 0.8565 | 0.9085 | 0.8817 | 88.2695 |
| 0.1568 | 6.22 | 2700 | 0.2487 | 0.9321 | 72.5308 | 0.8582 | 0.9122 | 0.8844 | 88.1533 |
| 0.1491 | 6.45 | 2800 | 0.2461 | 0.9331 | 72.6714 | 0.8632 | 0.9154 | 0.8885 | 88.5598 |
| 0.1437 | 6.68 | 2900 | 0.2434 | 0.9330 | 72.6621 | 0.8699 | 0.9097 | 0.8893 | 88.2695 |
| 0.1504 | 6.91 | 3000 | 0.2496 | 0.9341 | 72.7762 | 0.8544 | 0.9021 | 0.8776 | 87.5726 |
| 0.1313 | 7.14 | 3100 | 0.2510 | 0.9339 | 72.7713 | 0.8674 | 0.9048 | 0.8857 | 88.0372 |
| 0.1501 | 7.37 | 3200 | 0.2502 | 0.9343 | 72.7488 | 0.8633 | 0.9016 | 0.8820 | 88.3275 |
| 0.1295 | 7.6 | 3300 | 0.2459 | 0.9348 | 72.6939 | 0.8365 | 0.8969 | 0.8657 | 87.9210 |
| 0.1262 | 7.83 | 3400 | 0.2524 | 0.9318 | 72.8235 | 0.8509 | 0.9077 | 0.8784 | 88.3275 |
| 0.1072 | 8.06 | 3500 | 0.2551 | 0.9346 | 72.7323 | 0.8566 | 0.9022 | 0.8788 | 88.2695 |
| 0.1198 | 8.29 | 3600 | 0.2549 | 0.9350 | 72.8186 | 0.8638 | 0.9099 | 0.8862 | 88.0372 |
| 0.1175 | 8.52 | 3700 | 0.2581 | 0.9331 | 72.6339 | 0.8624 | 0.9054 | 0.8834 | 88.5598 |
| 0.1173 | 8.75 | 3800 | 0.2508 | 0.9357 | 73.1089 | 0.8515 | 0.9057 | 0.8778 | 87.9791 |
| 0.1208 | 8.98 | 3900 | 0.2542 | 0.9335 | 72.8848 | 0.8416 | 0.8972 | 0.8685 | 87.9210 |
| 0.0874 | 9.22 | 4000 | 0.2580 | 0.9350 | 72.9432 | 0.8532 | 0.9052 | 0.8784 | 88.0372 |
| 0.1019 | 9.45 | 4100 | 0.2615 | 0.9351 | 72.8476 | 0.8704 | 0.9024 | 0.8862 | 87.9791 |
| 0.1039 | 9.68 | 4200 | 0.2635 | 0.9331 | 72.8678 | 0.8432 | 0.8900 | 0.8660 | 87.2242 |
| 0.0986 | 9.91 | 4300 | 0.2588 | 0.9352 | 72.9545 | 0.8548 | 0.9078 | 0.8805 | 87.7468 |
| 0.0867 | 10.14 | 4400 | 0.2659 | 0.9347 | 72.8253 | 0.8574 | 0.9025 | 0.8794 | 87.9210 |
| 0.1029 | 10.37 | 4500 | 0.2651 | 0.9350 | 72.9023 | 0.8480 | 0.9042 | 0.8752 | 87.8630 |
| 0.0935 | 10.6 | 4600 | 0.2669 | 0.9344 | 72.8549 | 0.8438 | 0.8981 | 0.8701 | 87.9791 |
| 0.0944 | 10.83 | 4700 | 0.2703 | 0.9334 | 72.8564 | 0.8460 | 0.9021 | 0.8732 | 87.3403 |
| 0.0724 | 11.06 | 4800 | 0.2712 | 0.9349 | 72.9326 | 0.8435 | 0.9010 | 0.8713 | 88.2695 |
| 0.0906 | 11.29 | 4900 | 0.2708 | 0.9351 | 72.8490 | 0.8513 | 0.9062 | 0.8779 | 87.8049 |
| 0.0819 | 11.52 | 5000 | 0.2683 | 0.9356 | 72.8973 | 0.8304 | 0.9056 | 0.8664 | 87.9791 |
| 0.0892 | 11.75 | 5100 | 0.2704 | 0.9361 | 72.9746 | 0.8463 | 0.9069 | 0.8755 | 87.9210 |
| 0.0791 | 11.98 | 5200 | 0.2705 | 0.9353 | 72.9050 | 0.8475 | 0.9064 | 0.8759 | 88.1533 |
| 0.0718 | 12.21 | 5300 | 0.2751 | 0.9361 | 73.0216 | 0.8546 | 0.9035 | 0.8783 | 87.9791 |
| 0.0744 | 12.44 | 5400 | 0.2769 | 0.9355 | 72.9041 | 0.8717 | 0.9063 | 0.8886 | 87.9210 |
| 0.081 | 12.67 | 5500 | 0.2735 | 0.9359 | 72.9850 | 0.8501 | 0.9092 | 0.8786 | 87.8630 |
| 0.0757 | 12.9 | 5600 | 0.2778 | 0.9359 | 72.9826 | 0.8639 | 0.9133 | 0.8879 | 88.2114 |
| 0.0648 | 13.13 | 5700 | 0.2876 | 0.9333 | 72.9175 | 0.8587 | 0.9111 | 0.8841 | 88.2114 |
| 0.0738 | 13.36 | 5800 | 0.2782 | 0.9360 | 73.0831 | 0.8647 | 0.9144 | 0.8888 | 88.1533 |
| 0.0653 | 13.59 | 5900 | 0.2803 | 0.9354 | 73.0048 | 0.8628 | 0.9120 | 0.8867 | 88.2695 |
| 0.0717 | 13.82 | 6000 | 0.2792 | 0.9359 | 73.0330 | 0.8387 | 0.9033 | 0.8698 | 87.8049 |
| 0.071 | 14.06 | 6100 | 0.2787 | 0.9363 | 73.0967 | 0.8527 | 0.9070 | 0.8790 | 87.9210 |
| 0.0661 | 14.29 | 6200 | 0.2828 | 0.9361 | 73.0762 | 0.8482 | 0.9068 | 0.8765 | 87.8630 |
| 0.062 | 14.52 | 6300 | 0.2812 | 0.9361 | 73.0716 | 0.8399 | 0.9070 | 0.8722 | 87.6887 |
| 0.0722 | 14.75 | 6400 | 0.2808 | 0.9361 | 73.0682 | 0.8377 | 0.9032 | 0.8692 | 87.5145 |
| 0.0633 | 14.98 | 6500 | 0.2811 | 0.9361 | 73.0667 | 0.8451 | 0.9039 | 0.8735 | 87.6307 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BE/demo-sentiment2021 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T09:56:12Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: kobart_4_5.6e-5_datav2_min30_lp5.0_temperature1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_4_5.6e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9891
- Rouge1: 35.4597
- Rouge2: 12.0824
- Rougel: 23.0161
- Bleu1: 29.793
- Bleu2: 16.882
- Bleu3: 9.6468
- Bleu4: 5.3654
- Gen Len: 50.6014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|:-------:|
| 2.3968 | 0.47 | 5000 | 2.9096 | 32.7469 | 10.9679 | 21.4954 | 27.0594 | 15.1133 | 8.4503 | 4.564 | 48.5501 |
| 2.2338 | 0.94 | 10000 | 2.8002 | 33.2148 | 11.5121 | 22.7066 | 26.4886 | 15.0125 | 8.5792 | 4.8523 | 41.1049 |
| 1.9652 | 1.42 | 15000 | 2.7699 | 34.4269 | 11.8551 | 22.8478 | 28.2628 | 16.0909 | 9.0427 | 4.9254 | 46.9744 |
| 2.001 | 1.89 | 20000 | 2.7201 | 34.157 | 11.8683 | 22.6775 | 28.3593 | 16.1361 | 9.221 | 4.8616 | 46.979 |
| 1.6433 | 2.36 | 25000 | 2.7901 | 33.6354 | 11.5761 | 22.6878 | 27.6475 | 15.6571 | 8.8372 | 4.8672 | 43.9953 |
| 1.6204 | 2.83 | 30000 | 2.7724 | 34.9611 | 12.1606 | 23.0246 | 29.1014 | 16.6689 | 9.3661 | 5.1916 | 48.8811 |
| 1.2955 | 3.3 | 35000 | 2.8970 | 35.896 | 12.7037 | 23.3781 | 29.9701 | 17.3963 | 10.2978 | 5.9339 | 49.5921 |
| 1.3501 | 3.78 | 40000 | 2.8854 | 35.2981 | 12.1133 | 23.1845 | 29.483 | 16.7795 | 9.4124 | 5.2042 | 48.5897 |
| 1.0865 | 4.25 | 45000 | 2.9912 | 35.581 | 12.5145 | 23.2262 | 29.9364 | 17.2064 | 10.0427 | 5.62 | 48.31 |
| 1.052 | 4.72 | 50000 | 2.9891 | 35.4597 | 12.0824 | 23.0161 | 29.793 | 16.882 | 9.6468 | 5.3654 | 50.6014 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
BJTK2/model_name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T09:59:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wl-disease
model-index:
- name: WL_DISEASE_NER_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WL_DISEASE_NER_v1
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the wl-disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1489
- Diso Precision: 0.7908
- Diso Recall: 0.8397
- Diso F1: 0.8145
- Diso Number: 1765
- Overall Precision: 0.7908
- Overall Recall: 0.8397
- Overall F1: 0.8145
- Overall Accuracy: 0.9631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1199 | 1.0 | 1714 | 0.1187 | 0.7739 | 0.7972 | 0.7854 | 1765 | 0.7739 | 0.7972 | 0.7854 | 0.9610 |
| 0.0916 | 2.0 | 3428 | 0.1237 | 0.7748 | 0.8266 | 0.7999 | 1765 | 0.7748 | 0.8266 | 0.7999 | 0.9620 |
| 0.0625 | 3.0 | 5142 | 0.1343 | 0.7900 | 0.8289 | 0.8090 | 1765 | 0.7900 | 0.8289 | 0.8090 | 0.9630 |
| 0.0485 | 4.0 | 6856 | 0.1489 | 0.7908 | 0.8397 | 0.8145 | 1765 | 0.7908 | 0.8397 | 0.8145 | 0.9631 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BSC-LT/gpt2-large-bne | [
"pytorch",
"gpt2",
"text-generation",
"es",
"dataset:bne",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"license:apache-2.0"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- caner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-v2.4
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: caner
type: caner
config: default
split: train[67%:68%]
args: default
metrics:
- name: Precision
type: precision
value: 0.7851099830795262
- name: Recall
type: recall
value: 0.8226950354609929
- name: F1
type: f1
value: 0.8034632034632034
- name: Accuracy
type: accuracy
value: 0.9542217700915565
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-v2.4
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the caner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2474
- Precision: 0.7851
- Recall: 0.8227
- F1: 0.8035
- Accuracy: 0.9542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2792 | 1.0 | 3228 | 0.3349 | 0.7862 | 0.7695 | 0.7778 | 0.9436 |
| 0.1694 | 2.0 | 6456 | 0.2701 | 0.7996 | 0.7996 | 0.7996 | 0.9491 |
| 0.1244 | 3.0 | 9684 | 0.2474 | 0.7851 | 0.8227 | 0.8035 | 0.9542 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BSC-LT/roberta-base-ca | [
"pytorch",
"roberta",
"fill-mask",
"ca",
"transformers",
"masked-lm",
"BERTa",
"catalan",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 18 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="mxbonn/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BW/TEST | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- bleu
model-index:
- name: 5ep-5e-5-0.01n-lit
results: []
inference: false
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MBART to "translate" English, Spanish and Dutch to AMR
To be used with the MBART text-to-AMR pipeline (not yet available). It won't work with the default MBartTokenizer, so you will have to be patient until the pipeline becomes available. :-)
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3301
- Accuracy: 0.9180
- Bleu: 69.9161
- Smatch Precision: 0.8088
- Smatch Recall: 0.8878
- Smatch Fscore: 0.8465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Bleu | Smatch Precision | Smatch Recall | Smatch Fscore | Ratio Invalid Amrs |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|:----------------:|:-------------:|:-------------:|:------------------:|
| 1.8405 | 0.08 | 100 | 1.5336 | 0.6884 | 48.1355 | 0.2062 | 0.5313 | 0.2970 | 79.5780 |
| 1.2656 | 0.15 | 200 | 0.9595 | 0.7799 | 58.3189 | 0.3576 | 0.6405 | 0.4589 | 93.7863 |
| 0.996 | 0.23 | 300 | 0.7541 | 0.8240 | 62.2701 | 0.5021 | 0.7736 | 0.6089 | 94.4638 |
| 0.9236 | 0.31 | 400 | 0.6923 | 0.8298 | 63.3524 | 0.5014 | 0.8177 | 0.6216 | 94.1928 |
| 0.8074 | 0.38 | 500 | 0.5592 | 0.8643 | 65.1618 | 0.6030 | 0.7911 | 0.6843 | 91.1343 |
| 0.693 | 0.46 | 600 | 0.5499 | 0.8628 | 65.7325 | 0.5453 | 0.8207 | 0.6552 | 92.0441 |
| 0.6427 | 0.54 | 700 | 0.4840 | 0.8761 | 67.3134 | 0.6329 | 0.8262 | 0.7167 | 91.6957 |
| 0.6001 | 0.61 | 800 | 0.4613 | 0.8792 | 67.1435 | 0.6296 | 0.8464 | 0.7221 | 91.8893 |
| 0.6801 | 0.69 | 900 | 0.4265 | 0.8889 | 66.8917 | 0.6635 | 0.8356 | 0.7397 | 90.9408 |
| 0.5366 | 0.77 | 1000 | 0.4399 | 0.8899 | 66.8936 | 0.6430 | 0.8182 | 0.7201 | 89.1792 |
| 0.5446 | 0.84 | 1100 | 0.4050 | 0.8953 | 67.5782 | 0.6612 | 0.8492 | 0.7435 | 90.7859 |
| 0.4848 | 0.92 | 1200 | 0.4026 | 0.8955 | 68.3245 | 0.6778 | 0.8531 | 0.7554 | 90.2245 |
| 0.4988 | 1.0 | 1300 | 0.3909 | 0.8955 | 68.1950 | 0.6829 | 0.8673 | 0.7641 | 90.7859 |
| 0.4567 | 1.07 | 1400 | 0.3825 | 0.9013 | 68.2347 | 0.6899 | 0.8669 | 0.7683 | 89.7019 |
| 0.4563 | 1.15 | 1500 | 0.3800 | 0.9014 | 68.2520 | 0.7375 | 0.8664 | 0.7967 | 90.4181 |
| 0.4386 | 1.23 | 1600 | 0.3806 | 0.8996 | 68.1288 | 0.7334 | 0.8768 | 0.7987 | 91.1537 |
| 0.4377 | 1.3 | 1700 | 0.3814 | 0.8968 | 67.6520 | 0.7182 | 0.8380 | 0.7735 | 88.5792 |
| 0.4477 | 1.38 | 1800 | 0.3781 | 0.8986 | 68.1177 | 0.7376 | 0.8763 | 0.8010 | 91.8118 |
| 0.4254 | 1.46 | 1900 | 0.3578 | 0.9062 | 68.5803 | 0.7173 | 0.8635 | 0.7836 | 89.7406 |
| 0.451 | 1.53 | 2000 | 0.3569 | 0.9061 | 68.9853 | 0.7563 | 0.8708 | 0.8095 | 90.3020 |
| 0.3828 | 1.61 | 2100 | 0.3579 | 0.9050 | 68.7272 | 0.7712 | 0.8733 | 0.8191 | 90.3600 |
| 0.4147 | 1.69 | 2200 | 0.3545 | 0.9067 | 69.0921 | 0.7690 | 0.8786 | 0.8201 | 90.6504 |
| 0.3699 | 1.76 | 2300 | 0.3546 | 0.9059 | 69.2822 | 0.7562 | 0.8774 | 0.8123 | 90.6117 |
| 0.3651 | 1.84 | 2400 | 0.3468 | 0.9098 | 70.1585 | 0.7761 | 0.8737 | 0.8220 | 89.9148 |
| 0.3831 | 1.92 | 2500 | 0.3431 | 0.9101 | 69.0716 | 0.7619 | 0.8721 | 0.8133 | 89.8180 |
| 0.3676 | 1.99 | 2600 | 0.3447 | 0.9098 | 69.8364 | 0.7814 | 0.8765 | 0.8262 | 90.2245 |
| 0.3281 | 2.07 | 2700 | 0.3443 | 0.9097 | 69.1463 | 0.8037 | 0.8804 | 0.8403 | 90.4762 |
| 0.3471 | 2.15 | 2800 | 0.3407 | 0.9116 | 69.2910 | 0.7662 | 0.8763 | 0.8175 | 89.6245 |
| 0.327 | 2.22 | 2900 | 0.3414 | 0.9118 | 69.8713 | 0.7725 | 0.8752 | 0.8207 | 89.7213 |
| 0.3232 | 2.3 | 3000 | 0.3386 | 0.9129 | 69.4165 | 0.7666 | 0.8765 | 0.8179 | 89.5277 |
| 0.3168 | 2.38 | 3100 | 0.3593 | 0.9051 | 69.1672 | 0.7736 | 0.8824 | 0.8244 | 90.9408 |
| 0.2781 | 2.45 | 3200 | 0.3408 | 0.9127 | 69.2028 | 0.7767 | 0.8720 | 0.8216 | 89.1599 |
| 0.3135 | 2.53 | 3300 | 0.3374 | 0.9131 | 69.3979 | 0.7611 | 0.8760 | 0.8146 | 89.6438 |
| 0.326 | 2.61 | 3400 | 0.3336 | 0.9128 | 69.9662 | 0.7916 | 0.8815 | 0.8341 | 89.9923 |
| 0.3258 | 2.68 | 3500 | 0.3329 | 0.9139 | 69.7743 | 0.7925 | 0.8796 | 0.8338 | 89.8180 |
| 0.3159 | 2.76 | 3600 | 0.3377 | 0.9130 | 69.6988 | 0.7845 | 0.8877 | 0.8329 | 89.6825 |
| 0.3178 | 2.84 | 3700 | 0.3303 | 0.9142 | 69.6296 | 0.7787 | 0.8780 | 0.8254 | 88.8308 |
| 0.3144 | 2.91 | 3800 | 0.3310 | 0.9137 | 69.3942 | 0.7895 | 0.8815 | 0.8330 | 89.3728 |
| 0.3098 | 2.99 | 3900 | 0.3298 | 0.9151 | 69.8902 | 0.8011 | 0.8732 | 0.8356 | 89.1986 |
| 0.3005 | 3.07 | 4000 | 0.3334 | 0.9154 | 69.6235 | 0.7845 | 0.8819 | 0.8303 | 89.3922 |
| 0.2716 | 3.14 | 4100 | 0.3319 | 0.9154 | 69.4647 | 0.8098 | 0.8797 | 0.8433 | 89.4890 |
| 0.2801 | 3.22 | 4200 | 0.3329 | 0.9151 | 69.5338 | 0.8019 | 0.8851 | 0.8415 | 89.6825 |
| 0.2721 | 3.3 | 4300 | 0.3327 | 0.9153 | 69.6714 | 0.8028 | 0.8885 | 0.8435 | 89.6051 |
| 0.2607 | 3.37 | 4400 | 0.3310 | 0.9157 | 69.5581 | 0.7916 | 0.8797 | 0.8333 | 89.2760 |
| 0.2823 | 3.45 | 4500 | 0.3309 | 0.9156 | 69.6805 | 0.8123 | 0.8887 | 0.8488 | 89.3922 |
| 0.2675 | 3.53 | 4600 | 0.3313 | 0.9158 | 69.6664 | 0.8168 | 0.8844 | 0.8492 | 89.4115 |
| 0.2642 | 3.6 | 4700 | 0.3297 | 0.9166 | 69.6888 | 0.8147 | 0.8904 | 0.8509 | 89.2954 |
| 0.2842 | 3.68 | 4800 | 0.3299 | 0.9162 | 69.6175 | 0.8000 | 0.8870 | 0.8413 | 89.5858 |
| 0.2646 | 3.76 | 4900 | 0.3294 | 0.9168 | 69.6792 | 0.7889 | 0.8827 | 0.8332 | 89.1986 |
| 0.2624 | 3.83 | 5000 | 0.3276 | 0.9171 | 69.6874 | 0.8047 | 0.8906 | 0.8455 | 89.2760 |
| 0.2647 | 3.91 | 5100 | 0.3282 | 0.9166 | 69.6530 | 0.7998 | 0.8823 | 0.8390 | 89.3535 |
| 0.2525 | 3.99 | 5200 | 0.3269 | 0.9168 | 69.7478 | 0.8062 | 0.8853 | 0.8439 | 89.3535 |
| 0.2417 | 4.06 | 5300 | 0.3311 | 0.9168 | 69.6945 | 0.7978 | 0.8877 | 0.8404 | 89.4503 |
| 0.2608 | 4.14 | 5400 | 0.3311 | 0.9169 | 69.7272 | 0.7998 | 0.8865 | 0.8409 | 89.3535 |
| 0.2408 | 4.22 | 5500 | 0.3308 | 0.9172 | 69.9450 | 0.8111 | 0.8864 | 0.8471 | 89.3341 |
| 0.2268 | 4.29 | 5600 | 0.3319 | 0.9176 | 69.8016 | 0.7923 | 0.8845 | 0.8359 | 89.0437 |
| 0.2158 | 4.37 | 5700 | 0.3315 | 0.9173 | 69.7748 | 0.8008 | 0.8849 | 0.8407 | 89.5083 |
| 0.2461 | 4.45 | 5800 | 0.3310 | 0.9174 | 69.9786 | 0.8030 | 0.8876 | 0.8432 | 89.2954 |
| 0.249 | 4.52 | 5900 | 0.3315 | 0.9176 | 69.9070 | 0.8066 | 0.8886 | 0.8456 | 89.4503 |
| 0.2428 | 4.6 | 6000 | 0.3312 | 0.9176 | 69.8172 | 0.8017 | 0.8859 | 0.8417 | 89.2760 |
| 0.2266 | 4.68 | 6100 | 0.3306 | 0.9178 | 69.8095 | 0.8016 | 0.8893 | 0.8431 | 89.1599 |
| 0.2266 | 4.75 | 6200 | 0.3305 | 0.9178 | 69.8328 | 0.8090 | 0.8902 | 0.8476 | 89.4309 |
| 0.2341 | 4.83 | 6300 | 0.3305 | 0.9180 | 69.8605 | 0.8101 | 0.8883 | 0.8474 | 89.4309 |
| 0.2226 | 4.91 | 6400 | 0.3305 | 0.9179 | 69.9342 | 0.8101 | 0.8896 | 0.8480 | 89.3728 |
| 0.209 | 4.98 | 6500 | 0.3301 | 0.9180 | 69.9161 | 0.8088 | 0.8878 | 0.8465 | 89.3728 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Backedman/DialoGPT-small-Anika | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-01-27T11:05:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.78 +/- 22.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
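Until the card is filled in, here is a minimal sketch of how such a checkpoint is usually loaded and evaluated with stable-baselines3. The repo id and filename below are placeholders, not the actual artifact names for this model.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholders -- point these at the repo and zip file that actually host this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```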
|
Badr/model1 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: course-distilroberta-base-mrpc-glue
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8235294117647058
- name: F1
type: f1
value: 0.8779661016949152
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# course-distilroberta-base-mrpc-glue
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 1.0204
- Accuracy: 0.8235
- F1: 0.8780
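For inference, the fine-tuned classifier can be called through the text-classification pipeline with a sentence pair, since MRPC is a paraphrase-detection task. A minimal sketch: the repo id is a placeholder, the sentences are made up, and how the two label ids map to equivalent/not_equivalent depends on the `id2label` mapping set at training time.
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual hub location of this checkpoint.
clf = pipeline("text-classification", model="<user>/course-distilroberta-base-mrpc-glue")

result = clf({
    "text": "The company posted record profits this quarter.",
    "text_pair": "Quarterly earnings for the company hit an all-time high.",
})
print(result)
```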
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1616 | 1.09 | 500 | 1.1943 | 0.8162 | 0.8718 |
| 0.2134 | 2.18 | 1000 | 1.0204 | 0.8235 | 0.8780 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition | [
"pytorch",
"tensorboard",
"wav2vec2",
"el",
"dataset:aesdd",
"transformers",
"audio",
"audio-classification",
"speech",
"license:apache-2.0"
]
| audio-classification | {
"architectures": [
"Wav2Vec2ForSpeechClassification"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21 | null | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- aihub_paper_summarization
metrics:
- rouge
model-index:
- name: pko-t5-small-finetuned-paper-4564652
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: aihub_paper_summarization
type: aihub_paper_summarization
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 4.874
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pko-t5-small-finetuned-paper-4564652
This model is a fine-tuned version of [paust/pko-t5-small](https://huggingface.co/paust/pko-t5-small) on the aihub_paper_summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4922
- Rouge1: 4.874
- Rouge2: 1.0497
- Rougel: 4.8599
- Rougelsum: 4.854
- Gen Len: 18.9953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Bala/model_name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1090.27 +/- 333.43
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
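As a stopgap, here is a hedged sketch of loading the agent and rolling out one episode. The repo id and filename are placeholders; AntBulletEnv-v0 additionally needs `pybullet_envs` installed, and if training used VecNormalize its saved statistics would have to be restored as well for faithful behaviour.
```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholders -- substitute the repo and zip file that actually host this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
done, episode_return = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_return += reward
print(f"episode return: {episode_return:.1f}")
```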
|
Banshee/dialoGPT-luke-small | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Banshee/dialoGPT-small-luke | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T11:33:58Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: lisdusk1
---
### DuskfallAi Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
WARNING: This is trained largely on a small data set of our own art, with a focus on the fact that our art, and any stable/midjourney outputs we included in this, are related to our Dissociative Identity Disorder. We may retrain on a larger data set later on.
Trained using the MultiModel Dreambooth App, sitting on a Friday afternoon doing absolute squat.
Please DO NOT re-upload the sample pictures that it was trained on, except in the instance you are inspired to use img2img.
In which we dutifully ask you to spam the community section with your outputs.
DO NOT RESELL THIS MODEL, AS IT DOES HAVE A TON OF MY ART IN IT.
You may:
- Merge, use at will
- SELL your generations - it's a STYLE after all!
- Do credit when reuploading or merging if possible.
- DO USE in any merged, OR home based model - cause that's what it's for!
More information & output samples to all our models: [Civit AI -Duskfallcrew](https://civitai.com/user/duskfallcrew)
lisdusk1 (use that on your prompt)

lisdusk2 (use that on your prompt)
 |
Barbarameerr/Barbara | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T11:50:33Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7174
- Rouge1: 35.7621
- Rouge2: 12.8914
- Rougel: 23.6695
- Bleu1: 29.9954
- Bleu2: 17.513
- Bleu3: 10.317
- Bleu4: 5.8532
- Gen Len: 49.3147
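A minimal inference sketch with the summarization pipeline. The repo id is a placeholder, and reading `min30` / `lp5.0` in the run name as `min_length=30` / `length_penalty=5.0` is an assumption; adjust the generation arguments if that interpretation is wrong.
```python
from transformers import pipeline

# Placeholder repo id -- replace with the hub location of this fine-tuned KoBART checkpoint.
summarizer = pipeline("summarization", model="<user>/kobart_16_5.6e-5_datav2_min30_lp5.0_temperature1.0")

article = "์š”์•ฝํ•  ํ•œ๊ตญ์–ด ๋ฌธ์„œ๋ฅผ ์—ฌ๊ธฐ์— ๋„ฃ์œผ์„ธ์š”."  # the Korean document to summarise
print(summarizer(article, min_length=30, length_penalty=5.0, num_beams=4))
```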
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|:------:|:-------:|
| 1.9617 | 1.89 | 5000 | 2.6146 | 35.2828 | 12.4993 | 22.9894 | 29.2237 | 16.8919 | 9.7826 | 5.4461 | 48.0676 |
| 1.5272 | 3.78 | 10000 | 2.7174 | 35.7621 | 12.8914 | 23.6695 | 29.9954 | 17.513 | 10.317 | 5.8532 | 49.3147 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Barkavi/totto-t5-base-bert-score-121K | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 51 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kobart_64_6e-5_datav2_min30_lp5.0_temperature1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_64_6e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Barleysack/AERoberta | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-01-27T11:56:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.95 +/- 16.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Barleysack/AERoberta2 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Uswa04/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Barleysack/klue-roberta-LSTM | [
"pytorch",
"roberta",
"transformers"
]
| null | {
"architectures": [
"QAWithLSTMModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-01-27T12:01:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.77 +/- 15.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Batsy24/DialoGPT-medium-Twilight_BellaBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-27T12:05:52Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxiv1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.82
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Uswa04/taxiv1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Battlehooks/distilbert-base-uncased-finetuned-squad | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T12:20:38Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie_300
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie_300
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
BatuhanYilmaz/bert-finetuned-nerxD | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: unlicense
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# t-art-scratch-v1.5
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('traiantitilincu/t-art-scratch-v1.5')
image = pipeline().images[0]
image
```
|
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28 | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"dataset:squad",
"arxiv:1910.01108",
"transformers",
"question-answering",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 18 | null | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_file: model.pkl
widget:
structuredData:
angel_n_rounds:
- 0.0
- 0.0
- 0.0
pre_seed_n_rounds:
- 0.0
- 0.0
- 0.0
seed_funding:
- 1250000.0
- 800000.0
- 8000000.0
seed_n_rounds:
- 1.0
- 3.0
- 1.0
time_first_funding:
- 1270.0
- 1856.0
- 689.0
time_till_series_a:
- 1455.0
- 1667.0
- 1559.0
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-----------------------------------------------|----------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('transformation', ColumnTransformer(transformers=[('min_max_scaler', MinMaxScaler(),<br /> ['time_first_funding', 'seed_funding',<br /> 'time_till_series_a'])])), ('model', LogisticRegression(penalty='none', random_state=0))] |
| verbose | False |
| transformation | ColumnTransformer(transformers=[('min_max_scaler', MinMaxScaler(),<br /> ['time_first_funding', 'seed_funding',<br /> 'time_till_series_a'])]) |
| model | LogisticRegression(penalty='none', random_state=0) |
| transformation__n_jobs | |
| transformation__remainder | drop |
| transformation__sparse_threshold | 0.3 |
| transformation__transformer_weights | |
| transformation__transformers | [('min_max_scaler', MinMaxScaler(), ['time_first_funding', 'seed_funding', 'time_till_series_a'])] |
| transformation__verbose | False |
| transformation__verbose_feature_names_out | True |
| transformation__min_max_scaler | MinMaxScaler() |
| transformation__min_max_scaler__clip | False |
| transformation__min_max_scaler__copy | True |
| transformation__min_max_scaler__feature_range | (0, 1) |
| model__C | 1.0 |
| model__class_weight | |
| model__dual | False |
| model__fit_intercept | True |
| model__intercept_scaling | 1 |
| model__l1_ratio | |
| model__max_iter | 100 |
| model__multi_class | auto |
| model__n_jobs | |
| model__penalty | none |
| model__random_state | 0 |
| model__solver | lbfgs |
| model__tol | 0.0001 |
| model__verbose | 0 |
| model__warm_start | False |
</details>
### Model Plot
The model plot is below.
[Interactive scikit-learn diagram omitted. It renders the fitted estimator: Pipeline(steps=[('transformation', ColumnTransformer(transformers=[('min_max_scaler', MinMaxScaler(), ['time_first_funding', 'seed_funding', 'time_till_series_a'])])), ('model', LogisticRegression(penalty='none', random_state=0))]).]
## Evaluation Results
[More Information Needed]
# How to Get Started with the Model
[More Information Needed]
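In the absence of official instructions, here is a minimal sketch of one way to load and query the pipeline. The repo id is a placeholder, and loading `model.pkl` with `joblib` assumes the file is a plain joblib/pickle dump of the fitted Pipeline shown above.
```python
import joblib
import pandas as pd
from huggingface_hub import hf_hub_download

# Placeholder repo id -- the card only tells us the artifact is stored as model.pkl.
path = hf_hub_download(repo_id="<user>/<repo>", filename="model.pkl")
pipe = joblib.load(path)

# Feature names come from the widget example above; only the three columns handled by the
# ColumnTransformer are actually used, the remainder is dropped.
X = pd.DataFrame({
    "angel_n_rounds": [0.0],
    "pre_seed_n_rounds": [0.0],
    "seed_n_rounds": [1.0],
    "seed_funding": [1250000.0],
    "time_first_funding": [1270.0],
    "time_till_series_a": [1455.0],
})
print(pipe.predict(X))
```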
# Model Card Authors
This model card is written by following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# model_card_authors
jirko
# model_description
just the temporal regression with reduced input features
|
BatuhanYilmaz/marian-finetuned-kde4-en-to-fr | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T12:48:38Z | ---
license: apache-2.0
datasets:
- gsm8k
language:
- en
--- |
Baybars/debateGPT | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Access to model Stereo0001/FDDF is restricted and you are not in the authorized list. Visit https://huggingface.co/Stereo0001/FDDF to ask for access. |
Baybars/wav2vec2-xls-r-300m-cv8-turkish | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kobart_64_1e-4_datav2_min30_lp5.0_temperature1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_64_1e-4_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
BeIR/sparta-msmarco-distilbert-base-v1 | [
"pytorch",
"distilbert",
"feature-extraction",
"arxiv:2009.13013",
"arxiv:2104.08663",
"transformers"
]
| feature-extraction | {
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 106 | 2023-01-27T13:04:31Z | ---
task: reinforcement-learning
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- reinforcement-learning
--- |
BearThreat/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.80 +/- 0.32
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
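In the meantime, a hedged sketch of loading and evaluating the agent. The repo id and filename are placeholders; PandaReachDense-v2 requires `panda_gym` to be installed so the environment is registered, and any VecNormalize statistics saved during training would need to be restored for faithful scores.
```python
import gym
import panda_gym  # noqa: F401 -- registers PandaReachDense-v2 with gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholders -- substitute the repo and zip file that actually host this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```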
|
Bee-Garbs/DialoGPT-cartman-small | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
---
https://civitai.com/models/5415/cornflower-stylized-anime-model |
Begimay/Task | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
task: reinforcement-learning
library_name: ml-agents
tags:
- ML-Agents-SoccerTwos
- reinforcement-learning
--- |
BenDavis71/GPT-2-Finetuning-AIRaid | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-01-27T13:49:20Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4152
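Being an extractive QA model (SQuAD2-style, so unanswerable questions are supported by the base checkpoint), it can be used through the question-answering pipeline. A minimal sketch: the repo id is a placeholder and the question/context pair is purely illustrative.
```python
from transformers import pipeline

# Placeholder repo id -- replace with the hub location of this fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="<user>/bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad",
)

result = qa(
    question="Where was the treaty signed?",
    context="The Treaty of Hudaybiyyah was signed at Hudaybiyyah, on the outskirts of Mecca.",
)
print(result["answer"], result["score"])
```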
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.3 | 100 | 0.3653 |
| No log | 2.6 | 200 | 0.4152 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
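## Example usage
As an illustrative sketch of how a fine-tuned extractive QA checkpoint like this can be queried with the `transformers` pipeline (the repo id below is a placeholder; substitute the actual Hub id of this model):
```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub id of this fine-tuned model
qa = pipeline(
    "question-answering",
    model="<user>/bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad",
)

result = qa(
    question="Who wrote the report?",
    context="The report was written by the research team and published in 2020.",
)
print(result["answer"], result["score"])
```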
|
Benicio/t5-small-finetuned-en-to-ro | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T15:55:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: beit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit
This model is a fine-tuned version of [NTQAI/pedestrian_age_recognition](https://huggingface.co/NTQAI/pedestrian_age_recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 40
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Betaniaolivo/Foto | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: final_bart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_bart
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6848
- Rouge1: 35.7722
- Rouge2: 12.5127
- Rougel: 23.3002
- Rdass: 0.6248
- Bleu1: 30.5261
- Bleu2: 17.6264
- Bleu3: 10.3974
- Bleu4: 5.4348
- Gen Len: 53.47
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rdass | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:------:|:-------:|:-------:|:-------:|:------:|:-------:|
| 2.1542 | 1.5 | 1000 | 2.7491 | 33.5554 | 11.2371 | 22.006 | 0.6093 | 27.9938 | 15.5354 | 8.2494 | 4.42 | 50.08 |
| 2.0071 | 2.99 | 2000 | 2.6813 | 35.0501 | 12.2759 | 22.6669 | 0.6155 | 29.6866 | 17.1396 | 9.7016 | 5.3559 | 54.04 |
| 1.8694 | 4.49 | 3000 | 2.6848 | 35.7722 | 12.5127 | 23.3002 | 0.6248 | 30.5261 | 17.6264 | 10.3974 | 5.4348 | 53.47 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Bharathdamu/wav2vec2-model-hindi-stt | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T14:24:42Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="DAL12/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
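Note that `load_from_hub` is not a gym built-in; a minimal sketch of such a helper, in the spirit of the Deep RL course utilities, could look like this (unpickling is an assumption suggested by the `.pkl` filename):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-Learning model dict from the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```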
|
Bhuvana/t5-base-spellchecker | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 93 | 2023-01-27T14:30:56Z | ---
tags:
- generated_from_trainer
model-index:
- name: digikala_products_parsbert_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# digikala_products_parsbert_model
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 25 | 7.8477 |
| No log | 2.0 | 50 | 7.0014 |
| No log | 3.0 | 75 | 6.3235 |
| No log | 4.0 | 100 | 5.6651 |
| No log | 5.0 | 125 | 4.9101 |
| No log | 6.0 | 150 | 4.2448 |
| No log | 7.0 | 175 | 3.8656 |
| No log | 8.0 | 200 | 3.4329 |
| No log | 9.0 | 225 | 3.3204 |
| No log | 10.0 | 250 | 3.0740 |
| No log | 11.0 | 275 | 2.9556 |
| No log | 12.0 | 300 | 2.9938 |
| No log | 13.0 | 325 | 2.8620 |
| No log | 14.0 | 350 | 2.7879 |
| No log | 15.0 | 375 | 2.8619 |
| No log | 16.0 | 400 | 2.8521 |
| No log | 17.0 | 425 | 2.7920 |
| No log | 18.0 | 450 | 2.8494 |
| No log | 19.0 | 475 | 2.8209 |
| 4.1477 | 20.0 | 500 | 2.8471 |
| 4.1477 | 21.0 | 525 | 2.8478 |
| 4.1477 | 22.0 | 550 | 2.7904 |
| 4.1477 | 23.0 | 575 | 2.7961 |
| 4.1477 | 24.0 | 600 | 2.7494 |
| 4.1477 | 25.0 | 625 | 2.8250 |
| 4.1477 | 26.0 | 650 | 2.7439 |
| 4.1477 | 27.0 | 675 | 2.7539 |
| 4.1477 | 28.0 | 700 | 2.7635 |
| 4.1477 | 29.0 | 725 | 2.7742 |
| 4.1477 | 30.0 | 750 | 2.7711 |
| 4.1477 | 31.0 | 775 | 2.8243 |
| 4.1477 | 32.0 | 800 | 2.7547 |
| 4.1477 | 33.0 | 825 | 2.7690 |
| 4.1477 | 34.0 | 850 | 2.7178 |
| 4.1477 | 35.0 | 875 | 2.7554 |
| 4.1477 | 36.0 | 900 | 2.7701 |
| 4.1477 | 37.0 | 925 | 2.7953 |
| 4.1477 | 38.0 | 950 | 2.8062 |
| 4.1477 | 39.0 | 975 | 2.7637 |
| 2.772 | 40.0 | 1000 | 2.7675 |
| 2.772 | 41.0 | 1025 | 2.7953 |
| 2.772 | 42.0 | 1050 | 2.8003 |
| 2.772 | 43.0 | 1075 | 2.7484 |
| 2.772 | 44.0 | 1100 | 2.7292 |
| 2.772 | 45.0 | 1125 | 2.7287 |
| 2.772 | 46.0 | 1150 | 2.6998 |
| 2.772 | 47.0 | 1175 | 2.7381 |
| 2.772 | 48.0 | 1200 | 2.7196 |
| 2.772 | 49.0 | 1225 | 2.7450 |
| 2.772 | 50.0 | 1250 | 2.7293 |
| 2.772 | 51.0 | 1275 | 2.7216 |
| 2.772 | 52.0 | 1300 | 2.7981 |
| 2.772 | 53.0 | 1325 | 2.7405 |
| 2.772 | 54.0 | 1350 | 2.7895 |
| 2.772 | 55.0 | 1375 | 2.7092 |
| 2.772 | 56.0 | 1400 | 2.7977 |
| 2.772 | 57.0 | 1425 | 2.7012 |
| 2.772 | 58.0 | 1450 | 2.7752 |
| 2.772 | 59.0 | 1475 | 2.7469 |
| 2.742 | 60.0 | 1500 | 2.7205 |
| 2.742 | 61.0 | 1525 | 2.7752 |
| 2.742 | 62.0 | 1550 | 2.6942 |
| 2.742 | 63.0 | 1575 | 2.6916 |
| 2.742 | 64.0 | 1600 | 2.8169 |
| 2.742 | 65.0 | 1625 | 2.7256 |
| 2.742 | 66.0 | 1650 | 2.6844 |
| 2.742 | 67.0 | 1675 | 2.7544 |
| 2.742 | 68.0 | 1700 | 2.7083 |
| 2.742 | 69.0 | 1725 | 2.7286 |
| 2.742 | 70.0 | 1750 | 2.7492 |
| 2.742 | 71.0 | 1775 | 2.6946 |
| 2.742 | 72.0 | 1800 | 2.7395 |
| 2.742 | 73.0 | 1825 | 2.7597 |
| 2.742 | 74.0 | 1850 | 2.7953 |
| 2.742 | 75.0 | 1875 | 2.7468 |
| 2.742 | 76.0 | 1900 | 2.7274 |
| 2.742 | 77.0 | 1925 | 2.7507 |
| 2.742 | 78.0 | 1950 | 2.7174 |
| 2.742 | 79.0 | 1975 | 2.7233 |
| 2.7185 | 80.0 | 2000 | 2.7405 |
| 2.7185 | 81.0 | 2025 | 2.7781 |
| 2.7185 | 82.0 | 2050 | 2.7534 |
| 2.7185 | 83.0 | 2075 | 2.7588 |
| 2.7185 | 84.0 | 2100 | 2.7469 |
| 2.7185 | 85.0 | 2125 | 2.6929 |
| 2.7185 | 86.0 | 2150 | 2.6785 |
| 2.7185 | 87.0 | 2175 | 2.7098 |
| 2.7185 | 88.0 | 2200 | 2.7622 |
| 2.7185 | 89.0 | 2225 | 2.7726 |
| 2.7185 | 90.0 | 2250 | 2.7144 |
| 2.7185 | 91.0 | 2275 | 2.7877 |
| 2.7185 | 92.0 | 2300 | 2.7665 |
| 2.7185 | 93.0 | 2325 | 2.7794 |
| 2.7185 | 94.0 | 2350 | 2.6788 |
| 2.7185 | 95.0 | 2375 | 2.7398 |
| 2.7185 | 96.0 | 2400 | 2.7277 |
| 2.7185 | 97.0 | 2425 | 2.8053 |
| 2.7185 | 98.0 | 2450 | 2.7537 |
| 2.7185 | 99.0 | 2475 | 2.7467 |
| 2.7057 | 100.0 | 2500 | 2.7191 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Biasface/DDDC2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-01-27T14:33:51Z | ---
license: cc-by-nc-4.0
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
# Anmokomergetest1
Anmokomergetest1 is a **merged** Stable Diffusion model that specializes in **anime**. It can generate **vivid**, **detailed**, and **wide-ranging** anime pictures in **high quality**, even from **simple descriptions**.
It is an authorized copy of **@路费三多's private model** from the Chinese AI community **NovelAI 中文频道**, merged from 3 models: Anything-V3.0 (Linaqruf), momoko-e (anonymous), and mergetest1 (another private model from the community).
It performs better at its **1k native resolution**; with **Hires.fix** it can produce **larger pictures** with more coherent structures.
**Provided in 2 versions**: an uncompressed checkpoint and a compressed safetensors file. The **CKPT** needs to be paired with a separate VAE or its output won't look vivid (**better for merging**), while the **safetensors** has the VAE baked in (**better for generating pictures**).
**No commercial usage!**
# Preview
*Hatsune Miku, upper body*

*Kagamine Rin, upper body*

*Hakurei Reimu, upper body*

*Tohsaka Rin, upper body*

*Misaka Mikoto, upper body*

*Yuuki Asuna, upper body*

# Usage
Use it like a normal Stable Diffusion v1.x model package; no external yaml config is needed.
**Recommended settings: Steps: 12-28, Sampler: DPM++ SDE Karras, CFG scale: 5-11, Resolution: 1024x1024**
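Outside of a WebUI, a rough sketch with recent versions of 🧨 diffusers could look like this (the local path assumes you have downloaded the safetensors release, which has the VAE baked in):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes the safetensors release (VAE baked in) has been downloaded locally
pipe = StableDiffusionPipeline.from_single_file(
    "./Anmokomergetest1.safetensors", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe(
    "masterpiece, best quality, Hatsune Miku, upper body",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, low quality",
    num_inference_steps=20,
    guidance_scale=7.0,
    width=1024,
    height=1024,
).images[0]
image.save("./miku.png")
```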
# Tags
For positive prompts, less is more; the same goes for negatives.
**Recommended positives for quality control:** *masterpiece, best quality*. Add other prompts as you like.
**Recommended negatives:** *lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artists name*
# Bias
**Low quality at too low or too high resolutions!!!**
Eyes render badly at low resolutions, and malformed limbs often appear. The training set contains copyrighted images.
# Formula
**Round 1:** Anything-V3.0 (Linaqruf) + momoko-e (anonymous), weighted sum at ratio 0.25.
**Round 2:** (the Round 1 result) + mergetest1 (private, from the Chinese AI community NovelAI 中文频道), weighted sum at ratio 0.5.
This yields Anmokomergetest1.ckpt, which is better suited for further merges.
After baking in the vae-ft-mse-840000-ema-pruned (StabilityAI) VAE, pruning the EMA weights, and compressing to FP16, you get Anmokomergetest1.safetensors, which is better for generating images. |
BigBoy/model | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T14:49:47Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BigSalmon/BertaMyWorda | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-27T14:50:32Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="EquinoxElahin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/BlankSlots | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 4 | 2023-01-27T14:55:06Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BigSalmon/Flowberta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | 2023-01-27T15:02:49Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Art-phys/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/FormalBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- endpoints-template
---
# FORK of FLAN-T5 XXL
> This is a fork of google/flan-t5-xxl implementing a custom `handler.py` as an example for how to use t5-11b with inference-endpoints on a single NVIDIA A10G.
You can deploy the flan-t5-xxl with a [1-click](https://ui.endpoints.huggingface.co/new?repository=philschmid/flan-t5-xxl-sharded-fp16).
Since we are using the "quantized" version, we can switch our instance type to **"GPU [medium] ยท 1x Nvidia A10G"**.

# TL;DR
If you already know T5, FLAN-T5 is just better at everything. For the same number of parameters, these models have been fine-tuned on more than 1000 additional tasks covering more languages as well.
As mentioned in the first few lines of the abstract:
> Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the [T5 model card](https://huggingface.co/t5-large).
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English, Spanish, Japanese, Persian, Hindi, French, Chinese, Bengali, Gujarati, German, Telugu, Italian, Arabic, Polish, Tamil, Marathi, Malayalam, Oriya, Panjabi, Portuguese, Urdu, Galician, Hebrew, Korean, Catalan, Thai, Dutch, Indonesian, Vietnamese, Bulgarian, Filipino, Central Khmer, Lao, Turkish, Russian, Croatian, Swedish, Yoruba, Kurdish, Burmese, Malay, Czech, Finnish, Somali, Tagalog, Swahili, Sinhala, Kannada, Zhuang, Igbo, Xhosa, Romanian, Haitian, Estonian, Slovak, Lithuanian, Greek, Nepali, Assamese, Norwegian
- **License:** Apache 2.0
- **Related Models:** [All FLAN-T5 Checkpoints](https://huggingface.co/models?search=flan-t5)
- **Original Checkpoints:** [All Original FLAN-T5 Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints)
- **Resources for more information:**
- [Research paper](https://arxiv.org/pdf/2210.11416.pdf)
- [GitHub Repo](https://github.com/google-research/t5x)
- [Hugging Face FLAN-T5 Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/t5)
|
BigSalmon/FormalBerta3 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-27T15:08:33Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: yarafa/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
BigSalmon/FormalRobertaa | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-27T15:08:51Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Art-phys/Taxi-v3_v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/GPT2HardArticleEasyArticle | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-01-27T15:20:56Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: robinsk8a/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
BigSalmon/GPTHeHe | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- image-to-image
- art
- artistic
- dreambooth
---
# spop style
This model features four different concepts: humans, outer space, forests, and landscapes in the specific style of SPOP: She-Ra and the Princesses of Power, the Dreamworks version.
This is a fine-tuned Stable Diffusion model, based on ```SD 1.5```.
The goal of this model is to capture the _style_ - not the individual characters featured in the series.
> ๐ **Disclaimer**: This is my favorite show. I won't go into that here but a lot of love went into this model.


## Model Usage
This model was trained on multiple concepts. Use the tokens below:
| Token | Description |
|-----------------------|--------------------------------------|
| ๐ค `dwspop style` | Uses concepts trained on people |
| ๐ `dwspop space` | Uses concepts trained on outer space |
| ๐ฒ `dwspop forest` | Uses concepts trained on forests |
| ๐ `dwspop landscape` | Uses concepts trained on landscapes |
### ๐ค dwspop style examples

This token is capable of handling multiple genders and uses `person`, which can then be used for `woman`, `man`,
or `cat-like woman`, or even `lizard`, `dog`, `snoop dog`... it's awesome:
- ```a photo of a person in a forest, dwspop style```
- ```a photo of a woman floating in space, dwspop style```
- ```a photo of a man inside of a palace standing near a window, dwspop style```
โ Negative prompt: ```((out of focus body)), ((out of focus face)), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))```
### ๐ dwspop space examples

The space token is versatile when prompting, especially when generating galaxies and solar systems. This token is capable of handling different camera angles when you describe your prompt as a `scene`.
- ```a scene of outer space with asteroids and rocks floating in space getting melted by a bright light, dwspop space```
- ```a scene of an outer space solar system with planets, stars and galaxies in the background, dwspop space```
- ```a scene of a planet in space with stars in the background, dwspop space```
โ Negative prompt: ```((out of focus face)), (((duplicate))), [out of frame], blurry, out of frame, ugly, blur, motion blur```
### 🌲 dwspop forest examples

The forest token is able to generate random forest scenes due to the regularization images that were used. When prompting, additional environmental objects are supported, such as `crystals`, `rocks`, `flowers`, `cottage`. Finally, mix in time of day: `sunrise`, `dawn`, `sunset`, `evening`.
- ```a beautiful photo of a path in a forest with glowing lights and rocks and trees on either side of the path, dwspop forest```
- ```a forest during night time with a full moon in the sky, dynamic lighting, bright lights, dwspop forest```
- ```a scene of an entrance to a huge forest with pink flowers, dynamic lighting, bright lights, dwspop forest```
โ Negative prompt: ```((out of focus face)), (((duplicate))), [out of frame], blurry, out of frame, ugly, blur, motion blur```
### ๐ dwspop landscape examples:

The landscape token is primarily for landscapes but also supports a small percentage of architecture. Blending your prompts to have both an establishing shot of a landscape with architecture woven in and out is where this token shines.
- ```a scene of a weapon shop that has many different swords hanging on the wall and arrows and staffs inside of barrels, a small shop with a tent in the background, dwspop landscape```
- ```a scene of a village with a waterfall, wooden stairs leading to the top of trees, dynamic lighting, dwspop landscape```
- ```a beautiful scene of a palace with wide doors and a fountain and flowers near a window, sunset, dynamic lighting, dwspop landscape```
โ Negative prompt: ```((out of focus face)), (((duplicate))), [out of frame], blurry, out of frame, ugly, blur, motion blur```
---
## ๐งจ Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
see [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
Export the model:
- [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx)
- [MPS](https://huggingface.co/docs/diffusers/optimization/mps)
- [FLAX/JAX](https://huggingface.co/blog/stable_diffusion_jax)
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "zuleo/spop"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Perfectly-centered close up portrait-photograph of a person, marketplace in the background, sunrise, dwspop style"
image = pipe(prompt).images[0]
image.save("./spop_person.png")
```
---

## 📅 text2img Range Grids
It's always great to get a visual of what's going on with sampler, CFG scale, and other settings. See the examples below and tune them to your liking.
### Sampler
Using different samplers can produce different results. My favorites for cartoon scenes are `DPM++ 2S a Karras`, `DPM++ SDE Karras`, and `DPM adaptive`.
> ๐ฅ **DPM Adaptive**: DPM Adaptive does not use steps. This sampler is fixed depending on the CFG scale and additional configurations.
View the XY grids below for details:
- Space: https://huggingface.co/zuleo/spop/resolve/main/images/dwspop_space_grid.png
- Forest: https://huggingface.co/zuleo/spop/resolve/main/images/dwspop_forest_grid.png
- Landscape: https://huggingface.co/zuleo/spop/resolve/main/images/dwspop_landscape_grid.png
### Sampling Steps for person
Values between `25 - 38` are a good range for _most_ samplers but not all. See the Sampling Steps grid with each sampler below:
[Sampling Steps Grid](https://huggingface.co/zuleo/spop/resolve/main/images/sampler_grid.png)
### CFG Scale
Values between `7 - 11` are a good range. See the CFG Scale grid:
[CFG Scale Grid](https://huggingface.co/zuleo/spop/resolve/main/images/cfg_grid.png)
---
## 📅 img2img Grids
This model works well with img2img when `CFG scale`, `denoising`, and `sampling steps` (for adding more detail) are kept in balance.
### Denoising & Steps
Steps: `39 - 46`, Denoising: `0.49 - 0.6`:
- [Denoising & Steps Grid](https://huggingface.co/zuleo/spop/resolve/main/images/img2img_steps_denoising.png)
### Samplers & Denoising
Samplers: `all`, Denoising: `0.6 - 0.7`:
- [Samplers & Denoising Grid](https://huggingface.co/zuleo/spop/resolve/main/images/img2img_denoise_samplers.png)
### Samplers & CFG Scale
Samplers: `all`, CFG Scale: `7.0 - 11.0`:
- [Samplers & CFG Scale Grid](https://huggingface.co/zuleo/spop/resolve/main/images/img2img_sampler_cfg.png)
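For reference, a minimal img2img sketch with 🧨 diffusers using settings from the grids above (the init image path is an assumption):
```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("zuleo/spop", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

init_image = Image.open("./input.png").convert("RGB")  # assumed local starting image
image = pipe(
    prompt="a scene of a village with a waterfall, dynamic lighting, dwspop landscape",
    image=init_image,
    strength=0.6,         # denoising strength, per the grids above
    guidance_scale=9.0,   # CFG scale in the 7.0 - 11.0 range
    num_inference_steps=42,
).images[0]
image.save("./spop_img2img.png")
```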
---
## ๐ Regularization images
If you would like to use the regularization images from this training, see the datasets below:
- `space`: https://huggingface.co/datasets/3ee/regularization-space
- `forest`: https://huggingface.co/datasets/3ee/regularization-forest
- `landscape`: https://huggingface.co/datasets/3ee/regularization-landscape
---
โ If you enjoy this model, buy me a coffee [](https://ko-fi.com/3eegames)
--- |
BigSalmon/GPTNeo350MInformalToFormalLincoln | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-27T15:23:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3_v1_1M
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Art-phys/Taxi-v3_v1_1M", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BigSalmon/GPTNeo350MInformalToFormalLincoln2 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-27T15:31:04Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 170 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 170,
"warmup_steps": 17,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
BigSalmon/GPTNeo350MInformalToFormalLincoln3 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-01-27T15:33:18Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kaiseaanahuaaa-weird_on3/1674833638204/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1498169306604982274/r07rBzEP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1595644934865924096/VqPXH3gJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI CYBORG ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Weird One & Kartik๐ฅ</div>
<div style="text-align: center; font-size: 14px;">@kaiseaanahuaaa-weird_on3</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Weird One & Kartik๐ฅ.
| Data | Weird One | Kartik๐ฅ |
| --- | --- | --- |
| Tweets downloaded | 1178 | 3242 |
| Retweets | 132 | 2645 |
| Short tweets | 81 | 375 |
| Tweets kept | 965 | 222 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s98pr89/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kaiseaanahuaaa-weird_on3's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/u4mjkfu4) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/u4mjkfu4/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kaiseaanahuaaa-weird_on3')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
BigSalmon/GPTNeo350MInformalToFormalLincoln4 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2023-01-27T15:34:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.80 +/- 11.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
BigSalmon/GPTNeo350MInformalToFormalLincoln5 | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"has_space"
]
| text-generation | {
"architectures": [
"GPTNeoForCausalLM"
],
"model_type": "gpt_neo",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | 2023-01-27T15:40:40Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: deepset_deberta-v3-large-squad2_1.23e-04_9.40e-02_8_512_7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepset_deberta-v3-large-squad2_1.23e-04_9.40e-02_8_512_7
This model is a fine-tuned version of [deepset/deberta-v3-large-squad2](https://huggingface.co/deepset/deberta-v3-large-squad2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012263579392223837
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 7
- total_train_batch_size: 14
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.09401721879837398
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.25.0
- Pytorch 1.10.0
- Datasets 2.8.0
- Tokenizers 0.13.2
|
BigSalmon/InfillFormalLincoln | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-27T15:53:05Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# KerasCV Stable Diffusion in Diffusers ๐งจ๐ค
The pipeline contained in this repository was created using [this Space](https://huggingface.co/spaces/sayakpaul/convert-kerascv-sd-diffusers). The purpose is to convert the KerasCV Stable Diffusion weights in a way that is compatible with Diffusers. This allows users to fine-tune using KerasCV and use the fine-tuned weights in Diffusers taking advantage of its nifty features (like schedulers, fast attention, etc.).
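As a minimal usage sketch (the repo id below is a placeholder for this repository's Hub id):
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- substitute this repository's id on the Hub
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```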
|
BigSalmon/MrLincoln11 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2023-01-27T16:39:48Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bert-finetuned-ner-ubb-conll-endava-only-misc-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-ubb-conll-endava-only-misc-v2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0190
- Validation Loss: 0.0310
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1365, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2091 | 0.0391 | 0 |
| 0.0336 | 0.0322 | 1 |
| 0.0190 | 0.0310 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
BigSalmon/Rowerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-27T17:40:16Z |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-classification
- pytorch
library_name: ultralytics
library_version: 8.0.21
inference: false
model-index:
- name: uisikdag/fogsmog_v8
results:
- task:
type: image-classification
metrics:
- type: accuracy
value: 0.8375 # min: 0.0 - max: 1.0
name: top1 accuracy
- type: accuracy
value: 1 # min: 0.0 - max: 1.0
name: top5 accuracy
---
<div align="center">
<img width="640" alt="uisikdag/fogsmog_v8" src="https://huggingface.co/uisikdag/fogsmog_v8/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['fog', 'smog']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.23 ultralytics==8.0.21
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, postprocess_classify_output
# load model
model = YOLO('uisikdag/fogsmog_v8')
# set model parameters
model.overrides['conf'] = 0.25 # model confidence threshold
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].probs) # [0.1, 0.2, 0.3, 0.4]
processed_result = postprocess_classify_output(model, result=results[0])
print(processed_result) # {"cat": 0.4, "dog": 0.6}
```
|
Bimal/my_bot_model | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-01-27T18:13:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.31 +/- 17.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Brunomezenga/NN | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T21:16:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo-mlp
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.32 +/- 23.98
name: mean_reward
verified: false
---
# **ppo-mlp** Agent playing **LunarLander-v2**
This is a trained model of a **ppo-mlp** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Brykee/DialoGPT-medium-Morty | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2023-01-27T21:47:23Z | ---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for davit_base.msft_in1k
A DaViT image classification model. Trained on ImageNet-1k by paper authors.
Thanks to [Fredo Guan](https://github.com/fffffgggg54) for bringing the classification backbone to `timm`.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.0
- GMACs: 15.5
- Activations (M): 40.7
- Image size: 224 x 224
- **Papers:**
- DaViT: Dual Attention Vision Transformers: https://arxiv.org/abs/2204.03645
- **Original:** https://github.com/dingmyu/davit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('davit_base.msft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'davit_base.msft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 768, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'davit_base.msft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
|model |top1 |top1_err|top5 |top5_err|param_count|img_size|crop_pct|interpolation|
|---------------------|------|--------|------|--------|-----------|--------|--------|-------------|
|davit_base.msft_in1k |84.634|15.366 |97.014|2.986 |87.95 |224 |0.95 |bicubic |
|davit_small.msft_in1k|84.25 |15.75 |96.94 |3.06 |49.75 |224 |0.95 |bicubic |
|davit_tiny.msft_in1k |82.676|17.324 |96.276|3.724 |28.36 |224 |0.95 |bicubic |
## Citation
```bibtex
@inproceedings{ding2022davit,
title={DaViT: Dual Attention Vision Transformer},
author={Ding, Mingyu and Xiao, Bin and Codella, Noel and Luo, Ping and Wang, Jingdong and Yuan, Lu},
booktitle={ECCV},
year={2022},
}
```
|
Bubb-les/DisloGPT-medium-HarryPotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xnli
model-index:
- name: bert-base-arabic-camelbert-msa-sixteenth-xnli-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-arabic-camelbert-msa-sixteenth-xnli-finetuned
This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth) on the xnli dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6446
- eval_accuracy: 0.7295
- eval_f1: 0.7286
- eval_runtime: 72.1081
- eval_samples_per_second: 69.479
- eval_steps_per_second: 69.479
- epoch: 1.0
- step: 12271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
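## Example usage
A hedged inference sketch for premise/hypothesis pairs (the model id is a placeholder, and the label order is assumed to follow the XNLI convention; check `model.config.id2label` to be sure):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# placeholder model id; replace with this repository's actual id
model_id = "your-username/bert-base-arabic-camelbert-msa-sixteenth-xnli-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "ذهب الولد إلى المدرسة"   # "The boy went to school"
hypothesis = "الولد في المدرسة"     # "The boy is at school"

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# print one probability per class, using the labels stored in the config
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```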
|
BumBelDumBel/ZORK_AI_SCIFI | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Finetuning_Model_1
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('datboi223/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
Buntan/BuntanAI | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-27T22:08:27Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3173 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 9519,
"warmup_steps": 952,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: UNEDMediaBiasTeam_at_SemEval23_Task3_Subtask3_PRE_BABE_dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UNEDMediaBiasTeam_at_SemEval23_Task3_Subtask3_PRE_BABE_dataset
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2188
- F1: 0.5660
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5684 | 1.0 | 24 | 0.8998 | 0.4355 |
| 0.3 | 2.0 | 48 | 0.9073 | 0.4625 |
| 0.3634 | 3.0 | 72 | 0.8815 | 0.4868 |
| 0.2344 | 4.0 | 96 | 0.9457 | 0.4848 |
| 0.1712 | 5.0 | 120 | 0.9737 | 0.4945 |
| 0.148 | 6.0 | 144 | 1.0416 | 0.4896 |
| 0.0662 | 7.0 | 168 | 1.1345 | 0.4838 |
| 0.046 | 8.0 | 192 | 1.0935 | 0.5353 |
| 0.0398 | 9.0 | 216 | 1.1288 | 0.5376 |
| 0.0563 | 10.0 | 240 | 1.2188 | 0.5660 |
| 0.0449 | 11.0 | 264 | 1.2390 | 0.5160 |
| 0.0472 | 12.0 | 288 | 1.3779 | 0.5069 |
| 0.0122 | 13.0 | 312 | 1.4218 | 0.5442 |
| 0.0037 | 14.0 | 336 | 1.4859 | 0.5432 |
| 0.0557 | 15.0 | 360 | 1.5124 | 0.5510 |
| 0.0038 | 16.0 | 384 | 1.5364 | 0.5542 |
| 0.0043 | 17.0 | 408 | 1.5484 | 0.5589 |
| 0.0022 | 18.0 | 432 | 1.6063 | 0.5554 |
| 0.0044 | 19.0 | 456 | 1.6013 | 0.5268 |
| 0.0023 | 20.0 | 480 | 1.6161 | 0.4802 |
| 0.002 | 21.0 | 504 | 1.6622 | 0.4783 |
| 0.0016 | 22.0 | 528 | 1.6737 | 0.4812 |
| 0.002 | 23.0 | 552 | 1.6776 | 0.5250 |
| 0.0019 | 24.0 | 576 | 1.7027 | 0.4800 |
| 0.0015 | 25.0 | 600 | 1.6897 | 0.5211 |
| 0.0018 | 26.0 | 624 | 1.6982 | 0.5211 |
| 0.0015 | 27.0 | 648 | 1.7174 | 0.4781 |
| 0.0019 | 28.0 | 672 | 1.7269 | 0.4781 |
| 0.0016 | 29.0 | 696 | 1.7323 | 0.5133 |
| 0.0304 | 30.0 | 720 | 1.7265 | 0.5172 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
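## Example usage
A hedged usage sketch with the `text-classification` pipeline (the model id is a placeholder, and the label names depend on the fine-tuning setup; inspect the returned labels rather than assuming them):
```python
from transformers import pipeline

# placeholder model id; replace with this repository's actual id
classifier = pipeline(
    "text-classification",
    model="your-username/UNEDMediaBiasTeam_at_SemEval23_Task3_Subtask3_PRE_BABE_dataset",
)

sentence = "The senator's reckless plan will obviously ruin the economy."
print(classifier(sentence))
```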
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 18 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: Goodreads_Books_Reviews_ALBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Goodreads_Books_Reviews_ALBERT
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8571
- F1: 0.6190
- Accuracy: 0.6441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 0.8531 | 1.0 | 14691 | 0.8571 | 0.6190 | 0.6441 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 71 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: Goodreads_Books_Reviews_BERT_50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Goodreads_Books_Reviews_BERT_50
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9665
- F1: 0.6343
- Accuracy: 0.6343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 0.9881 | 1.0 | 4219 | 0.9548 | 0.6185 | 0.6183 |
| 0.8735 | 2.0 | 8438 | 0.9506 | 0.6278 | 0.626 |
| 0.7147 | 3.0 | 12657 | 0.9665 | 0.6343 | 0.6343 |
| 0.6294 | 4.0 | 16876 | 1.0249 | 0.6341 | 0.6334 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"has_space"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19,850 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-wikisql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1165
- Rouge2 Precision: 0.8267
- Rouge2 Recall: 0.7345
- Rouge2 Fmeasure: 0.7706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.2196 | 1.0 | 3569 | 0.1709 | 0.7843 | 0.6963 | 0.7307 |
| 0.1767 | 2.0 | 7138 | 0.1480 | 0.8031 | 0.7118 | 0.7477 |
| 0.1614 | 3.0 | 10707 | 0.1353 | 0.8115 | 0.72 | 0.7559 |
| 0.148 | 4.0 | 14276 | 0.1287 | 0.8165 | 0.7244 | 0.7604 |
| 0.1406 | 5.0 | 17845 | 0.1242 | 0.8207 | 0.7283 | 0.7646 |
| 0.1337 | 6.0 | 21414 | 0.1209 | 0.8238 | 0.7313 | 0.7676 |
| 0.1296 | 7.0 | 24983 | 0.1193 | 0.8252 | 0.7329 | 0.7691 |
| 0.1271 | 8.0 | 28552 | 0.1177 | 0.825 | 0.7329 | 0.7691 |
| 0.1222 | 9.0 | 32121 | 0.1167 | 0.8262 | 0.7341 | 0.7702 |
| 0.1229 | 10.0 | 35690 | 0.1165 | 0.8267 | 0.7345 | 0.7706 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
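## Example usage
A hedged generation sketch. The model id and the input prefix are assumptions: the exact prompt format used during fine-tuning is not documented here, so adjust it to match the training script.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# placeholder model id; replace with this repository's actual id
model_id = "your-username/t5-small-finetuned-wikisql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "translate English to SQL:" is an assumed prefix, common in WikiSQL fine-tuning recipes
question = "translate English to SQL: How many heads of the departments are older than 56?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```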
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26 | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 45 | null | Access to model SomyaDnD/DnDDragonborn is restricted and you are not in the authorized list. Visit https://huggingface.co/SomyaDnD/DnDDragonborn to ask for access. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6 | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 34 | null |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- image-classification
- pytorch
- awesome-yolov8-models
library_name: ultralytics
library_version: 8.0.21
inference: false
datasets:
- keremberke/chest-xray-classification
model-index:
- name: keremberke/yolov8s-chest-xray-classification
results:
- task:
type: image-classification
dataset:
type: keremberke/chest-xray-classification
name: chest-xray-classification
split: validation
metrics:
- type: accuracy
value: 0.94158 # min: 0.0 - max: 1.0
name: top1 accuracy
- type: accuracy
value: 1 # min: 0.0 - max: 1.0
name: top5 accuracy
---
<div align="center">
<img width="640" alt="keremberke/yolov8s-chest-xray-classification" src="https://huggingface.co/keremberke/yolov8s-chest-xray-classification/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['NORMAL', 'PNEUMONIA']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.23 ultralytics==8.0.21
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, postprocess_classify_output
# load model
model = YOLO('keremberke/yolov8s-chest-xray-classification')
# set model parameters
model.overrides['conf'] = 0.25 # model confidence threshold
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].probs) # [0.1, 0.2, 0.3, 0.4]
processed_result = postprocess_classify_output(model, result=results[0])
print(processed_result) # {"cat": 0.4, "dog": 0.6}
```
**More models available at: [awesome-yolov8-models](https://yolov8.xyz)** |
CAMeL-Lab/bert-base-arabic-camelbert-mix-poetry | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:1905.05700",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 987.17 +/- 78.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 62 | 2023-01-27T23:09:27Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: bald-or-not
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.30337077379226685
---
# bald or not?
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
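## Example usage
A hedged inference sketch (the model id and the image URL are placeholders):
```python
from transformers import pipeline

# placeholder model id; replace with this repository's actual id
classifier = pipeline("image-classification", model="your-username/bald-or-not")

# any local path or URL to a portrait image works here
print(classifier("https://example.com/portrait.jpg"))
```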
## Example Images
#### bald

|
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 132 | null | Access to model kazimurtaza/homebase is restricted and you are not in the authorized list. Visit https://huggingface.co/kazimurtaza/homebase to ask for access. |
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa | [
"pytorch",
"tf",
"bert",
"token-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,862 | 2023-01-27T23:19:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.65 +/- 22.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 855 | null | Access to model Pedrambbk/T5-base-poll-generation is restricted and you are not in the authorized list. Visit https://huggingface.co/Pedrambbk/T5-base-poll-generation to ask for access. |
CAMeL-Lab/bert-base-arabic-camelbert-mix | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"Arabic",
"Dialect",
"Egyptian",
"Gulf",
"Levantine",
"Classical Arabic",
"MSA",
"Modern Standard Arabic",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20,880 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.0+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
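## Example usage
A hedged question-answering sketch (the model id is a placeholder for this repository):
```python
from transformers import pipeline

# placeholder model id; replace with this repository's actual id
qa = pipeline(
    "question-answering",
    model="your-username/distilbert-base-uncased-finetuned-squad",
)

context = "The Eiffel Tower was completed in 1889 and stands in Paris, France."
print(qa(question="When was the Eiffel Tower completed?", context=context))
```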
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.97 +/- 0.57
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-half | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | Access to model dandyonrocks/sd1.5_vibs2 is restricted and you are not in the authorized list. Visit https://huggingface.co/dandyonrocks/sd1.5_vibs2 to ask for access. |
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 574 | null | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.86 +/- 0.40
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
CAMeL-Lab/bert-base-arabic-camelbert-msa | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,967 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: fogsmog_hfclass
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.91
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fogsmog_hfclass
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3700
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6766 | 1.0 | 25 | 0.6299 | 0.795 |
| 0.3444 | 2.0 | 50 | 0.3701 | 0.8625 |
| 0.2456 | 3.0 | 75 | 0.2988 | 0.885 |
| 0.1402 | 4.0 | 100 | 0.3076 | 0.905 |
| 0.1275 | 5.0 | 125 | 0.4505 | 0.8525 |
| 0.0909 | 6.0 | 150 | 0.3739 | 0.8825 |
| 0.0792 | 7.0 | 175 | 0.3642 | 0.885 |
| 0.0482 | 8.0 | 200 | 0.3812 | 0.885 |
| 0.0451 | 9.0 | 225 | 0.3290 | 0.9 |
| 0.0526 | 10.0 | 250 | 0.4004 | 0.8825 |
| 0.0575 | 11.0 | 275 | 0.2842 | 0.925 |
| 0.0457 | 12.0 | 300 | 0.3952 | 0.895 |
| 0.0505 | 13.0 | 325 | 0.4411 | 0.885 |
| 0.0324 | 14.0 | 350 | 0.4185 | 0.8925 |
| 0.0354 | 15.0 | 375 | 0.3347 | 0.9025 |
| 0.0443 | 16.0 | 400 | 0.2949 | 0.915 |
| 0.0305 | 17.0 | 425 | 0.3603 | 0.905 |
| 0.0234 | 18.0 | 450 | 0.3858 | 0.8875 |
| 0.0219 | 19.0 | 475 | 0.3541 | 0.91 |
| 0.0284 | 20.0 | 500 | 0.3700 | 0.91 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
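## Example usage
A hedged inference sketch (the model id and the image path are placeholders):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# placeholder model id; replace with this repository's actual id
model_id = "your-username/fogsmog_hfclass"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# placeholder image path; point this at your own photo
image = Image.open("sky.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```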
|
CAUKiel/JavaBERT | [
"pytorch",
"safetensors",
"bert",
"fill-mask",
"code",
"arxiv:2110.10404",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388 | 2023-01-27T23:58:24Z | ---
license: gpl-3.0
tags:
- vision
- image-classification
- Pokémon
widget:
- src: https://huggingface.co/torresflo/Poke-Model/resolve/main/examples/1.jpg
example_title: Bulbasaur
- src: https://huggingface.co/torresflo/Poke-Model/resolve/main/examples/2.jpg
example_title: Charizard
- src: https://huggingface.co/torresflo/Poke-Model/resolve/main/examples/3.jpg
example_title: Blastoise
---
# Poké Model
Poké Model is a Pokémon classifier created to be used with [Pokédex AI](https://github.com/torresflo/Pokedex-AI). It is a fine-tuned model of google/vit-base-patch16-224 that classifies Pokémon of the first generation.
More information on how to generate and how to use the model can be found on this [dedicated repository](https://github.com/torresflo/Poke-Model).
## License
Distributed under the GNU General Public License v3.0. See [here](https://www.gnu.org/licenses/gpl-3.0.en.html) for more information.
|
CBreit00/DialoGPT_small_Rick | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc
---
<img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/Title.png" align="left" style="width: 100%;"/>
```
```
# ๐ฐ๐ท K-์ ๋
๋ฏน์ค!
์์ ํ๊ตญ ์ค๊ณ ๋ฉ์ด ํ์ํ ๋ ์ฌ์ฉํ๋ ๋น๋ฒ์์ค!
## โ๏ธ ํน์ง
- ์ํผํ๊ณ ์์ ์ฌ๊ณ ์ ์ธ๋ชจ์ ํ์จ์ด ์ข์
- ๋ฐ์ค์ฌ (20 ~ 30% ๋ฐํฌ๋ฅด๋ฉ)
- ์ธ๊ตญ์ธ๋ ๊ด์ฐฎ๊ฒ ๋ฝํ
- ์ฑ์ํจ๊ณผ๋ ๊ฑฐ๋ฆฌ๊ฐ ๋ฉ์ด์!
<div style="width:auto; height:auto;">
<img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/K04.png" align="left" style="width:33.75%; margin: 1.5625%;"/>
<img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/K02.png" align="left" style="width:60%; margin: 1.5625%;"/>
<img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/K01.png" align="left" style="width:60%; margin: 1.5625%;"/>
<img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/K03.png" align="left" style="width:33.75%; margin: 1.5625%;"/>
</div>
```
ํ๋
๊ฐ ๋ฏธ๋๋ค!
```
<p><span><a style="color: rgb(255, 49, 47);" href="https://www.instagram.com/ai_unite/" id="isPasted">(ํด๋ฆญ) ์ธ์คํ๊ทธ๋จ์์ ๋ ๊ตฌ๊ฒฝํ์ธ์!</a></span></p>
## ๐จ๏ธ ์ฌ์ฉ๋ฒ
- Files and versions > Stable-diffusion ํด๋ ckpt ํ์ผ ํ์ธ
- ํ์ผ ๋๊ฐ๋ฅผ ๋ณธ์ธ WebUI Stable-diffusion ํด๋์ ๋ค์ด๋ก๋
- ๋ชจ๋ธ(์ฒดํฌํฌ์ธํธ) ๋ก๋
- K20์ผ๋ก ์๋ ํ์ ํ **๊ถ์ฅ๊ฐ**์ ๋น์ทํ๊ฒ ์ธํ
ํ๊ณ t2i ๊ทธ๋ฆผ ์์ฑ
- ๋ง์์ ๋๋ ๊ทธ๋ฆผ์ด ๋์ค๋ฉด K10์ ์ด์ฉํ i2i๋ก ์ด์ด ๋ค๋ฌ์ด์ฃผ๊ธฐ!
## ๐ K10 โ๏ธ K20 ์ฐจ์ด์
- K10์ ๊ทน์ค์ฌ, K20์ ๋ฐ์ค์ฌ์ ๊ฐ๊น์.
- K20์ ๊ทธ๋ฆผ์ ์ต์ด๋ก t2i ํ ๋ ์์ ์ ์ผ๋ก ๊ฒฐ๊ณผ๋ฌผ์ด ๋์ด.
- K10์ ๊ธฐ์กด ๋ง๋ค์ด์ง ๊ทธ๋ฆผ์ i2iํ ๋ ๊ท์ฝ๊ฒ ์ค์ฌํํด์ค.
- ๋จ์ ๋ฐํฌ๋ฅด๋ฉ ๋น์จ ์ฐจ์ด! ๋ ๋ค ์ฐ๋ฉด ์ข์ผ๋๊น ๋ ๋ค ๋ฐ์๊ฒ!
- (์ฉ๋๋๋ฌธ์ ํ๋๋ง ์จ์ผ๋๋ฉด K10 ์ถ์ฒ)
## ๐ K10 DP ?
- ๊ฐ์ข
์ธ๋ฌผํ Lora๋ฅผ ํฉ์ณ์ ์ฌ์ฉํ๊ณ ์ถ์ ๋ ์ด๊ฑธ๋ก ๋ณ๊ฒฝ.
- ๋จ๋
์ผ๋ก ์ฐ๋ฉด ๋ณ๋ก ์์์๊ฒ ๋์ด. Lora๋ ํฉ์ณ์ธ๋๋ง ์ฐ์ธ์.
- Lora ์ ์ฉ ์ ๊ณผ์ ํฉ์ด ์ผ์ด๋์ง ์๋๋ก ํํ๋ ฅ์ ์กฐ์ ํ ๋ฒ์ ๋ฏน์ค
<table style="width: 100%;">
<tbody>
<tr>
<td style="width: 1%;"></td>
<td style="width: 22.5%;">๐ K20 (t2i)</td>
<td style="width: 22.5%;">๐ข K10 (i2i)</td>
<td style="width: 3%;"></td>
<td style="width: 22.5%;">๐ K20 (t2i)</td>
<td style="width: 22.5%;">๐ข K10 (i2i)</td>
<td style="width: 1%;"></td>
</tr>
<tr>
<td style="width: 1%;"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/1Before.png"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/1After.png"></td>
<td style="width: 3%;"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/2Before.png"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/2After.png"></td>
<td style="width: 1%;"></td>
</tr>
<tr>
<td style="width: 1%;"></td>
<td style="width: 22.5%;">๐ K20 (t2i)</td>
<td style="width: 22.5%;">๐ข K10 (i2i)</td>
<td style="width: 3%;"></td>
<td style="width: 22.5%;">๐ K20 (t2i)</td>
<td style="width: 22.5%;">๐ข K10 (i2i)</td>
<td style="width: 1%;"></td>
</tr>
<tr>
<td style="width: 1%;"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/3Before.png"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/3After.png"></td>
<td style="width: 3%;"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/4Before.png"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/4After.png"></td>
<td style="width: 1%;"></td>
</tr>
<tr>
<td style="width: 1%;"></td>
<td style="width: 22.5%;">๐ K20 (t2i)</td>
<td style="width: 22.5%;">๐ข K10 (i2i)</td>
<td style="width: 3%;"></td>
<td style="width: 22.5%;">๐ K20 (t2i)</td>
<td style="width: 22.5%;">๐ข K10 (i2i)</td>
<td style="width: 1%;"></td>
</tr>
<tr>
<td style="width: 1%;"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/5Before.png"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/5After.png"></td>
<td style="width: 3%;"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/6Before.png"></td>
<td style="width: 22.5%;"><img src="https://huggingface.co/Unite/Union_mix/resolve/main/Preview/6After.png"></td>
<td style="width: 1%;"></td>
</tr>
</tbody>
</table>
```
```
## โญ ๊ถ์ฅ ํ๋กฌํํธ
- **ํ๊ตญ์ธ** : masterpiece, high quality, delicate, finely detailed, (photography, photorealistic), (a cute 16 year old korean girl:1.3), (bishoujo, loli:0.5), 1 girl, (korean mixed, bare face:0.9), [(beautiful eyes, detailed face:1.2):0.2], brown eyes, (light smile:1.2),
- **์ธ๊ตญ์ธ** : masterpiece, high quality, delicate, finely detailed, (photography, photorealistic), (a cute 16 year old ใ๊ตญ์ ใ girl:1.2), (bishoujo, loli:0.5), 1 girl, [(beautiful eyes, detailed face:1.2):0.2], brown eyes, (light smile:1.2),
## โ ๊ถ์ฅ ๋ค๊ฑฐํฐ๋ธ
- **Negatives** : (worst quality, low quality:1.4), (bad_prompt, bad_prompt_version2:0.8), [(deep_negative:1.2):0.1], (comic, cartoon, mature, infant, baby:0.5), (child, asian, flat nose:0.3),
## ๐บ ๊พธ์๊พธ ๋ฉ์ดํฌ์
๋ค๊ฑฐํฐ๋ธ
- [(eyeshadow, thick eyelashes:1.1), (heavy makeup, long eyelashes, eyelashes makeup:0.9), (eyeliner:0.7), (comic, cartoon, mature, infant, baby:0.5), (child, asian, flat nose:0.3), (aegyo sal:0.1)::0.9],
- ์งํ ์ธ๊ตญ์ธ ํ์ฅ์ด ๋์จ๋ค๋ฉด ์ ๋ด์ฉ์ ๋ค๊ฑฐํฐ๋ธ์ ์ง์ด๋ฃ์๊ฒ!
```
```
## โ๏ธ ๊ถ์ฅ ์ธํ
- **Clip skip** : 2
- **Sampler** : DPM++ 2S a Karras
- **VAE** : kl-f8-anime2
- **Sampling steps** : 20 ์ด์
- **CFG Scale** : 7 ~ 12 (8 ์ถ์ฒ)
- **Width / Height** : ๊ฐ๋ก ์ธ๋ก 480px ์ด์
## ๐ ๊ถ์ฅ ์
์ค์ผ์ผ
- **Hires. fix** : True
- **Upscaler** : R-ESRGAN General WDN 4xV3
- **Denoising strength** : 0.5
## ๐งฉ ์ผ๊ตด ๋ณต์ ์ฌ์ฉ ์ ๊ถ์ฅ์์น
- **Restore faces** : True
- **Face restoration model** : CodeFormer
- **CodeFormer Weight** : 0.75 ~ 0.9
```
```
## โ ๋จ์
- ์ ์ฒด ๋ํ
์ผ์ด๋ ์ง๊ฐ์ ์กฐ๊ธ ์ฝํจ ใ
ใ
๊ฐ๊ธ์ ์ด๋ฉด ์ท์ ์
ํ์ฃผ์ธ์!
- ์ฑ์ํ ์ฑ์ธ ์ผ๊ตด์ ์ํ๋ฉด ๋ค๋ฅธ ๋ชจ๋ธ์ ์ฐ๊ฑฐ๋ ์์ด์ฐ๋๊ฑธ ์ถ์ฒ!
|
CL/safe-math-bot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-28T00:01:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: electra-small-discriminator-finetuned-HC3-mix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-HC3-mix
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1812
- F1: 0.9206
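For orientation, a minimal inference sketch with the `transformers` pipeline is shown below. The hub path is a placeholder (the card does not state the owning namespace) and the label semantics are not documented here, so treat both as assumptions.

```python
# Hedged sketch: classifying a text with this fine-tuned discriminator.
# "your-namespace/..." is a placeholder -- substitute the actual hub path.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-namespace/electra-small-discriminator-finetuned-HC3-mix",
)

print(classifier("Black holes form when massive stars collapse under their own gravity."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- label meanings are not documented in this card
```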
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
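
A minimal sketch of how these hyperparameters might be expressed with `transformers.TrainingArguments` follows. The base model name comes from the card, but the dataset, number of labels, and evaluation strategy are assumptions, since they are not documented above.

```python
# Hedged sketch reproducing the listed hyperparameters with the Trainer API.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "google/electra-small-discriminator"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)  # assumed 2 labels

args = TrainingArguments(
    output_dir="electra-small-discriminator-finetuned-HC3-mix",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                     # mixed-precision training (Native AMP)
    evaluation_strategy="epoch",   # assumption: eval once per epoch, as in the results table
)

# The train/eval datasets are not documented in this card:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                   tokenizer=tokenizer)
# trainer.train()
```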
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1116 | 1.0 | 8956 | 0.2602 | 0.8838 |
| 0.0826 | 2.0 | 17912 | 0.1812 | 0.9206 |
| 0.0692 | 3.0 | 26868 | 0.2226 | 0.9143 |
| 0.0539 | 4.0 | 35824 | 0.2544 | 0.9126 |
| 0.0492 | 5.0 | 44780 | 0.2500 | 0.9175 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
CLAck/indo-mixed | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---

Cornflower is a comprehensive painting model based on Stable Diffusion, trained on specific illustration styles and merged with multiple models, so in principle its style differs somewhat from that of real-life human painters.
**Since the Cornflower model contains multiple files, you need to place all the files in the appropriate locations.**
### How to install?
Place **'cornflower_v7.safetensors'** and the **VAE file** in the Stable Diffusion model directory.
Place the .pt files from the **'embeddings'** folder in the embeddings directory.
Place **'cornflower_v7_phantom.pt'** from the hypernetwork folder in the Hypernetworks model directory.
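For example, the file placement could be scripted roughly as follows; the webui root, download folder, and VAE filename are assumptions, so adjust the paths to your own setup.

```python
# Hedged sketch: copying the Cornflower files into an AUTOMATIC1111 webui install.
# All paths and the VAE filename are assumptions -- adapt them to your setup.
import shutil
from pathlib import Path

webui = Path("~/stable-diffusion-webui").expanduser()
src = Path("~/Downloads/Cornflower").expanduser()

# Checkpoint and VAE file go to the Stable Diffusion model directory.
shutil.copy(src / "cornflower_v7.safetensors", webui / "models" / "Stable-diffusion")
shutil.copy(src / "cornflower_v7.vae.pt", webui / "models" / "Stable-diffusion")  # assumed VAE name

# Textual-inversion embeddings go to the embeddings directory.
for pt_file in (src / "embeddings").glob("*.pt"):
    shutil.copy(pt_file, webui / "embeddings")

# The hypernetwork goes to the Hypernetworks model directory.
shutil.copy(src / "cornflower_v7_phantom.pt", webui / "models" / "hypernetworks")
```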
### How to use?
After the installation is complete, open the webui, switch the checkpoint to 'cornflower_v7.safetensors', and set the Hypernetwork to 'cornflower_v7_phantom'.
The following parameters are recommended; DPM2 a Karras is the suggested sampler.
Steps: 20, Sampler: DPM2 a Karras, CFG scale: 7, Size: 640x960, Clip skip: 2, ENSD: 31337 |
CLEE/CLEE | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 558.00 +/- 245.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga moonlightlane -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga moonlightlane -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
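Alternatively, the downloaded agent can be loaded directly with Stable Baselines3, outside the RL Zoo scripts. The sketch below assumes the checkpoint ends up under `logs/` as in the commands above; the exact file layout may differ.

```python
# Hedged sketch: running the downloaded agent directly with SB3.
# The checkpoint path is an assumption based on the load_from_hub command above.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Reproduce the training preprocessing: AtariWrapper (via make_atari_env) + 4-frame stacking.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0), n_stack=4)

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip", env=env)

obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```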
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga moonlightlane
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|