| modelId | tags | pipeline_tag | config | downloads | first_commit | card |
|---|---|---|---|---|---|---|
AkshatSurolia/ConvNeXt-FaceMask-Finetuned
|
[
"pytorch",
"safetensors",
"convnext",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
image-classification
|
{
"architectures": [
"ConvNextForImageClassification"
],
"model_type": "convnext",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 56 | null |
---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 1544.89 +/- 623.99
name: mean_reward
verified: false
---
# **TRPO** Agent playing **Humanoid-v3**
This is a trained model of a **TRPO** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 2000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
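Since TRPO ships in SB3-Contrib rather than core SB3, the downloaded policy can also be loaded directly in Python. A minimal sketch (not part of the original card), assuming the default rl_zoo3 layout `logs/trpo/Humanoid-v3_<run_id>/Humanoid-v3.zip`; because `normalize: True`, faithful evaluation also needs the saved `VecNormalize` statistics, which `rl_zoo3.enjoy` restores automatically:
```python
# Minimal sketch: load the downloaded TRPO policy with SB3-Contrib.
# The path below assumes the default rl_zoo3 layout (run id may differ).
from sb3_contrib import TRPO

model = TRPO.load("logs/trpo/Humanoid-v3_1/Humanoid-v3.zip")
print(model.policy)  # inspect the MlpPolicy architecture
```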
|
AkshatSurolia/DeiT-FaceMask-Finetuned
|
[
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
image-classification
|
{
"architectures": [
"DeiTForImageClassification"
],
"model_type": "deit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 46 | null |
---
library_name: stable-baselines3
tags:
- Swimmer-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v3
type: Swimmer-v3
metrics:
- type: mean_reward
value: 148.22 +/- 5.52
name: mean_reward
verified: false
---
# **TRPO** Agent playing **Swimmer-v3**
This is a trained model of a **TRPO** agent playing **Swimmer-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env Swimmer-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env Swimmer-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env Swimmer-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env Swimmer-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env Swimmer-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env Swimmer-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.9999),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
AkshatSurolia/ICD-10-Code-Prediction
|
[
"pytorch",
"bert",
"transformers",
"text-classification",
"license:apache-2.0",
"has_space"
] |
text-classification
|
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 994 | null |
---
library_name: stable-baselines3
tags:
- Swimmer-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v3
type: Swimmer-v3
metrics:
- type: mean_reward
value: 361.63 +/- 0.84
name: mean_reward
verified: false
---
# **TRPO** Agent playing **Swimmer-v3**
This is a trained model of a **TRPO** agent playing **Swimmer-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env Swimmer-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env Swimmer-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env Swimmer-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env Swimmer-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env Swimmer-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env Swimmer-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.9999),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 1000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
AkshaySg/gramCorrection
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 4 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50-CD45RB-1000-att
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50-CD45RB-1000-att
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
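For readers reproducing this setup, the list above maps onto the standard `transformers.TrainingArguments` fields (the Adam betas and epsilon shown are the library defaults). A minimal sketch; the `output_dir` name is illustrative:
```python
from transformers import TrainingArguments

# Values copied from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="detr-resnet-50-CD45RB-1000-att",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```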
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4338 | 1.0 | 94 | 2.4137 |
| 2.9565 | 2.0 | 188 | 2.1738 |
| 2.7101 | 3.0 | 282 | 2.0122 |
| 2.7515 | 4.0 | 376 | 1.9646 |
| 2.724 | 5.0 | 470 | 2.1284 |
| 2.6193 | 6.0 | 564 | 1.9380 |
| 2.5032 | 7.0 | 658 | 1.9286 |
| 2.5342 | 8.0 | 752 | 1.9366 |
| 2.5519 | 9.0 | 846 | 1.9736 |
| 2.4988 | 10.0 | 940 | 1.8816 |
| 2.5101 | 11.0 | 1034 | 1.8454 |
| 2.4441 | 12.0 | 1128 | 1.8143 |
| 2.3857 | 13.0 | 1222 | 1.7919 |
| 2.2877 | 14.0 | 1316 | 1.7400 |
| 2.3013 | 15.0 | 1410 | 1.7409 |
| 2.3134 | 16.0 | 1504 | 1.7698 |
| 2.3423 | 17.0 | 1598 | 1.7581 |
| 2.3536 | 18.0 | 1692 | 1.7658 |
| 2.2957 | 19.0 | 1786 | 1.7329 |
| 2.274 | 20.0 | 1880 | 1.7335 |
| 2.2906 | 21.0 | 1974 | 1.7343 |
| 2.2492 | 22.0 | 2068 | 1.7080 |
| 2.2516 | 23.0 | 2162 | 1.7180 |
| 2.2574 | 24.0 | 2256 | 1.7081 |
| 2.2508 | 25.0 | 2350 | 1.7065 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
AkshaySg/langid
|
[
"multilingual",
"dataset:VoxLingua107",
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"license:apache-2.0"
] |
audio-classification
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: unagui/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Al/mymodel
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook; it
# downloads and unpickles the model dict from the Hub.
model = load_from_hub(repo_id="annelegendre/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
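Once loaded, the Q-table can be rolled out greedily. A minimal sketch continuing from the snippet above, assuming the pickled dict exposes a `qtable` array as in the Deep RL Course notebooks (the key name is an assumption) and the Gym ≥0.26 step API:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```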
|
AlErysvi/Erys
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebook; it
# downloads and unpickles the model dict from the Hub.
model = load_from_hub(repo_id="annelegendre/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
Alaeddin/convbert-base-turkish-ner-cased
|
[
"pytorch",
"convbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"ConvBertForTokenClassification"
],
"model_type": "convbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
language: en
license: mit
datasets:
- ronig/pdb_sequences
---
# PDB Protein BPE Tokenizer
A protein sequence tokenizer trained on [PDB Sequences](https://huggingface.co/datasets/ronig/pdb_sequences) with a vocabulary size of 1024.
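A minimal loading sketch with the `tokenizers` library, assuming the repo ships a standard `tokenizer.json`; the repo id and example sequence below are placeholders:
```python
from tokenizers import Tokenizer

# Placeholder repo id -- substitute the actual Hub repo for this tokenizer.
tok = Tokenizer.from_pretrained("your-username/pdb-bpe-tokenizer")
encoding = tok.encode("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")  # placeholder sequence
print(encoding.tokens)
```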
|
AlbertHSU/ChineseFoodBert
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 562.50 +/- 189.73
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga xaeroq -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga xaeroq -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga xaeroq
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
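To reproduce the reported mean reward outside the zoo scripts, a sketch using core SB3 utilities (the path is an assumption from the rl_zoo3 layout, and Atari ROMs must be installed):
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.vec_env import VecFrameStack

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)  # applies AtariWrapper, as in env_wrapper above
env = VecFrameStack(env, n_stack=4)  # matches frame_stack: 4
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```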
|
Alberto15Romero/GptNeo
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
metrics:
- type: mean_reward
value: 621.81 +/- 174.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env CarRacing-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env CarRacing-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 2}},
{'gym.wrappers.resize_observation.ResizeObservation': {'shape': 64}},
{'gym.wrappers.gray_scale_observation.GrayScaleObservation': {'keep_dim': True}}]),
('frame_stack', 2),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 'lin_1e-4'),
('max_grad_norm', 0.5),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 4000000.0),
('normalize', "{'norm_obs': False, 'norm_reward': True}"),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.GELU, '
'net_arch=dict(pi=[256], vf=[256]), )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
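The `lin_1e-4` learning rate is RL Zoo shorthand for a linear decay schedule; SB3 calls such a schedule with the remaining training progress (1.0 at the start, 0.0 at the end). A sketch of the equivalent callable:
```python
def linear_schedule(initial_value: float):
    """Return a schedule f(progress_remaining) that decays linearly to 0."""
    def schedule(progress_remaining: float) -> float:
        return progress_remaining * initial_value
    return schedule

lr = linear_schedule(1e-4)
print(lr(1.0), lr(0.5), lr(0.0))  # 1e-4 at the start, 5e-5 halfway, 0 at the end
```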
|
Aleenbo/Arcane
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
metrics:
- type: mean_reward
value: 596.65 +/- 160.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env CarRacing-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env CarRacing-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 2}},
{'gym.wrappers.resize_observation.ResizeObservation': {'shape': 64}},
{'gym.wrappers.gray_scale_observation.GrayScaleObservation': {'keep_dim': True}}]),
('frame_stack', 2),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 'lin_1e-4'),
('max_grad_norm', 0.5),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 4000000.0),
('normalize', "{'norm_obs': False, 'norm_reward': True}"),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.GELU, '
'net_arch=dict(pi=[256], vf=[256]), )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
|
Aleksandar/bert-srb-base-cased-oscar
|
[
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
metrics:
- type: mean_reward
value: 542.89 +/- 310.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env CarRacing-v0 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo --env CarRacing-v0 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env CarRacing-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env CarRacing-v0 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('clip_range', 0.2),
('ent_coef', 0.0),
('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 2}},
{'gym.wrappers.resize_observation.ResizeObservation': {'shape': 64}},
{'gym.wrappers.gray_scale_observation.GrayScaleObservation': {'keep_dim': True}}]),
('frame_stack', 2),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 'lin_1e-4'),
('max_grad_norm', 0.5),
('n_envs', 8),
('n_epochs', 10),
('n_steps', 512),
('n_timesteps', 4000000.0),
('normalize', "{'norm_obs': False, 'norm_reward': True}"),
('policy', 'CnnPolicy'),
('policy_kwargs',
'dict(log_std_init=-2, ortho_init=False, activation_fn=nn.GELU, '
'net_arch=dict(pi=[256], vf=[256]), )'),
('sde_sample_freq', 4),
('use_sde', True),
('vf_coef', 0.5),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
|
Aleksandar/bert-srb-ner
|
[
"pytorch",
"bert",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: 122.28 +/- 111.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of an **A2C** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.001),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.0008'),
('max_grad_norm', 0.5),
('n_envs', 32),
('n_steps', 8),
('n_timesteps', 200000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
Aleksandar/distilbert-srb-base-cased-oscar
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: 219.89 +/- 95.78
name: mean_reward
verified: false
---
# **A2C** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of an **A2C** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo a2c --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo a2c --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo a2c --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('ent_coef', 0.001),
('gae_lambda', 0.9),
('gamma', 0.99),
('learning_rate', 'lin_0.0008'),
('max_grad_norm', 0.5),
('n_envs', 32),
('n_steps', 8),
('n_timesteps', 200000000.0),
('normalize', True),
('normalize_advantage', False),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-2, ortho_init=False)'),
('use_rms_prop', True),
('use_sde', True),
('vf_coef', 0.4),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
Aleksandar1932/gpt2-pop
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -1.12 +/- 0.02
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **RecurrentPPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.2'),
('ent_coef', 0.001),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 'lin_3e-4'),
('n_envs', 32),
('n_epochs', 10),
('n_steps', 256),
('n_timesteps', 100000000.0),
('normalize', True),
('policy', 'MlpLstmPolicy'),
('policy_kwargs',
'dict( ortho_init=False, activation_fn=nn.ReLU, '
'lstm_hidden_size=64, enable_critic_lstm=True, '
'net_arch=dict(pi=[64], vf=[64]) )'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
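Recurrent policies need their LSTM state threaded through `predict()`. A minimal sketch with SB3-Contrib (the path is an assumption; with `normalize: True` the raw observations should additionally be normalized with the saved VecNormalize statistics, which `rl_zoo3.enjoy` handles for you):
```python
import gym
import numpy as np
from sb3_contrib import RecurrentPPO

model = RecurrentPPO.load("logs/ppo_lstm/BipedalWalkerHardcore-v3_1/BipedalWalkerHardcore-v3.zip")
env = gym.make("BipedalWalkerHardcore-v3")  # older Gym (<0.26) step API for brevity

obs = env.reset()
lstm_states = None
episode_start = np.ones((1,), dtype=bool)  # tells the policy to reset its memory
done = False
while not done:
    action, lstm_states = model.predict(obs, state=lstm_states,
                                        episode_start=episode_start, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_start = np.array([done])
env.close()
```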
|
Aleksandar1932/gpt2-rock-124439808
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.94 +/- 6.35
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r bonadio/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
Aleksandar1932/gpt2-soul
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -2.85 +/- 0.24
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **RecurrentPPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.2'),
('ent_coef', 0.001),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 'lin_3e-4'),
('n_envs', 32),
('n_epochs', 10),
('n_steps', 256),
('n_timesteps', 100000000.0),
('normalize', True),
('policy', 'MlpLstmPolicy'),
('policy_kwargs',
'dict( ortho_init=False, activation_fn=nn.ReLU, '
'lstm_hidden_size=64, enable_critic_lstm=True, '
'net_arch=dict(pi=[64], vf=[64]) )'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
AlekseyKorshuk/bert
|
[
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 31 | null |
---
library_name: stable-baselines3
tags:
- BipedalWalkerHardcore-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: RecurrentPPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalkerHardcore-v3
type: BipedalWalkerHardcore-v3
metrics:
- type: mean_reward
value: -14.95 +/- 35.98
name: mean_reward
verified: false
---
# **RecurrentPPO** Agent playing **BipedalWalkerHardcore-v3**
This is a trained model of a **RecurrentPPO** agent playing **BipedalWalkerHardcore-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo_lstm --env BipedalWalkerHardcore-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.2'),
('ent_coef', 0.001),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 'lin_3e-4'),
('n_envs', 32),
('n_epochs', 10),
('n_steps', 256),
('n_timesteps', 100000000.0),
('normalize', True),
('policy', 'MlpLstmPolicy'),
('policy_kwargs',
'dict( ortho_init=False, activation_fn=nn.ReLU, '
'lstm_hidden_size=64, enable_critic_lstm=True, '
'net_arch=dict(pi=[64], vf=[64]) )'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
Alessandro/model_name
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- Ant-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ARS
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v3
type: Ant-v3
metrics:
- type: mean_reward
value: 4762.99 +/- 159.24
name: mean_reward
verified: false
---
# **ARS** Agent playing **Ant-v3**
This is a trained model of an **ARS** agent playing **Ant-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ars --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ars --env Ant-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ars --env Ant-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo ars --env Ant-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ars --env Ant-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ars --env Ant-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('alive_bonus_offset', -1),
('delta_std', 0.025),
('learning_rate', 0.015),
('n_delta', 60),
('n_envs', 1),
('n_timesteps', 75000000.0),
('n_top', 20),
('normalize', 'dict(norm_obs=True, norm_reward=False)'),
('policy', 'LinearPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
Alfia/anekdotes
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- Walker2d-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v3
type: Walker2d-v3
metrics:
- type: mean_reward
value: 4637.07 +/- 15.65
name: mean_reward
verified: false
---
# **TD3** Agent playing **Walker2d-v3**
This is a trained model of a **TD3** agent playing **Walker2d-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env Walker2d-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Walker2d-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo td3 --env Walker2d-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Walker2d-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo td3 --env Walker2d-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env Walker2d-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('normalize', False)])
```
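The `noise_type`/`noise_std` entries correspond to the Gaussian action noise TD3 adds during training. A sketch of the equivalent SB3 object (the action dimension 6 is Walker2d-v3's):
```python
import numpy as np
from stable_baselines3.common.noise import NormalActionNoise

n_actions = 6  # Walker2d-v3 has a 6-dimensional action space
action_noise = NormalActionNoise(mean=np.zeros(n_actions),
                                 sigma=0.1 * np.ones(n_actions))  # noise_std: 0.1
```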
|
Alifarsi/t5-small-finetuned-xsum
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
metrics:
- type: mean_reward
value: 3450.33 +/- 14.87
name: mean_reward
verified: false
---
# **TD3** Agent playing **Hopper-v3**
This is a trained model of a **TD3** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Hopper-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo td3 --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo td3 --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env Hopper-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('gradient_steps', 1),
('learning_rate', 0.0003),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('train_freq', 1),
('normalize', False)])
```
|
Aliraza47/BERT
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- Hopper-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v3
type: Hopper-v3
metrics:
- type: mean_reward
value: 3592.92 +/- 5.20
name: mean_reward
verified: false
---
# **TD3** Agent playing **Hopper-v3**
This is a trained model of a **TD3** agent playing **Hopper-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Hopper-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo td3 --env Hopper-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Hopper-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo td3 --env Hopper-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env Hopper-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('gradient_steps', 1),
('learning_rate', 0.0003),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('train_freq', 1),
('normalize', False)])
```
|
Aliskin/xlm-roberta-base-finetuned-marc
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 5597.50 +/- 614.54
name: mean_reward
verified: false
---
# **TD3** Agent playing **Humanoid-v3**
This is a trained model of a **TD3** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo td3 --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo td3 --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('gradient_steps', 1),
('learning_rate', 0.0003),
('learning_starts', 10000),
('n_timesteps', 2000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('train_freq', 1),
('normalize', False)])
```
|
Aliyyu/Keren
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- Walker2d-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TD3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v3
type: Walker2d-v3
metrics:
- type: mean_reward
value: 5216.86 +/- 30.97
name: mean_reward
verified: false
---
# **TD3** Agent playing **Walker2d-v3**
This is a trained model of a **TD3** agent playing **Walker2d-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env Walker2d-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Walker2d-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo td3 --env Walker2d-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Walker2d-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo td3 --env Walker2d-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env Walker2d-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
Amit29/t5-small-finetuned-xsum
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
---
𝓢𝓾𝓹𝓹𝓸𝓻𝓽 𝓜𝓮 𝓞𝓷\
🧋[**Buymeacoffee**](https://www.buymeacoffee.com/TheSkinnyRat) |☕[**Ko-Fi**](https://ko-fi.com/TheSkinnyRat) |🍵[**Saweria**](https://saweria.co/TheSkinnyRat)
# Info
> **Author:** [TheSkinnyRat](https://huggingface.co/TheSkinnyRat)\
> **Trainer:** [Linaqruf/kohya-trainer](https://github.com/Linaqruf/kohya-trainer)\
> **Type:** LoRA
# Trigger Words:
- `elaina \(majo no tabitabi\)`
- `saya \(majo no tabitabi\)`
# Description
This LoRA was trained on a large dataset of anime screenshots and fan-art images.\
Training images were extracted from the anime video using [anime_screenshot_pipeline](https://github.com/cyber-meow/anime_screenshot_pipeline).
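As an illustration only, a minimal diffusers sketch for trying the trigger words above; the base model, LoRA file name, and prompt are assumptions, not recommendations from the author:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# weight_name is left as a placeholder; pick the .safetensors file from the v1 folder.
pipe.load_lora_weights("TheSkinnyRat/LoRA-majo_no_tabitabi", weight_name="...")

image = pipe("elaina (majo no tabitabi), 1girl, witch hat, travelling, masterpiece",
             num_inference_steps=28).images[0]
image.save("elaina.png")
```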
# Download
> [Model download](https://huggingface.co/TheSkinnyRat/LoRA-majo_no_tabitabi/tree/main/v1)
# Training
> **Pre-trained Model:** [nai-wd.ckpt](https://huggingface.co/andite/training_models/tree/main)\
> **Dataset:** 5,640 images\
> **Repeats:** 1\
> **Total:** 5,640 images
# Preview
> - [https://civitai.com/models/26382](https://civitai.com/models/26382)
> - [https://civitai.com/models/26412](https://civitai.com/models/26412)
<details>
<summary><big>Image Preview</big></summary>





</details>
|
Andres2015/HiggingFaceTest
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 7064.26 +/- 1695.04
name: mean_reward
verified: false
---
# **TQC** Agent playing **Humanoid-v3**
This is a trained model of a **TQC** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo tqc --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo tqc --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo tqc --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('learning_starts', 10000),
('n_timesteps', 2000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
AndrewChar/model-QA-5-epoch-RU
|
[
"tf",
"distilbert",
"question-answering",
"ru",
"dataset:sberquad",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 109 | 2023-02-28T17:00:15Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -142.83 +/- 84.11
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'parsasam/PPO-LunarLander-v2-unit8'
'batch_size': 512
'minibatch_size': 128}
```
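Note that `batch_size` and `minibatch_size` are not free parameters in cleanRL's PPO; they are derived from the values above, as this quick check shows:
```python
num_envs, num_steps, num_minibatches = 4, 128, 4
batch_size = num_envs * num_steps               # 512, matching the entry above
minibatch_size = batch_size // num_minibatches  # 128, matching the entry above
print(batch_size, minibatch_size)
```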
|
AnonymousSub/AR_rule_based_roberta_only_classfn_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: openrail
---
### Check README.md
## 🦒 Backup
| Checkpoint Name | File Name | IPFS Link |
| --- | --- | --- |
| Chilloutmix | chilloutmix_NiPrunedFp32fix.safetensors | https://crustipfs.live/ipfs/QmPdAvVLWQYWoyRQ5yw6ahoZAH3CtwbL3srgwA1fHZpyaE?filename=chilloutmix_NiPrunedFp32fix.safetensors |
| Anything | anything-v4.5-pruned.safetensors | https://crustipfs.art/ipfs/QmTwPeYnyMsR954BJAeN5nMpPeXyPDa3gx9SzXhijqXmxk?filename=anything-v4.5-pruned.safetensors |
| Counterfeit | CounterfeitV25.safetensors | https://crustipfs.info/ipfs/QmXnBaJ7PYTGwRWqGQDpxnSZ9knJ2Qo55GABqbSKtG2K9X?filename=CounterfeitV25.safetensors |
| AOAOKO [PVC Style Model] | aoaokoPVCStyleModel_pvcAOAOKO.safetensors | https://crustipfs.art/ipfs/QmXZw3eLdAYFSeFzTD9TwsQNURRwc3WGiBJDcbBoQfpM59?filename=aoaokoPVCStyleModel_pvcAOAOKO.safetensors |

| LoRAs | File Name | IPFS Link |
| --- | --- | --- |
| koreanDollLikeness_v10 | koreanDollLikeness_v10.safetensors | https://crustipfs.live/ipfs/QmRp8w1LKhUmZ7DVFV4hJ4ynfH1yHW8vaucBGqEaQdTKRP?filename=koreanDollLikeness_v10.safetensors |
| koreanDollLikeness_v15 | koreanDollLikeness_v15.safetensors | https://crustipfs.live/ipfs/QmWsZPjhfYmsEwZMwWUmijsKk9vV7fD5aMPN9TvJr73wE6?filename=koreanDollLikeness_v15.safetensors |
| japaneseDollLikeness_v10 | japaneseDollLikeness_v10.safetensors | https://crustipfs.info/ipfs/QmTdjBJSsmt4EF4mLJBV72CBYPnGbacMReaLxYKF3pyVAx?filename=japaneseDollLikeness_v10.safetensors |
| Yae Miko Realistic Genshin | yaeMikoRealistic_yaemikoMixed.safetensors | https://crustipfs.live/ipfs/QmQ2ho3sjMGUsWcHspBjCNeq9mZrEWNzpQTRd2jqYTJZdo?filename=yaeMikoRealistic_yaemikoMixed.safetensors |
| Raiden Shogun | raidenShogunRealistic_raidenshogun.safetensors | https://crustipfs.art/ipfs/Qmby36vJGCu1NWzGkQUfnb2HWkYo1HkUG87YqR3qqHNr9m?filename=raidenShogunRealistic_raidenshogun.safetensors |
| Reverse translucent bunnysuit | reverseTranslucentBunny.safetensors | https://ipfs.teahouse.finance/ipfs/bafybeih3ur3vbdbckqxgl5u5bmzgvicvidfu5w6w7exncdgre34ivuptje |
| 上吊 hanged | hanged_v1.safetensors | https://mymodels.4everland.store/hanged_v1.safetensors |
| chengYuXin | chengYuXin_v10.safetensors | https://crustipfs.art/ipfs/QmQAt63HJ9hRt6MLduX1AJKv7W37UwCUZYbDxnvm5S4435?filename=chengYuXin_v10.safetensors |
| 小柔 xiaorouseeu | xiaorouseeu_v10.saftensors | https://crustipfs.art/ipfs/QmV15kmML2VJSLcySKcUfMM48Tki5ykU1YaGE7rThHjyVs?filename=xiaorouseeu_v10.saftensors |

| VAEs | File Name | IPFS Link |
| --- | --- | --- |
| Anything-v4.0.vae | anything-4.0.vae.pt | https://crustipfs.info/ipfs/QmNqqrXKNAw1wVMYD7Lyppz7G4iyomPxyifUT55k5AMBbu?filename=anything-4.0.vae.pt |
|
AnonymousSub/AR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: active_learn_econ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# active_learn_econ
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6845
- Accuracy: 0.5476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 11 | 0.6898 | 0.5476 |
| No log | 2.0 | 22 | 0.6845 | 0.5476 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.0
- Tokenizers 0.13.2
|
AnonymousSub/SR_cline
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight: 600">Past early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-12B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-12B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-12B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-12B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-12B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems statistically most likely
need not produce the most “accurate” text. Never rely on Pythia-12B to produce
factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-12B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-12B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the checkpoint saved at training step 3000 from its Hub branch.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-12B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
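These figures are internally consistent; a quick back-of-the-envelope check:
```python
tokens_per_step = 2_097_152   # batch size in tokens
total_steps = 143_000
print(tokens_per_step * total_steps)     # 299_892_736_000 tokens seen, as stated above
print(2_097_152_000 // tokens_per_step)  # 1000 training steps between even checkpoints
```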
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% the starting LR rate, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
AnonymousSub/SR_consert
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-remittance
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-remittance
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
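Since the card leaves usage unspecified, here is a hedged inference sketch with the standard Donut API; the repo path and task prompt token are assumptions that depend on how this checkpoint was fine-tuned:
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "<user>/donut-base-remittance"  # placeholder path
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("remittance.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
# Fine-tuned Donut checkpoints usually define their own task start token.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```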
|
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | 2023-02-28T19:14:56Z |
---
license: apache-2.0
language:
- hu
metrics:
- accuracy
model-index:
- name: huBERTPlain
results:
- task:
type: text-classification
metrics:
- type: f1
value: 0.91
widget:
- text: "A vegetációs időben az országban rendszeresen jelentkező jégesők ellen is van mód védekezni lokálisan, ki-ki a saját nagy értékű ültetvényén."
example_title: "Positive"
- text: "Magyarország több évtizede küzd demográfiai válsággal, és egyre több gyermekre vágyó pár meddőségi problémákkal néz szembe."
  example_title: "Negative"
- text: "Tisztelt fideszes, KDNP-s Képviselőtársaim!"
example_title: "Neutral"
---
## Model description
Cased fine-tuned BERT model for Hungarian, trained on manually annotated parliamentary pre-agenda speeches scraped from `parlament.hu`.
## Intended uses & limitations
The model can be used as any other (cased) BERT model. It has been tested on recognizing positive, negative, and neutral sentences in (parliamentary) pre-agenda speeches, where:
* 'Label_0': Neutral
* 'Label_1': Positive
* 'Label_2': Negative
## Training
Fine-tuned version of the original huBERT model (`SZTAKI-HLT/hubert-base-cc`), trained on HunEmPoli corpus.
## Eval results
| Class | Precision | Recall | F-Score |
|-----|------------|------------|------|
|Neutral|0.83|0.71|0.76|
|Positive|0.87|0.91|0.9|
|Negative|0.94|0.91|0.93|
|Macro AVG|0.88|0.85|0.86|
|Weighted AVG|0.91|0.91|0.91|
## Usage
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("poltextlab/HunEmBERT3")
model = AutoModelForSequenceClassification.from_pretrained("poltextlab/HunEmBERT3")
```
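Continuing the snippet above, a minimal classification sketch using the label mapping from this card (the id-to-label order is taken from the card, not verified against the config):
```python
import torch

labels = {0: "Neutral", 1: "Positive", 2: "Negative"}  # mapping listed in this card
inputs = tokenizer("Tisztelt fideszes, KDNP-s Képviselőtársaim!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])
```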
### BibTeX entry and citation info
If you use the model, please cite the following paper:
Bibtex:
```bibtex
@{
}
```
|
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 124.50 +/- 111.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1808.12 +/- 502.42
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, not part of this card):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = A2C.load(checkpoint)
```
|
AnonymousSub/bert_hier_diff_equal_wts_epochs_1_shard_10
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Sorenmc/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
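Continuing from the snippet above, a minimal greedy rollout; the `"qtable"` key and the classic `gym` step API are assumptions based on the course's pickle format:
```python
import numpy as np

state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```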
|
AnonymousSub/bert_snips
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | 2023-02-28T20:18:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 38.7231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9223
- Rouge1: 38.7231
- Rouge2: 16.4719
- Rougel: 32.3585
- Rougelsum: 35.8234
- Gen Len: 16.209
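As a hedged usage sketch (the repo path is a placeholder for wherever this checkpoint is published):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="<user>/t5-small-finetuned-xsum")
dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure! I'll be there soon."
print(summarizer(dialogue, max_length=32)[0]["summary_text"])
```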
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.1235 | 1.0 | 921 | 1.9223 | 38.7231 | 16.4719 | 32.3585 | 35.8234 | 16.209 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AnonymousSub/bert_triplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | 2023-02-28T20:18:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-8800-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9125
- name: F1
type: f1
value: 0.9113924050632912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-8800-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6248
- Accuracy: 0.9125
- F1: 0.9114
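For a quick smoke test, a minimal inference sketch (again, the repo path is a placeholder):
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="<user>/finetuning-sentiment-model-8800-samples")
print(clf("A surprisingly touching film with terrific performances."))
```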
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AnonymousSub/cline-techqa
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ElementBrawlerAI/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnonymousSub/cline_wikiqa
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1786.76 +/- 84.87
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename below are placeholders, not part of this card):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="<user>/<repo>", filename="<model>.zip")
model = A2C.load(checkpoint)
```
|
AnonymousSub/consert-techqa
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: spaladugu/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AnonymousSub/declutr-emanuals-techqa
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook (see the sketch below).
model = load_from_hub(repo_id="ElementBrawlerAI/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
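For completeness, a minimal `load_from_hub` helper, assuming the model was pushed as a pickled dict the way the course notebook does:
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict (Q-table plus metadata) from the Hub.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```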
|
AnonymousSub/declutr-model
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | 2023-02-28T20:55:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.78 +/- 17.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A hedged loading sketch (the repo id and filename below are placeholders for this upload):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename; substitute this model's actual Hub upload.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AnonymousSub/declutr-model_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.52 +/- 14.42
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A hedged loading sketch (the repo id and filename below are placeholders for this upload):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename; substitute this model's actual Hub upload.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AnonymousSub/declutr-model_wikiqa
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 806.61 +/- 75.06
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A hedged loading sketch (the repo id and filename below are placeholders for this upload):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id and filename; substitute this model's actual Hub upload.
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
AnonymousSub/declutr-roberta-papers
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
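Because the `Normalize()` module in the architecture below L2-normalizes the embeddings, cosine similarity reduces to a dot product; a short sketch:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # placeholder id from this card
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
# Embeddings are already L2-normalized, so cosine similarity equals the dot product.
print(util.cos_sim(embeddings[0], embeddings[1]))
```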
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 163 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 326,
"warmup_steps": 33,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
AnonymousSub/declutr-s10-AR
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 26 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Renforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.30 +/- 21.26
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AnonymousSub/declutr-techqa
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 5 | null |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.80 +/- 27.26
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AnonymousSub/roberta-base_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.53 +/- 0.50
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the course notebook.
model = load_from_hub(repo_id="ElementBrawlerAI/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
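A greedy rollout sketch, assuming the pickled dict stores the Q-table under a `qtable` key (as the course notebook does) and the pre-0.26 Gym step API:
```python
import gym
import numpy as np

env = gym.make(model["env_id"])  # add is_slippery=False here if the model assumes it
state = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
env.close()
```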
|
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-pokemons-128_300_epochs_1000_steps_final_Cont_cont
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
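# A hedged sketch (not from the training script): load this repo's pipeline and sample one image.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained(
    "Yagorka/ddpm-pokemons-128_300_epochs_1000_steps_final_Cont_cont"
)
image = pipeline().images[0]  # runs the full denoising loop and takes the first image
image.save("pokemon_sample.png")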
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 11
- eval_batch_size: 12
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yagorka/ddpm-pokemons-128_300_epochs_1000_steps_final_Cont_cont/tensorboard?#scalars)
|
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: apache-2.0
tags:
- text2text-generation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base_aspect
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base_aspect
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3517
- Rouge1: 69.6359
- Rouge2: 0.0
- Rougel: 69.5912
- Rougelsum: 69.621
## Model description
More information needed
## Intended uses & limitations
More information needed
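A minimal inference sketch (the Hub namespace and prompt format are not stated in the card, so the repo id and prompt below are hypothetical):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("<user>/flan-t5-base_aspect")  # hypothetical repo id
ids = tokenizer("extract the aspect: The battery life is great.", return_tensors="pt")
print(tokenizer.decode(model.generate(**ids, max_new_tokens=16)[0], skip_special_tokens=True))
```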
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 0.8151 | 1.0 | 837 | 0.3616 | 68.6064 | 0.0 | 68.5169 | 68.6362 |
| 0.3537 | 2.0 | 1674 | 0.3517 | 69.6359 | 0.0 | 69.5912 | 69.621 |
| 0.3373 | 3.0 | 2511 | 0.3533 | 70.3671 | 0.0 | 70.3671 | 70.4267 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_10
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | 2023-02-28T22:46:41Z |
---
datasets:
- SirNeural/flan_v2
metrics:
- perplexity
tags:
- flan
- opt
- peft
---
## ptune-FLAN-OPT-6.7b
OPT was introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd, 2022 by Meta AI.
This model is [facebook/opt-6.7b](https://hf.co/facebook/opt-6.7b) finetuned with prefix tuning (https://arxiv.org/abs/2101.00190) on the FLAN datasets (https://arxiv.org/pdf/2210.11416.pdf).
A 24-token prefix was fine-tuned over 1.5M new tokens of a FLAN task mixture, with the start of each example cut off when it was too long to fit within a 256-token context.
The model reaches a train ppl of 6.09 and an eval ppl of 5.91.
### Example COT (Chain-of-thought) Prompt:
```
Q: Answer the following yes/no question by reasoning step-by-step. Could a dandelion suffer from hepatitis?
A: Hepatitis only affects organisms with livers. Dandelions don’t have a liver. The answer is no.
Q: Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?
A: A haiku is a japanese three-line poem. That is short enough to fit in 280 characters. The answer is yes.
Q: Answer the following yes/no question by reasoning step-by-step. Can you reach space with a Cessna?
A:
```
```
> A Cessna is a small plane that can carry up to 6 people. The answer is no.
```
(Completed with Contrastive Sampling, top_k: 4, penalty_alpha: 0.6)
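A hedged loading-and-generation sketch with 🤗 PEFT (the adapter repo id below is a placeholder; the decoding settings mirror the contrastive-search parameters above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
base = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b")
model = PeftModel.from_pretrained(base, "<user>/ptune-FLAN-OPT-6.7b")  # placeholder adapter id

prompt = "Q: Answer the following yes/no question by reasoning step-by-step. Can you reach space with a Cessna?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, top_k=4, penalty_alpha=0.6, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```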
|
AnonymousSub/rule_based_roberta_hier_triplet_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | 2023-02-28T22:48:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: fedcsis-intent_baseline-xlm_r-leyzer_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fedcsis-intent_baseline-xlm_r-leyzer_en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4646
- Accuracy: 0.9082
- F1: 0.9082
## Model description
More information needed
## Intended uses & limitations
More information needed
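A minimal inference sketch (the Hub namespace for this checkpoint is not stated in the card, so the repo id below is hypothetical):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="<user>/fedcsis-intent_baseline-xlm_r-leyzer_en")  # hypothetical id
print(clf("play some jazz music"))
```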
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 3.4394 | 1.0 | 814 | 1.7504 | 0.6457 | 0.6457 |
| 1.355 | 2.0 | 1628 | 0.9345 | 0.8164 | 0.8164 |
| 0.9344 | 3.0 | 2442 | 0.5652 | 0.8841 | 0.8841 |
| 0.4972 | 4.0 | 3256 | 0.3784 | 0.9295 | 0.9295 |
| 0.2867 | 5.0 | 4070 | 0.2496 | 0.9562 | 0.9562 |
| 0.2216 | 6.0 | 4884 | 0.1962 | 0.9689 | 0.9689 |
| 0.1354 | 7.0 | 5698 | 0.1570 | 0.9716 | 0.9716 |
| 0.0957 | 8.0 | 6512 | 0.1376 | 0.9774 | 0.9774 |
| 0.0827 | 9.0 | 7326 | 0.1289 | 0.9783 | 0.9783 |
| 0.0711 | 10.0 | 8140 | 0.1248 | 0.9794 | 0.9794 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-large-t5large-English-to-BASH
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-t5large-English-to-BASH
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6448
- Nl2bash M: 0.7181
- Gen Len: 14.2079
## Model description
More information needed
## Intended uses & limitations
More information needed
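A minimal English-to-Bash inference sketch (the Hub namespace is not stated in the card, so the repo id below is hypothetical):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("<user>/t5-large-t5large-English-to-BASH")  # hypothetical id
ids = tokenizer("list all files modified in the last 7 days", return_tensors="pt")
print(tokenizer.decode(model.generate(**ids, max_new_tokens=32)[0], skip_special_tokens=True))
```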
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Nl2bash M | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 1.8995 | 1.0 | 561 | 1.1364 | 0.5124 | 13.7261 |
| 1.1669 | 2.0 | 1122 | 0.9093 | 0.5966 | 13.9349 |
| 0.9508 | 3.0 | 1683 | 0.8024 | 0.645 | 13.7716 |
| 0.8426 | 4.0 | 2244 | 0.7366 | 0.6696 | 13.9492 |
| 0.7574 | 5.0 | 2805 | 0.6994 | 0.6888 | 14.099 |
| 0.6884 | 6.0 | 3366 | 0.6756 | 0.6946 | 14.2498 |
| 0.6301 | 7.0 | 3927 | 0.6573 | 0.7101 | 14.3782 |
| 0.6031 | 8.0 | 4488 | 0.6476 | 0.7165 | 14.1793 |
| 0.5536 | 9.0 | 5049 | 0.6465 | 0.7164 | 14.1989 |
| 0.5443 | 10.0 | 5610 | 0.6448 | 0.7181 | 14.2079 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
AnthonyNelson/DialoGPT-small-ricksanchez
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2023-03-01T00:56:57Z |
## Pretraining Without Attention (BiGS) <br>
## Official JAX Models with maximum sequence length 512<br>
### [Paper](https://arxiv.org/abs/2212.10544) | [Hugging Face](https://huggingface.co/JunxiongWang) | [Colab](https://colab.research.google.com/drive/1Fz3OSRF3PZEF_dlnyJ3KZ8Bq35DfUrIB?usp=sharing)
<img width="537" alt="BiGS" src="https://user-images.githubusercontent.com/16102460/221464744-06b6538a-7e84-4c95-909f-239eab1dba71.png">
This [repository](https://github.com/jxiw/BiGS) contains BiGS's JAX model definitions, pretrained model weights, and training and fine-tuning code for our paper exploring the use of state-space models for pretraining. You can find more details in our paper.
[**Pretraining Without Attention**](https://arxiv.org/abs/2212.10544)<br>
[Junxiong Wang](), [Jing Nathan Yan](), [Albert Gu](), [Alexander M.Rush]()
<br>Cornell University, Cornell Tech, DeepMind<br>
Transformers have been essential to pretraining success in NLP. While other architectures have been used, downstream accuracy is either significantly worse, or requires attention layers to match standard benchmarks such as GLUE. This work explores pretraining without attention by using recent advances in sequence routing based on state-space models (SSMs). Our proposed model, Bidirectional Gated SSM (BiGS), combines SSM layers with a multiplicative gating architecture that has been effective in simplified sequence modeling architectures. The model learns static layers that do not consider pair-wise interactions. Even so, BiGS is able to match BERT pretraining accuracy on GLUE and can be extended to long-form pretraining of 4096 tokens without approximation. Analysis shows that while the models have similar accuracy, the approach has significantly different inductive biases than BERT in terms of interactions and syntactic representations.
### Load Masked Language Model
```python
import jax
from jax import numpy as jnp
from transformers import BertTokenizer
from BiGS.modeling_flax_bigs import FlaxBiGSForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = FlaxBiGSForMaskedLM.from_pretrained('JunxiongWang/BiGS_512')
text = "The goal of life is [MASK]."
encoded_input = tokenizer(text, return_tensors='np', padding='max_length', max_length=512)
output = model(**encoded_input)
tokenizer.convert_ids_to_tokens(jnp.flip(jnp.argsort(jax.nn.softmax(output.logits[encoded_input['input_ids']==103]))[0])[:10])
# output: ['happiness', 'love', 'peace', 'perfection', 'life', 'enlightenment', 'god', 'survival', 'freedom', 'good']
jnp.flip(jnp.sort(jax.nn.softmax(output.logits[encoded_input['input_ids']==103]))[0])[:10]
# probability: [0.16052087, 0.04306792, 0.03651363, 0.03468223, 0.02927081, 0.02549769, 0.02385132, 0.02261189, 0.01672831, 0.01619471]
text = "Paris is the [MASK] of France."
encoded_input = tokenizer(text, return_tensors='np', padding='max_length', max_length=512)
output = model(**encoded_input)
tokenizer.convert_ids_to_tokens(jnp.flip(jnp.argsort(jax.nn.softmax(output.logits[encoded_input['input_ids']==103]))[0])[:10])
# output: ['capital', 'centre', 'center', 'city', 'capitol', 'prefecture', 'headquarters', 'president', 'metropolis', 'heart']
jnp.flip(jnp.sort(jax.nn.softmax(output.logits[encoded_input['input_ids']==103]))[0])[:10]
# probability: [0.9981787 , 0.00034076, 0.00026992, 0.00026926, 0.00017787, 0.00004816, 0.00004256, 0.00003716, 0.00003634, 0.00002893]
```
### Load Sequence Classification Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForSequenceClassification
model = FlaxBiGSForSequenceClassification.from_pretrained('JunxiongWang/BiGS_512')
```
### Load Question Answering Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForQuestionAnswering
model = FlaxBiGSForQuestionAnswering.from_pretrained('JunxiongWang/BiGS_512')
```
### Load Multiple Choice Classification Model
```python
from BiGS.modeling_flax_bigs import FlaxBiGSForMultipleChoice
model = FlaxBiGSForMultipleChoice.from_pretrained('JunxiongWang/BiGS_512')
```
### GLUE Experiments
GLUE is made up of a total of 9 different tasks. You can use this Python [script](https://github.com/jxiw/BiGS/blob/main/run_glue2.py) to run GLUE tasks.
We fine-tune BiGS on a TPU-v3 with 8 cores. Since the per-device batch size is 2, the total batch size is 16.
```
export TASK_NAME=cola
python run_glue2.py \
--model_name_or_path JunxiongWang/BiGS_512 \
--task_name $TASK_NAME \
--max_seq_length 512 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--per_device_train_batch_size 2 \
--logging_steps 100 \
--eval_steps 500 \
--weight_decay 0.01 \
--output_dir BiGS_$TASK_NAME/
```
This gives the following results:
| Task | Metric | Result |
|-------|------------------------------|-------------|
| CoLA | Matthews corr | 67.9 |
| SST-2 | Accuracy | 93.8 |
| QQP | Accuracy/F1 | 91.4/88.4 |
| MNLI | Matched acc./Mismatched acc. | 86.2 |
| QNLI | Accuracy | 91.6 |
| MRPC | F1/Accuracy | 86.4/80.4 |
| STS-B | Pearson/Spearman corr. | 89.1/89.0 |
| RTE | Accuracy | 73.3 |
If you use our models, please cite the following papers.
```
@article{wang2022pretraining,
title={Pretraining Without Attention},
author={Wang, Junxiong and Yan, Jing Nathan and Gu, Albert and Rush, Alexander M},
journal={arXiv preprint arXiv:2212.10544},
year={2022}
}
```
|
Anthos23/my-awesome-model
|
[
"pytorch",
"tf",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | 2023-03-01T01:29:54Z |
---
license: creativeml-openrail-m
---
This is a collection directory for personal use. It is not original work; please do not download or redistribute it.
|
Anthos23/sentiment-roberta-large-english-finetuned-sentiment-analysis
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_small_A-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_small_A-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3096
- Rouge1: 0.1758
- Rouge2: 0.0431
- Rougel: 0.1616
- Rougelsum: 0.1616
- Gen Len: 8.2832
## Model description
More information needed
## Intended uses & limitations
More information needed
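A minimal summarization sketch (the Hub namespace is not stated in the card, so the repo id below is hypothetical; note the short generation lengths reported above):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="<user>/t5_small_A-finetuned-xsum")  # hypothetical id
print(summarizer("Long article text goes here ...", max_length=30, min_length=5))
```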
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.8609 | 1.0 | 527 | 1.6450 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.6144 | 2.0 | 1054 | 1.4493 | 0.0233 | 0.0006 | 0.0213 | 0.0214 | 1.7788 |
| 1.5156 | 3.0 | 1581 | 1.3990 | 0.1232 | 0.025 | 0.1137 | 0.1133 | 8.2124 |
| 1.3827 | 4.0 | 2108 | 1.3733 | 0.1276 | 0.0225 | 0.1211 | 0.121 | 8.8938 |
| 1.3815 | 5.0 | 2635 | 1.3547 | 0.1383 | 0.0366 | 0.1289 | 0.1298 | 8.3982 |
| 1.3366 | 6.0 | 3162 | 1.3406 | 0.1498 | 0.0347 | 0.1399 | 0.1407 | 8.6903 |
| 1.2798 | 7.0 | 3689 | 1.3313 | 0.1678 | 0.0373 | 0.1581 | 0.1564 | 8.6195 |
| 1.2619 | 8.0 | 4216 | 1.3205 | 0.1678 | 0.0398 | 0.1592 | 0.1581 | 9.2212 |
| 1.3182 | 9.0 | 4743 | 1.3170 | 0.1689 | 0.0369 | 0.1573 | 0.157 | 8.1593 |
| 1.2617 | 10.0 | 5270 | 1.3128 | 0.169 | 0.038 | 0.1555 | 0.1554 | 8.2832 |
| 1.2902 | 11.0 | 5797 | 1.3105 | 0.1647 | 0.0333 | 0.1518 | 0.1515 | 8.3363 |
| 1.1897 | 12.0 | 6324 | 1.3096 | 0.1758 | 0.0431 | 0.1616 | 0.1616 | 8.2832 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
ArBert/albert-base-v2-finetuned-ner-gmm-twitter
|
[
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | 2023-03-01T01:57:17Z |
---
license: creativeml-openrail-m
---
This is a 50/50 weight merge of KoboldAI's adventure-based language model GPT-J-6B-Skein and PygmalionAI's Pygmalion-6b.
https://huggingface.co/KoboldAI/GPT-J-6B-Skein
https://huggingface.co/PygmalionAI/pygmalion-6b
|
ArBert/albert-base-v2-finetuned-ner-gmm
|
[
"pytorch",
"tensorboard",
"albert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"AlbertForTokenClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: creativeml-openrail-m
tags:
- not-for-all-audiences
---
|
ArBert/bert-base-uncased-finetuned-ner-kmeans
|
[
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | 2023-03-01T02:22:46Z |
---
language:
- en
datasets:
- English
tags:
- text generation
- pytorch
- causal-lm
- Writer-data
- NeMo
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
---
# Palmyra 3B
## Model Description
Palmyra 3B was primarily pre-trained on English text. Note that a trace amount of non-English data, accessed through CommonCrawl, is still present in the training corpus. Like GPT-3, Palmyra 3B belongs to the family of decoder-only models and was therefore pre-trained with a self-supervised causal language modeling (CLM) objective. Palmyra 3B is evaluated using the prompts and general experimental setup of GPT-3.
## Use case
Palmyra 3B is the fastest of Writer’s LLMs and can perform important tasks such as text parsing, simple classification, address correction, and keyword recognition. Providing more context drives even better performance.
## Training data
Palmyra 3B was trained on Writer’s custom dataset.
## Intended Use and Limitations
Palmyra 3B learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pre-trained for, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-3B")
tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-3B")
```
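A short generation sketch continuing from the snippet above (the prompt and sampling settings are illustrative, not from the card):
```python
inputs = tokenizer("Correct this address: 1 Infinit Loop, Cuppertino CA", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```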
### Limitations and Biases
Palmyra 3B's core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting Palmyra, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on Palmyra 3B to produce factually correct results.
Palmyra 3B was trained on Writer’s custom data. As with all language models, it is difficult to predict how Palmyra 3B will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.
## Citation and Related Information
To cite this model:
```
@misc{Palmyra,
author = {Writer Engineering Team},
title = {{Palmyra 3B Parameter Autoregressive Language Model}},
howpublished = {\url{https://dev.writer.com}},
year = 2023,
month = March
}
```
|
ArBert/roberta-base-finetuned-ner-agglo-twitter
|
[
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2023-03-01T02:25:45Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-visquad-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-visquad-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
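A minimal extractive-QA sketch (the Hub namespace is not stated in the card, so the repo id below is hypothetical):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="<user>/xlm-roberta-base-finetuned-visquad-2")  # hypothetical id
print(qa(question="Where is Hanoi?", context="Hanoi is the capital of Vietnam."))
```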
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
ArBert/roberta-base-finetuned-ner-kmeans-twitter
|
[
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | 2023-03-01T02:33:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: rkulathumani/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rkulathumani/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1270
- Validation Loss: 0.2652
- Train Precision: 0.5982
- Train Recall: 0.3971
- Train F1: 0.4774
- Train Accuracy: 0.9447
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
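A minimal token-classification sketch, assuming the checkpoint at the repo id named above loads with the pipeline API:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="rkulathumani/my_awesome_wnut_model", aggregation_strategy="simple")
print(ner("My name is Sarah and I live in London"))
```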
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3311 | 0.3183 | 0.4125 | 0.1184 | 0.1840 | 0.9297 | 0 |
| 0.1630 | 0.2787 | 0.5688 | 0.3708 | 0.4490 | 0.9427 | 1 |
| 0.1270 | 0.2652 | 0.5982 | 0.3971 | 0.4774 | 0.9447 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
ArBert/roberta-base-finetuned-ner-kmeans
|
[
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
language:
- en
tags:
- glm
- thudm
---
GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks.
Please refer to our paper for a detailed description of GLM:
[GLM: General Language Model Pretraining with Autoregressive Blank Infilling](https://arxiv.org/abs/2103.10360) (ACL 2022)
Zhengxiao Du*, Yujie Qian*, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang (*: equal contribution)
Find more examples in our [Github repo](https://github.com/THUDM/GLM).
## Model description
`glm-2b` is pretrained on the [Pile](https://pile.eleuther.ai) dataset. It has 36 transformer layers, with hidden size 4096 and 64 attention heads in each layer. The model is pretrained with autoregressive blank filling objectives designed for natural language understanding, seq2seq, and language modeling. Find more details from our [repo](https://github.com/THUDM/GLM).
## How to use
Please refer to the [instructions](https://github.com/THUDM/GLM#hugging-face-hub) in our GitHub repo.
We use three different mask tokens for different tasks: `[MASK]` for short blank filling, `[sMASK]` for sentence filling, and `[gMASK]` for left-to-right generation. You can find examples of the different masks [here](https://github.com/THUDM/GLM#left-to-right-generation--blank-filling-interactive). The prediction always begins with a special `<|startofpiece|>` token and ends with a `<|endofpiece|>` token.
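For reference, a hedged loading sketch following the pattern documented in the GLM repo (the helpers `build_inputs_for_generation` and `eop_token_id` come from the repo's custom tokenizer code, hence `trust_remote_code=True`; treat the details as assumptions and check the repo):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-2b", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-2b", trust_remote_code=True)

# Short blank filling with the [MASK] token.
inputs = tokenizer("The capital of France is [MASK] .", return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=64)
outputs = model.generate(**inputs, max_length=64, eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))
```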
## Citation
Please cite our paper if you find this code useful for your research:
```
@article{DBLP:conf/acl/DuQLDQY022,
author = {Zhengxiao Du and
Yujie Qian and
Xiao Liu and
Ming Ding and
Jiezhong Qiu and
Zhilin Yang and
Jie Tang},
title = {{GLM:} General Language Model Pretraining with Autoregressive Blank Infilling},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), {ACL} 2022, Dublin, Ireland,
May 22-27, 2022},
pages = {320--335},
publisher = {Association for Computational Linguistics},
year = {2022},
}
```
|
Aracatto/Catto
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-01T02:44:26Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: es
datasets:
- fisher_callhome_spanish
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/fisher_callhome_spanish_conformer`
This model was trained by Yifan Peng using fisher_callhome_spanish recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 568bd0808f7509f9735282537db4c68dc3bdf376
pip install -e .
cd egs2/fisher_callhome_spanish/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/fisher_callhome_spanish_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Feb 28 20:50:34 CST 2023`
- python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.13.1`
- Git hash: `568bd0808f7509f9735282537db4c68dc3bdf376`
- Commit date: `Tue Feb 28 06:06:06 2023 -0500`
## exp/asr_train_asr_conformer6_raw_bpe1000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_conformer_asr_model_valid.acc.ave/callhome_devtest|3964|37989|68.2|23.8|7.9|6.5|38.3|79.2|
|decode_conformer_asr_model_valid.acc.ave/callhome_evltest|1829|19035|67.5|24.0|8.5|6.3|38.8|82.4|
|decode_conformer_asr_model_valid.acc.ave/fisher_dev|3979|40961|83.3|12.0|4.6|4.0|20.7|63.2|
|decode_conformer_asr_model_valid.acc.ave/fisher_dev2|3961|39888|83.7|12.1|4.1|4.7|20.9|63.2|
|decode_conformer_asr_model_valid.acc.ave/fisher_test|3641|40011|85.7|10.7|3.6|5.2|19.4|61.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_conformer_asr_model_valid.acc.ave/callhome_devtest|3964|181052|83.6|6.7|9.7|6.0|22.4|79.2|
|decode_conformer_asr_model_valid.acc.ave/callhome_evltest|1829|91266|83.1|6.8|10.1|5.7|22.6|82.4|
|decode_conformer_asr_model_valid.acc.ave/fisher_dev|3979|194297|93.0|2.7|4.3|3.9|10.9|63.2|
|decode_conformer_asr_model_valid.acc.ave/fisher_dev2|3961|189965|93.5|2.7|3.9|4.2|10.7|63.2|
|decode_conformer_asr_model_valid.acc.ave/fisher_test|3641|194507|94.6|2.2|3.2|4.7|10.1|61.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_conformer_asr_model_valid.acc.ave/callhome_devtest|3964|57692|65.2|19.2|15.6|4.6|39.4|79.2|
|decode_conformer_asr_model_valid.acc.ave/callhome_evltest|1829|28951|64.3|19.0|16.7|4.9|40.5|82.4|
|decode_conformer_asr_model_valid.acc.ave/fisher_dev|3979|55907|83.1|9.8|7.1|3.8|20.7|63.2|
|decode_conformer_asr_model_valid.acc.ave/fisher_dev2|3961|53966|83.8|10.0|6.2|4.3|20.4|63.2|
|decode_conformer_asr_model_valid.acc.ave/fisher_test|3641|54212|86.4|8.6|5.0|4.9|18.5|61.5|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer6.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer6_raw_bpe1000_sp
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe1000_sp/train/speech_shape
- exp/asr_stats_raw_bpe1000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe1000_sp/valid/speech_shape
- exp/asr_stats_raw_bpe1000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 4.0
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁que
- s
- ▁no
- ▁y
- ▁de
- ▁a
- ▁sí
- ▁la
- ▁es
- ▁en
- r
- n
- ▁yo
- ▁el
- ▁me
- ▁pero
- ▁lo
- ▁o
- ▁se
- ▁un
- ▁los
- ▁como
- ▁ah
- ▁con
- ▁por
- ▁si
- ▁eh
- ▁eso
- ▁una
- ▁porque
- a
- ▁mi
- ▁tu
- ▁para
- ▁más
- ▁ya
- ▁te
- do
- ▁bueno
- ▁hay
- l
- ▁ajá
- ▁pues
- u
- o
- ▁qué
- e
- c
- ▁le
- ▁entonces
- ▁está
- ra
- da
- ▁así
- ▁muy
- ▁las
- ▁claro
- ▁también
- ndo
- ▁todo
- ▁bien
- ▁uno
- d
- ▁sé
- re
- ▁mhm
- la
- to
- ta
- i
- ▁sea
- b
- t
- ▁ca
- é
- te
- es
- ▁son
- ▁aquí
- ▁al
- mos
- ▁gente
- ▁tiene
- ▁ha
- ▁mucho
- ca
- ▁cuando
- ▁oh
- ▁su
- ▁re
- g
- ▁mm
- ti
- ba
- ▁este
- ▁creo
- ▁va
- v
- lo
- ó
- le
- y
- ▁cómo
- í
- ci
- p
- na
- ▁del
- ce
- ▁verdad
- ro
- ▁tengo
- er
- ▁ellos
- ▁era
- ▁ahí
- ▁él
- ▁estoy
- ▁he
- ▁ahora
- m
- ▁
- f
- ma
- ▁llama
- ▁ma
- ▁cosas
- ri
- ▁años
- en
- ▁hace
- ▁co
- co
- ción
- h
- á
- de
- ▁e
- ▁nada
- ▁casa
- 'no'
- ▁algo
- ▁pa
- ▁estaba
- nta
- ga
- ar
- ▁in
- ▁están
- ▁gusta
- si
- ▁di
- ▁cu
- ▁dos
- mente
- ▁vi
- ▁veces
- ▁uh
- so
- ▁ella
- ▁tienen
- ▁ese
- di
- ▁fue
- ▁hacer
- se
- ▁siempre
- ▁so
- li
- ▁dice
- go
- ▁vez
- ▁soy
- ▁persona
- ▁ba
- ▁acá
- ▁digo
- ía
- ▁ejemplo
- ▁música
- ▁usted
- ron
- ▁ver
- cu
- ▁ve
- ▁ni
- ▁mucha
- sa
- ▁esa
- pe
- ría
- ve
- ▁ser
- ▁okay
- ▁pasa
- z
- ▁puede
- an
- ▁cosa
- ▁da
- ▁otra
- me
- ▁ay
- ▁habla
- al
- ▁sabes
- ▁después
- ja
- ▁tiempo
- nd
- ne
- ado
- mi
- pa
- ▁po
- ▁d
- ▁ju
- ▁i
- ▁otro
- ▁allá
- ▁m
- ica
- ▁estados
- tu
- ▁todos
- nte
- iendo
- va
- ▁donde
- ▁tan
- ▁mismo
- ▁esta
- ▁t
- ▁mo
- ▁ir
- ▁unidos
- ▁trabajo
- ▁poco
- tos
- cho
- ▁menos
- ▁ven
- ▁tenía
- ▁c
- ▁personas
- ▁tener
- za
- ▁mira
- ▁dónde
- mo
- ▁pro
- ▁mejor
- ▁li
- ▁nunca
- ▁decir
- ▁estás
- ▁l
- ▁tra
- ▁ciudad
- ▁per
- rio
- ▁pre
- ▁voy
- ▁exacto
- ▁tienes
- ▁había
- ▁f
- ▁sabe
- tra
- sta
- ▁muchos
- idad
- ▁país
- ▁p
- ▁mu
- ▁hasta
- ▁parte
- ▁igual
- ▁muchas
- ▁día
- mp
- ▁comp
- ▁parece
- ido
- ciones
- ▁pu
- ▁esto
- ▁nueva
- ▁nosotros
- jo
- ▁ex
- ▁problema
- ▁ro
- ▁pe
- ▁tema
- cia
- k
- ble
- ▁do
- ▁tres
- ▁van
- ▁sa
- ▁um
- ▁hm
- ▁estar
- ▁sal
- ▁s
- fi
- je
- ▁hola
- ▁york
- ▁tanto
- os
- ge
- ▁vida
- ▁familia
- ▁ra
- ▁les
- das
- ▁teléfono
- ie
- ▁mundo
- ▁hu
- ▁digamos
- ▁quiere
- nos
- ▁bastante
- ste
- j
- qui
- les
- ▁hablar
- res
- ▁b
- ▁u
- ▁español
- ▁tal
- ▁dios
- che
- ▁han
- ▁dinero
- pi
- ni
- ▁difícil
- st
- ▁v
- ▁gra
- dos
- ue
- ▁chi
- ▁em
- ▁dicen
- ▁antes
- tas
- era
- ▁año
- ▁vive
- ▁cierto
- ia
- rá
- ria
- po
- nt
- ▁religión
- que
- ▁ci
- ▁cinco
- ▁ne
- ió
- ▁cre
- ente
- ñ
- ir
- ▁car
- ▁to
- bo
- ▁casi
- men
- ▁niños
- ▁ti
- bu
- w
- il
- ▁cada
- ieron
- cha
- ▁dije
- x
- ▁pienso
- ▁g
- ▁llega
- ento
- ▁grande
- ▁todavía
- ▁realmente
- ▁alguna
- ▁todas
- ▁mar
- cio
- ▁acuerdo
- mb
- ▁segu
- rse
- ▁mal
- ▁estamos
- ▁tipo
- be
- ▁pone
- ▁eres
- ▁sin
- ▁tenemos
- encia
- ▁alguien
- cto
- tar
- ▁vos
- ▁fi
- ▁haciendo
- ▁quién
- ▁toda
- ▁viene
- io
- ura
- ▁pen
- ▁hombre
- ▁hacen
- ▁hablando
- ▁ayuda
- ▁hi
- ▁trata
- ▁hoy
- ito
- ten
- ▁na
- ▁exactamente
- ▁escucha
- ver
- un
- ▁conoce
- gue
- ño
- ▁filadelfia
- ita
- ▁poder
- ▁fa
- án
- ▁puedo
- ▁lugar
- ▁vamos
- ▁yeah
- ú
- ▁desde
- ▁pi
- lla
- ▁hora
- lu
- ▁otros
- ▁méxico
- ▁internet
- ▁res
- ▁solamente
- ban
- ▁usa
- ▁vas
- ▁fui
- ▁estado
- ▁lleva
- ▁mil
- ▁solo
- ▁entre
- ▁wow
- ▁éste
- ch
- den
- ▁frío
- ▁cree
- ▁caso
- ▁estudia
- ▁am
- ▁busca
- ▁dis
- ▁trabaja
- ▁fe
- ▁bo
- ▁hecho
- ▁pueden
- ▁poquito
- ter
- ▁vivir
- ▁amigo
- ▁cuánto
- ▁ga
- ▁esas
- ul
- ▁tampoco
- ▁hijos
- ▁unos
- el
- ▁cuatro
- ▁sus
- ▁com
- ▁aunque
- ▁seguro
- ▁ce
- ▁forma
- ▁debe
- los
- ▁ta
- cion
- ones
- ▁puedes
- ▁mamá
- ▁cuenta
- ▁mis
- ▁diferente
- ▁quiero
- ▁ho
- ▁vivo
- ▁celular
- ero
- ▁universidad
- ▁be
- ▁misma
- ▁deja
- ▁cuál
- ▁inglés
- ▁nombre
- dia
- ▁paga
- ▁ahorita
- ▁cambia
- gra
- ▁dan
- ▁allí
- ▁rico
- ▁puerto
- ▁buenas
- ▁manera
- ▁cri
- ▁días
- ▁ésta
- ▁cualquier
- ▁países
- ing
- tica
- ina
- ▁buena
- ▁nadie
- ▁decía
- ▁piensa
- ▁sobre
- ▁esposo
- ▁qui
- ▁chile
- tro
- ▁toma
- ▁dijo
- ▁quieren
- ▁película
- ▁semana
- ▁sistema
- ▁come
- ▁mujer
- ▁veo
- ▁n
- ida
- ▁otras
- ▁medio
- ón
- ▁diez
- ▁cerca
- ▁iba
- ico
- gan
- ▁necesita
- zo
- ▁oye
- ▁san
- ▁bu
- ▁entiendes
- tó
- rme
- ▁sería
- ▁argentina
- ▁momento
- miento
- ▁know
- ▁seis
- ▁fo
- ▁toca
- ▁manda
- ▁w
- ▁iglesia
- ▁dólares
- ▁ja
- ▁diferentes
- ista
- ▁escuela
- ▁fácil
- ▁sale
- ▁interesante
- ▁padre
- ▁gana
- ▁inter
- ▁papá
- mina
- ▁pregunta
- iente
- ▁realidad
- ▁conozco
- ▁dar
- sión
- ▁tenido
- ▁trabajar
- ▁pareja
- ▁gu
- ▁mío
- ▁hijo
- ig
- ▁vivi
- ▁computadora
- ▁visto
- ▁importante
- ▁pasado
- ▁vol
- ▁tenga
- ho
- ▁pagar
- ▁latino
- ▁corre
- ▁haber
- ▁televisión
- ▁luego
- ▁relación
- ▁señor
- ▁tanta
- ▁mujeres
- iza
- ▁treinta
- ▁idea
- ▁salir
- ▁americano
- ▁encanta
- ▁meses
- ▁pasó
- ▁programa
- ▁algún
- ▁pri
- ▁estuve
- ▁comprar
- ▁contra
- ▁bonito
- ▁colombia
- ▁compra
- ▁super
- ▁hacía
- ▁imp
- ▁cultura
- ▁fíjate
- ▁sino
- ▁poner
- ▁fuera
- ▁ri
- ▁veinte
- ▁buen
- ▁único
- ▁entiendo
- ▁depende
- ▁fu
- ▁españa
- ▁quizás
- ▁esté
- ▁gracias
- ▁hija
- tico
- ▁imagino
- q
- ▁quiera
- ▁comuni
- ▁espera
- ▁go
- ▁primera
- ▁clase
- ▁general
- ▁diciendo
- ▁carro
- ▁anda
- ▁somos
- ▁sabía
- ▁amiga
- ▁vaya
- ▁compañía
- ▁siete
- ▁viste
- ▁canadá
- ▁cuanto
- ▁empeza
- ▁mayor
- ▁lleg
- ▁ido
- ▁malo
- ▁debería
- ▁gobierno
- ▁edad
- ▁situación
- ▁trabajando
- tivo
- ▁calle
- ▁veinti
- ▁mayoría
- ▁plan
- ▁viviendo
- ▁termina
- ▁llamo
- ▁viaja
- ▁social
- ▁jo
- ▁ciento
- ▁joven
- ▁estudio
- ▁hablo
- ▁empieza
- ▁podía
- ▁baila
- ▁punto
- ▁matrimonio
- ▁primero
- ▁entiende
- ▁perdón
- ▁niña
- ▁pobre
- fect
- ▁hispano
- ▁auto
- ▁importa
- ▁tarde
- ▁vivía
- ▁gustaría
- ▁diferencia
- ▁pueda
- ▁experiencia
- ▁ángeles
- ▁pie
- ▁oportunidad
- ▁mañana
- ▁nuevo
- ▁ningún
- ▁k
- ▁razón
- ▁minutos
- vis
- ▁además
- ▁cha
- ▁nueve
- ▁comercial
- ▁demasiado
- ▁encontrar
- port
- ▁sentido
- ▁número
- ▁política
- ▁niño
- ▁grupo
- ▁pensar
- ▁hermano
- ísimo
- ▁raza
- ▁afuera
- ▁quince
- ▁sitio
- ▁policía
- ▁gusto
- ▁fuerte
- ▁miami
- ▁palabra
- ▁montón
- ▁cincuenta
- ▁falta
- ▁recuerdo
- ▁visita
- ▁normal
- ▁especialmente
- ▁hizo
- ▁salud
- ▁partido
- ▁plata
- ▁venezuela
- ▁ru
- ▁novia
- ▁cierta
- ▁educa
- ▁área
- ▁maneja
- ▁quien
- ▁acostumbra
- ▁conocí
- ▁doctor
- ▁inmigrante
- ▁básicamente
- ▁mexicano
- ▁comida
- ▁algunos
- ▁enseña
- ▁cuarenta
- ▁supuesto
- ▁panamá
- ▁religiones
- ▁cuestión
- ▁bi
- ▁final
- ▁encuentro
- ▁llevo
- ▁tenés
- ▁hermana
- ▁papel
- ▁existe
- ▁aprende
- ▁novio
- ▁encontr
- ▁cambio
- ▁negocio
- ▁atrás
- ▁podría
- ▁miedo
- ismo
- ▁increíble
- ▁pongo
- ▁aparte
- ▁osea
- ▁médico
- ▁acento
- ▁terrible
- ▁enferm
- ▁hablé
- ▁regresa
- ▁texas
- ▁jurado
- ▁última
- ▁peor
- ▁estuvo
- ▁dentro
- ▁color
- ▁viví
- ▁right
- ▁chicago
- ▁servicio
- ▁interesa
- ▁muchísimo
- ▁email
- ▁escucho
- ▁pronto
- ▁homosexual
- ▁rápido
- ▁esposa
- ▁principio
- ▁llen
- ▁hospital
- ▁imagínate
- ▁peligro
- ▁cuándo
- ▁uhum
- ▁apartamento
- ▁funciona
- ▁historia
- ▁tecnología
- ▁control
- ▁ninguna
- ▁juntos
- ▁encuentra
- ▁horrible
- ▁centro
- ▁atención
- ▁hubiera
- ▁totalmente
- ▁california
- ▁católica
- ▁molesta
- ▁gustó
- ▁información
- ▁méjico
- ▁suerte
- ▁argentino
- ▁divi
- ▁florida
- ▁guerra
- ▁aires
- ▁nieve
- ▁obviamente
- ▁pelea
- ▁nuestro
- ▁simplemente
- ▁pequeño
- ▁clima
- ▁europa
- ▁imagina
- ▁arriba
- ▁leyes
- ▁playa
- ▁violencia
- ▁conversa
- ▁fiesta
- ▁tranquilo
- ▁acepta
- ▁último
- ▁única
- ▁definitivamente
- ▁incluso
- ▁idioma
- ▁favor
- ▁blanco
- ▁presidente
- ▁invierno
- ▁separa
- ivo
- ▁primer
- ▁nuestra
- ▁bonita
- ▁culpa
- ▁vota
- ▁entendí
- ▁madre
- ▁conocido
- ▁arregl
- ▁acerca
- ▁washington
- ▁radio
- ▁opina
- ▁contigo
- ▁podemos
- ▁pensando
- ▁duro
- ▁conmigo
- ▁verano
- '0'
- ▁negro
- ▁mientras
- ▁nací
- ▁toronto
- ▁recibi
- ▁hicieron
- ▁boston
- ▁campo
- ▁repente
- ▁cocina
- ▁cuesta
- ▁conseguir
- ▁jóvenes
- ▁olvida
- ▁ochenta
- ▁nivel
- ▁sociedad
- ▁chiquito
- ▁guatemala
- ▁político
- ▁supongo
- ▁empezó
- ▁época
- ▁siquiera
- ▁agarra
- ▁católico
- ▁pennsylvania
- ▁medicina
- ▁entender
- ▁italia
- ▁especial
- ▁atlanta
- ▁navidad
- ▁cantidad
- ▁domingo
- ▁cristiano
- ▁opinión
- ▁crédito
- ▁noticias
- ▁houston
- ▁preocupa
- ▁mensaje
- ▁américa
- ▁perfecto
- ▁dijiste
- '1'
- '2'
- '5'
- _
- '-'
- '3'
- '6'
- '4'
- '9'
- '8'
- '7'
- A
- B
- ì
- à
- ç
- è
- ü
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 8k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_bpe1000_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202301'
distributed: false
```
</details>
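As a usage sketch (not part of the original recipe), ESPnet2 ASR models trained with a config like the one above are typically decoded through `espnet2.bin.asr_inference.Speech2Text`; the model tag and audio file below are placeholders, not values taken from this card:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# "espnet/model_tag" and "sample_8k.wav" are placeholders.
speech2text = Speech2Text.from_pretrained(
    "espnet/model_tag",
    ctc_weight=0.3,  # matches model_conf.ctc_weight in the config above
    beam_size=10,
)

speech, rate = soundfile.read("sample_8k.wav")  # the frontend above expects fs: 8k audio
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```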
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ArseniyBolotin/bert-multi-PAD-ner
|
[
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | 2023-03-01T04:13:06Z |
---
tags:
- frozenlake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-frozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: frozenlake-v1-4x4
type: frozenlake-v1-4x4
metrics:
- type: mean_reward
value: 0.76 +/- 0.43
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **frozenlake-v1**
This is a trained model of a **Q-Learning** agent playing **frozenlake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Shilash/q-learning-frozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
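`load_from_hub` and `evaluate_agent` come from the Deep RL course notebook rather than a published package; a minimal `load_from_hub` consistent with the call above might look like this (the course's exact implementation may differ):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table bundle from the Hub and unpickle it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```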
|
ArshdeepSekhon050/DialoGPT-medium-RickAndMorty
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- zh
tags:
- glm
- thudm
---
GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks.
Please refer to our paper for a detailed description of GLM:
[GLM: General Language Model Pretraining with Autoregressive Blank Infilling](https://arxiv.org/abs/2103.10360) (ACL 2022)
Zhengxiao Du*, Yujie Qian*, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang (*: equal contribution)
Find more examples in our [GitHub repo](https://github.com/THUDM/GLM).
## Model description
`glm-large-chinese` is pretrained on the [WuDaoCorpora](https://www.sciencedirect.com/science/article/pii/S2666651021000152) dataset. It has 24 transformer layers, with hidden size 1024 and 16 attention heads in each layer. The model is pretrained with autoregressive blank filling objectives designed for natural language understanding, seq2seq, and language modeling.
## How to use
Please refer to the [instructions](https://github.com/THUDM/GLM#hugging-face-hub) in our GitHub repo.
We use three different mask tokens for different tasks: `[MASK]` for short blank filling, `[sMASK]` for sentence filling, and `[gMASK]` for left-to-right generation. You can find examples of the different masks [here](https://github.com/THUDM/GLM#left-to-right-generation--blank-filling-interactive). Predictions always begin with a special `<|startofpiece|>` token and end with an `<|endofpiece|>` token.
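A minimal blank-filling sketch, following the pattern documented in the GLM repo; it assumes the checkpoint's remote code exposes `build_inputs_for_generation` and `eop_token_id`, so verify those names against the repo before relying on them:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# trust_remote_code=True loads GLM's custom tokenizer and model classes.
tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-large-chinese", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-large-chinese", trust_remote_code=True)
model = model.half().cuda()
model.eval()

# [MASK] marks a short blank for the model to fill in.
inputs = tokenizer("凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建。", return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=64)
inputs = inputs.to("cuda")
outputs = model.generate(**inputs, max_length=64, eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))
```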
## Citation
Please cite our paper if you find this code useful for your research:
```
@article{DBLP:conf/acl/DuQLDQY022,
author = {Zhengxiao Du and
Yujie Qian and
Xiao Liu and
Ming Ding and
Jiezhong Qiu and
Zhilin Yang and
Jie Tang},
title = {{GLM:} General Language Model Pretraining with Autoregressive Blank Infilling},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), {ACL} 2022, Dublin, Ireland,
May 22-27, 2022},
pages = {320--335},
publisher = {Association for Computational Linguistics},
year = {2022},
}
```
|
Aruden/DialoGPT-medium-harrypotterall
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
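These settings map directly onto 🤗 Transformers `TrainingArguments`; the sketch below is an assumed reconstruction (the `output_dir` is a placeholder, and anything not listed above is left at library defaults):
```python
from transformers import TrainingArguments

# output_dir is a placeholder; the remaining values mirror the list above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```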
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 |
| 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 |
| 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
ArvinZhuang/BiTAG-t5-large
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 4 | 2023-03-01T04:23:41Z |
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ChilloutMixSF API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment needed.
Replace the key in the code below, and change **model_id** to "chilloutmixsf".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Model link: [View model](https://stablediffusionapi.com/models/chilloutmixsf)
Credits: [View credits](https://civitai.com/?query=ChilloutMixSF)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v3/dreambooth"

payload = json.dumps({
    "key": "",
    "model_id": "chilloutmixsf",
    "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
AshiNLP/Bert_model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-01T04:35:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 8.01 +/- 2.48
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Shilash/q-learning-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
AshtonBenson/DialoGPT-small-quentin
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-01T04:43:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mic
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: mic
type: mic
config: MIC
split: test
args: MIC
metrics:
- name: Accuracy
type: accuracy
value: 0.685478199718706
- name: F1
type: f1
value: 0.6010172314630763
- name: Precision
type: precision
value: 0.6053034619227594
- name: Recall
type: recall
value: 0.5973741698626668
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the mic dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0634
- Accuracy: 0.6855
- F1: 0.6010
- Precision: 0.6053
- Recall: 0.5974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7221 | 1.0 | 5689 | 0.7082 | 0.6943 | 0.5470 | 0.6322 | 0.5336 |
| 0.6334 | 2.0 | 11378 | 0.7100 | 0.7048 | 0.5810 | 0.6452 | 0.5573 |
| 0.5185 | 3.0 | 17067 | 0.7709 | 0.6968 | 0.6057 | 0.6162 | 0.6001 |
| 0.3962 | 4.0 | 22756 | 0.8961 | 0.6881 | 0.6050 | 0.6091 | 0.6014 |
| 0.2962 | 5.0 | 28445 | 1.0634 | 0.6855 | 0.6010 | 0.6053 | 0.5974 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.0
- Tokenizers 0.12.1
|
Atlasky/Turkish-Negator
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- zh
license: apache-2.0
tags:
- chinese poem
- 中文
- 写诗
- 唐诗
- 宋词
widget:
- text: "作诗:百花</s>模仿:李清照"
---
# A fun Chinese AI poetry-writing model, V2
- For the 2022 V1 model, see https://huggingface.co/hululuzhu/chinese-poem-t5-mengzi-finetune
- Two modes for imitating classical Tang/Song poetry
  - Generic style: input format `作诗:your title`, e.g. `作诗:秋思`
  - Poet-imitation style: input format `作诗:your title</s>模仿:Tang/Song poet name`, e.g. `作诗:秋思</s>模仿:李清照`
- If you want to try it
  - If you have your own GPU environment, see my [example code](https://huggingface.co/hululuzhu/chinese-poem-t5-mengzi-finetune#%E8%BF%90%E8%A1%8C%E4%BB%A3%E7%A0%81%E7%A4%BA%E4%BE%8B) on Hugging Face
  - Or use Google Colab: play with my T5 poetry model through this [simplified Colab](https://colab.research.google.com/github/hululuzhu/chinese-ai-writing-share/blob/main/inference/2022_simple_poem_inference_huggingface.ipynb)
- For the training code, see [my GitHub repo](https://github.com/hululuzhu/chinese-ai-writing-share)
- For some background and discussion, see my [slides](https://github.com/hululuzhu/chinese-ai-writing-share/tree/main/slides)
## Architecture
- Pretrained base: [Langboat's Mengzi T5](https://huggingface.co/Langboat/mengzi-t5-base)
- Fine-tuned for 5 epochs
## Data sources
- Tang and Song poetry from https://github.com/chinese-poetry/chinese-poetry
- The 2023 T5 setup covers `title -> poem` as well as `title + poet -> poem`
- Titles are capped at 12 tokens, poet names at 4 tokens, and poems at 32 tokens, ending with a full stop; see the notebook under training for details
## Language support
- Simplified Chinese by default
- The 2023 T5 inference supports Traditional Chinese; set `is_input_traditional_chinese=True`
- To train a Traditional Chinese model, find `chinese_converter.to_simplified` and change it to `chinese_converter.to_traditional`
## Training
- Trained on Google Colab Pro ($9.99!)
- T5 uses simplet5 (a PyTorch + Hugging Face wrapper) and was trained on an A100 GPU; training took ~2 hours
## Inference code example
```python
# Install the following two packages for text handling and model generation
# !pip install -q simplet5
# !pip install -q chinese-converter

# Main code
import torch
from simplet5 import SimpleT5
from transformers import T5Tokenizer, T5ForConditionalGeneration
import chinese_converter
MODELS = {
# id: (hf_path, max_len)
'2022-v1': ("hululuzhu/chinese-poem-t5-mengzi-finetune", 64),
'2023-v2': ("hululuzhu/chinese-poem-t5-v2", 32)
}
MODEL_VERSION = '2023-v2' # @param ["2023-v2", "2022-v1"]
# Huggingface model card
MODEL_PATH = MODELS[MODEL_VERSION][0]
class PoemModel(SimpleT5):
def __init__(self) -> None:
super().__init__()
self.device = torch.device("cuda")
def load_my_model(self):
self.tokenizer = T5Tokenizer.from_pretrained(MODEL_PATH)
self.model = T5ForConditionalGeneration.from_pretrained(MODEL_PATH)
AUTHOR_PROMPT = "模仿:"
TITLE_PROMPT = "作诗:"
EOS_TOKEN = '</s>'
poem_model = PoemModel()
poem_model.load_my_model()
poem_model.model = poem_model.model.to('cuda')
MAX_AUTHOR_CHAR = 4
MAX_TITLE_CHAR = 12
MIN_CONTENT_CHAR = 10
MAX_CONTENT_CHAR = MODELS[MODEL_VERSION][1]
def poem(title_str, opt_author=None, model=poem_model,
is_input_traditional_chinese=False,
num_beams=2):
model.model = model.model.to('cuda')
if opt_author:
in_request = TITLE_PROMPT + title_str[:MAX_TITLE_CHAR] + EOS_TOKEN + AUTHOR_PROMPT + opt_author[:MAX_AUTHOR_CHAR]
else:
in_request = TITLE_PROMPT + title_str[:MAX_TITLE_CHAR]
if is_input_traditional_chinese:
in_request = chinese_converter.to_simplified(in_request)
out = model.predict(in_request,
max_length=MAX_CONTENT_CHAR,
num_beams=num_beams)[0].replace(",", ",")
if is_input_traditional_chinese:
out = chinese_converter.to_traditional(out)
print(f"標題: {in_request.replace('</s>', ' ')}\n詩歌: {out}")
else:
print(f"标题: {in_request.replace('</s>', ' ')}\n诗歌: {out}")
```
## Simplified Chinese examples
```
for title in ['秋思', "百花", '佳人有约']:
# Empty author means general style
for author in ['', "杜甫", "李白", "李清照", "苏轼"]:
poem(title, author)
print()
标题: 作诗:秋思
诗歌: 秋风吹我衣,落叶满庭除。老去心更苦,愁来鬓已疎。
标题: 作诗:秋思 模仿:杜甫
诗歌: 秋风吹我衣,落叶满庭除。客子思乡泪,故人伤远书。
标题: 作诗:秋思 模仿:李白
诗歌: 秋风吹我衣,飒飒满庭树。忆得故园花,今朝已零落。
标题: 作诗:秋思 模仿:李清照
诗歌: 秋风吹我衣,落叶满庭除。天高鸿雁少,日短萤火疎。
标题: 作诗:秋思 模仿:苏轼
诗歌: 秋风吹我衣,飒飒吹我衣。出门无所诣,但觉天宇低。
标题: 作诗:百花
诗歌: 百花头上开,春色爲谁来。欲识春风面,先教花上开。
标题: 作诗:百花 模仿:杜甫
诗歌: 百花开尽见春归,红紫纷纷照眼稀。莫道花时无赏处,且留樽酒对芳菲。
标题: 作诗:百花 模仿:李白
诗歌: 百花开尽见春归,谁把芳菲比玉池。若使东君无别意,春风应解惜花枝。
标题: 作诗:百花 模仿:李清照
诗歌: 百花头上开,春色爲谁来。欲识春风面,先教桃李开。
标题: 作诗:百花 模仿:苏轼
诗歌: 百花头上开,百草头边出。春风吹不断,尽逐东风去。
标题: 作诗:佳人有约
诗歌: 佳人有约在烟汀,相约花前共醉醒。莫道人间春色晚,隔帘应是笑谈声。
标题: 作诗:佳人有约 模仿:杜甫
诗歌: 佳人有约在江干,万里相随入夢寒。玉笛夜吹明月下,金杯春泛水晶盘。
标题: 作诗:佳人有约 模仿:李白
诗歌: 佳人有约在烟汀,玉颜金面映红英。天边月下吹笙处,疑是瑶池旧主人。
标题: 作诗:佳人有约 模仿:李清照
诗歌: 佳人有约在烟汀,相约花前共醉醒。莫道春来无约到,隔帘应是月中听。
标题: 作诗:佳人有约 模仿:苏轼
诗歌: 佳人有约在烟汀,玉佩金鱼照碧浔。应是仙家好风景,夜来花下弄潺湲。
# Try different beams
for title in ['冬雪']:
for author in ['', "杜甫"]:
for num_beams in (2, 3, 5, 10, 20, 50, 100, 200):
print(f"num beams: {num_beams}")
poem(title, author, num_beams=num_beams)
print("-"*80)
num beams: 2
标题: 作诗:冬雪
诗歌: 冬雪未全消,春寒犹未回。山空云气重,天阔水光开。
num beams: 3
标题: 作诗:冬雪
诗歌: 冬雪未成雪,春寒犹未回。山空云气重,江阔水声来。
num beams: 5
标题: 作诗:冬雪
诗歌: 冬雪未全消,春寒犹未回。山空云气重,江阔水声来。
num beams: 10
标题: 作诗:冬雪
诗歌: 冬雪未成雪,北风先着人。寒威欺病骨,老色逼衰身。
num beams: 20
标题: 作诗:冬雪
诗歌: 冬雪未成雪,北风先作威。山高云气重,江阔水声微。
num beams: 50
标题: 作诗:冬雪
诗歌: 冻云凝不散,寒日淡无光。夜半风号屋,朝来雪满堂。
num beams: 100
标题: 作诗:冬雪
诗歌: 朔风吹雪满山城,万壑千岩冻不鸣。夜半忽闻檐溜响,晓来还见瓦沟平。
num beams: 200
标题: 作诗:冬雪
诗歌: 去年冬雪未全消,今岁春冰犹未消。山色不随人意改,江声长送雁声遥。
--------------------------------------------------------------------------------
num beams: 2
标题: 作诗:冬雪 模仿:杜甫
诗歌: 冬雪未全消,春寒犹未销。山城迷远道,江路入重霄。
num beams: 3
标题: 作诗:冬雪 模仿:杜甫
诗歌: 冬雪未全落,春寒犹未回。江城风日好,山寺雨声来。
num beams: 5
标题: 作诗:冬雪 模仿:杜甫
诗歌: 冬雪未成雪,春寒犹着人。江天无定色,风日有微尘。
num beams: 10
标题: 作诗:冬雪 模仿:杜甫
诗歌: 朔风吹雪满江城,客子衣裘不自温。夜半忽闻檐溜响,晓来还见瓦沟浑。
num beams: 20
标题: 作诗:冬雪 模仿:杜甫
诗歌: 朔风吹雪满江城,万木号风急霰声。老去不知身是客,乱来唯觉鬓成丝。
num beams: 50
标题: 作诗:冬雪 模仿:杜甫
诗歌: 朔风吹雪满江城,万壑千岩冻未平。夜半忽惊飞霰急,晓来还作打窗声。
num beams: 100
标题: 作诗:冬雪 模仿:杜甫
诗歌: 朔风吹雪满江城,万壑千岩冻未平。夜半忽惊飞霰急,晓来还作打窗声。
num beams: 200
标题: 作诗:冬雪 模仿:杜甫
诗歌: 朔风吹雪满江城,万壑千岩冻未平。夜半忽惊飞霰急,晓来还作打窗声。
```
## Traditional Chinese examples
```
for title in ['春節', "中秋", "春秋战国"]:
# Empty author means general style
for author in ['', "杜甫", "李白", "李清照", "蘇軾"]:
poem(title, author, is_input_traditional_chinese=True)
print()
標題: 作诗:春节
詩歌: 節物今朝是,年光此際同。年華驚歲換,身世逐時窮。
標題: 作诗:春节 模仿:杜甫
詩歌: 節物今朝是,春光此夜同。江城聞鼓角,野寺見燒紅。
標題: 作诗:春节 模仿:李白
詩歌: 節物今朝是,春光此夜同。柳條初弄色,梅蕊未藏紅。
標題: 作诗:春节 模仿:李清照
詩歌: 節物今朝是,年光此際同。柳條新染綠,梅蕊未藏紅。
標題: 作诗:春节 模仿:苏轼
詩歌: 節物今朝是,春光此夜同。老來多感事,老去少知功。
標題: 作诗:中秋
詩歌: 秋色今宵半,江天此夜深。雲收山吐月,風送水浮金。
標題: 作诗:中秋 模仿:杜甫
詩歌: 秋色今宵盡,江天萬里長。雲收山氣白,風送水聲涼。
標題: 作诗:中秋 模仿:李白
詩歌: 月色秋來好,人言此夜奇。桂華清似水,桂魄冷於泥。
標題: 作诗:中秋 模仿:李清照
詩歌: 秋色今宵半,清光此夜分。月從天上出,人向世間聞。
標題: 作诗:中秋 模仿:苏轼
詩歌: 中秋月色好,況復是中秋。露重珠猶溼,風高葉未收。
標題: 作诗:春秋战国
詩歌: 國破人亡國亦亡,君王何事獨稱王。當時若使無張許,誰信賢哉是魯王。
標題: 作诗:春秋战国 模仿:杜甫
詩歌: 國破人亡國亦亡,君王何事獨稱王。當時若使無張許,天下安知有範滂。
標題: 作诗:春秋战国 模仿:李白
詩歌: 吳越爭雄勢已分,君王何事更相君。若教國士輕天下,肯信賢人有異聞。
標題: 作诗:春秋战国 模仿:李清照
詩歌: 東門西去是通津,誰信君王不識真。若使魯儒輕國士,肯教吳客作諸侯。
標題: 作诗:春秋战国 模仿:苏轼
詩歌: 天下兵戈尚未休,豈知今日是良謀。君王若問當時事,不道今朝有許愁。
```
|
Augustvember/WokkaBot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-01T05:42:03Z |
---
license: bsd-3-clause
---
# CodeGen (CodeGen-NL 16B)
## Sharded version of CodeGen
This model was sharded using torch.float16. Use the code below to load it; configure the `device_map` for your GPU/CPU split.
First pull the model.
```bash
git clone https://huggingface.co/abacaj/codegen-16B-nl-sharded
cd codegen-16B-nl-sharded
git-lfs install
git pull
```
```python
import torch
from accelerate import infer_auto_device_map, init_empty_weights, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM


def load_model_sharded():
config = AutoConfig.from_pretrained("abacaj/codegen-16B-nl-sharded")
tokenizer = AutoTokenizer.from_pretrained("abacaj/codegen-16B-nl-sharded")
with init_empty_weights():
model = AutoModelForCausalLM.from_config(config)
device_map = infer_auto_device_map(
model,
max_memory={
0: "20GiB",
"cpu": "110GiB",
},
dtype=torch.float16,
no_split_module_classes=["CodeGenBlock"])
model = load_checkpoint_and_dispatch(
model,
dtype=torch.float16,
checkpoint="codegen-16B-nl-sharded",
device_map=device_map,
).eval()
return model, tokenizer
```
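Once loaded, generation follows the usual causal-LM pattern; the prompt and generation settings below are illustrative:
```python
import torch

# Continues from load_model_sharded() above; GPU 0 holds the first layers per max_memory.
model, tokenizer = load_model_sharded()

inputs = tokenizer("def hello_world():", return_tensors="pt").to(0)
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```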
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-NL 16B** in the paper, where "NL" means it is pre-trained on the Pile and "16B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-NL 16B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models are trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts and computing their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-16B-nl")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
Augustvember/WokkaBot2
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-01T05:42:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
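Since no usage snippet is given, a minimal question-answering call would look like the hedged sketch below; the repo id is a placeholder for wherever this checkpoint is hosted:
```python
from transformers import pipeline

# "your-username/my_awesome_qa_model" is a placeholder repo id.
qa = pipeline("question-answering", model="your-username/my_awesome_qa_model")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```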
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 400 | 3.4608 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Augustvember/WokkaBot6
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-01T06:01:50Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: mojoee/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Augustvember/wokka2
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | 2023-03-01T06:19:35Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- KoddaDuck/autotrain-data-text-summa
co2_eq_emissions:
emissions: 0.00490034117291842
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 38210101164
- CO2 Emissions (in grams): 0.0049
## Validation Metrics
- Loss: 2.370
- Rouge1: 28.928
- Rouge2: 11.010
- RougeL: 21.951
- RougeLsum: 22.232
- Gen Len: 15.900
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/KoddaDuck/autotrain-text-summa-38210101164
```
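The same request in Python, as a convenience (a direct translation of the cURL call above; the token is a placeholder):
```python
import requests

# YOUR_HUGGINGFACE_API_KEY is a placeholder for your actual token.
API_URL = "https://api-inference.huggingface.co/KoddaDuck/autotrain-text-summa-38210101164"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```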
|
Awsaf/large-eren
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Vi-test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vi-test1
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Axon/resnet18-v1
|
[
"dataset:ImageNet",
"arxiv:1512.03385",
"Axon",
"Elixir",
"license:apache-2.0"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.94
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1123
- Accuracy: 0.94
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0212 | 1.0 | 318 | 0.6302 | 0.7303 |
| 0.4836 | 2.0 | 636 | 0.2955 | 0.8765 |
| 0.2603 | 3.0 | 954 | 0.1814 | 0.9184 |
| 0.1795 | 4.0 | 1272 | 0.1439 | 0.9294 |
| 0.1464 | 5.0 | 1590 | 0.1294 | 0.9348 |
| 0.1312 | 6.0 | 1908 | 0.1213 | 0.94 |
| 0.1218 | 7.0 | 2226 | 0.1171 | 0.9390 |
| 0.1163 | 8.0 | 2544 | 0.1144 | 0.9403 |
| 0.113 | 9.0 | 2862 | 0.1128 | 0.94 |
| 0.1118 | 10.0 | 3180 | 0.1123 | 0.94 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Ayato/DialoGTP-large-Yuri
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-chinese-finetuned-ner_0301_J_DATA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-ner_0301_J_DATA
This model is a fine-tuned version of [ckiplab/bert-base-chinese-ner](https://huggingface.co/ckiplab/bert-base-chinese-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0764
- Precision: 0.9663
- Recall: 0.9708
- F1: 0.9685
- Accuracy: 0.9925
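For a quick smoke test of the tagger, a hedged sketch using the token-classification pipeline; the repo id below is a placeholder for wherever this checkpoint is hosted:
```python
from transformers import pipeline

# Placeholder repo id; substitute the actual checkpoint location.
ner = pipeline(
    "token-classification",
    model="your-username/bert-base-chinese-finetuned-ner_0301_J_DATA",
    aggregation_strategy="simple",
)
print(ner("王小明在台北市立動物園工作。"))
```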
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3872 | 1.0 | 705 | 0.1222 | 0.9088 | 0.9311 | 0.9198 | 0.9781 |
| 0.0732 | 2.0 | 1410 | 0.0642 | 0.9303 | 0.9509 | 0.9405 | 0.9900 |
| 0.034 | 3.0 | 2115 | 0.0588 | 0.9616 | 0.9661 | 0.9639 | 0.9909 |
| 0.0267 | 4.0 | 2820 | 0.0631 | 0.9639 | 0.9673 | 0.9656 | 0.9925 |
| 0.0232 | 5.0 | 3525 | 0.0617 | 0.9630 | 0.9720 | 0.9674 | 0.9924 |
| 0.017 | 6.0 | 4230 | 0.0652 | 0.9674 | 0.9708 | 0.9691 | 0.9926 |
| 0.0123 | 7.0 | 4935 | 0.0573 | 0.9618 | 0.9720 | 0.9669 | 0.9923 |
| 0.009 | 8.0 | 5640 | 0.0667 | 0.9651 | 0.9696 | 0.9674 | 0.9922 |
| 0.0055 | 9.0 | 6345 | 0.0768 | 0.9640 | 0.9696 | 0.9668 | 0.9925 |
| 0.0045 | 10.0 | 7050 | 0.0775 | 0.9662 | 0.9696 | 0.9679 | 0.9925 |
| 0.004 | 11.0 | 7755 | 0.0753 | 0.9606 | 0.9685 | 0.9645 | 0.9923 |
| 0.0018 | 12.0 | 8460 | 0.0735 | 0.9629 | 0.9696 | 0.9662 | 0.9925 |
| 0.0019 | 13.0 | 9165 | 0.0754 | 0.9663 | 0.9708 | 0.9685 | 0.9927 |
| 0.0019 | 14.0 | 9870 | 0.0760 | 0.9651 | 0.9696 | 0.9674 | 0.9925 |
| 0.0013 | 15.0 | 10575 | 0.0764 | 0.9663 | 0.9708 | 0.9685 | 0.9925 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.13.0+cu117
- Datasets 2.8.0
- Tokenizers 0.12.1
|
Ayham/bert_gpt2_summarization_xsum
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:xsum",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="dp66/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Ayham/bert_roberta_summarization_cnn_dailymail
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
thumbnail: https://i.imgur.com/cZMzjI6.png
license: creativeml-openrail-m
datasets:
- Korakoe/OpenNiji-V2-Dataset
language:
- en
pipeline_tag: text-to-image
tags:
- OpenNiji
- Stable Diffusion
- Anime
- Niji
- Nijijourney
- Stylised
---

# OpenNiji-V2
The **NEW** Stable Diffusion model trained on **180k** Nijijourney images!
## Acknowledgements
- [SD-Silicon - Xynon](https://huggingface.co/Xynon/SD-Silicon)
- [Nijijourney - Spellbrush](https://nijijourney.com/en/)
- [Kohya Trainer - bmaltais](https://github.com/bmaltais/kohya_ss)
## Results

```
1girl, eyes closed, slight smile, underwater, water bubbles, reflection, long light brown hair, bloom, depth of field, bokeh
```

```
masterpiece, best quality, 1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewellery, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt
```

```
1girl, looking at viewer, (highly detailed), (realistic), reflections (transparent) iridescent opaque jacket, long transparent iridescent hair, bloom, depth of field, bokeh, cinematic lighting, dynamic pose, (full body), ((ultra realistic perfect face))
```
## Dataset
Due to the size of the dataset (and how it was scraped), there are quite a few broken images and images without prompts; we plan to resolve these issues and release the dataset eventually. The dataset is 196 GB in size and took one night of scraping all the Nijijourney image-generation channels.
## Small Note
This model already has the in01 trick applied, so it should be better at generating hands!
- (This is not going to work 100% of the time, and manual hand fixes may be required)
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Ayham/bertgpt2_cnn
|
[
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: odahl/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
BSC-LT/roberta-base-biomedical-clinical-es
|
[
"pytorch",
"roberta",
"fill-mask",
"es",
"arxiv:2109.03570",
"arxiv:2109.07765",
"transformers",
"biomedical",
"clinical",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 27 | null |
---
license: openrail++
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---

<sub>studio photo closeup portrait victorian (woman1-420:1.3) with blue eyes and red hair wearing intricate silver metal crystal medieval armour (sitting inside a castle:1.3), black victorian attire, rembrandt light, zbrush, (black background:1.7), glossy, rtx, reflections, soft light, soft shadows, dramatic lighting, atmospheric, global illumination, unreal, octane, (two tone lighting:1.5), (cyan light:1.4), alphonse mucha, bokeh
Negative prompt: nfixernext, nfixer, nfixernext, nfixer, nfixernext, nfixer,hands, arms, illustration, fake, cgi, drawing, miniature, blocky, angular, glasses, (large eyes:1.3), freckles, face paint, mask, glasses, tattoos
Steps: 120, Sampler: Euler a, CFG scale: 3, Seed: 201306749, Size: 1024x1024, Model hash: 639d0db70f, Denoising strength: 0.3, ENSD: 3, Mask blur: 4, SD upscale overlap: 64, SD upscale upscaler: LDSR</sub>
# Illuminati Diffusion v1.0
Illuminati Diffusion is a latent text-to-image diffusion model that has been conditioned on high aesthetic synthetic images through fine-tuning. It was trained on 82,000 images locally on my PC with a single 3090ti, taking over 100 hours.
- [Illuminati Diffusion v1.0 Safetensors](https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/blob/main/illuminati_diffusion_v1.0.safetensors): The model file.
- [Illuminati Diffusion v1.0 Inference Config](https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/raw/main/illuminati_diffusion_v1.0.yaml): A file included to allow for inference with Automatic's WebUI and with the original Stable Diffusion codebase. (right click > save target as/link as)
- [Illuminati Diffusion v1.0 supplementary TI embeddings](https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/tree/main/embeds): A series of both positive and negative embeds. nfixer is recommended for all gens.
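For local inference, a minimal `diffusers` sketch is given below. It is untested against this repo and assumes a recent `diffusers` release with `from_single_file` support; the prompt, steps, and CFG scale loosely mirror the sample settings above.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the single-file safetensors checkpoint straight from the Hub
# (from_single_file requires a recent diffusers version -- an assumption here).
pipe = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/blob/main/illuminati_diffusion_v1.0.safetensors",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "studio photo closeup portrait, dramatic lighting, soft shadows",
    negative_prompt="illustration, drawing, cgi",
    num_inference_steps=30,
    guidance_scale=3,
).images[0]
image.save("illuminati_sample.png")
```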
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Note: the hosted inference API does not work with this model, because it uses safetensors inside the diffusers model, which does not appear to be compatible with Hugging Face's API; the model does work correctly in any software that supports this format.
If you enjoy this model, please consider supporting me on [Patreon](https://patreon.com/user?u=55366974).
[](https://patreon.com/user?u=55366974)
In order to reach us, you can join our [Discord server](https://discord.gg/HqdffGgeBa).
[](https://discord.gg/HqdffGgeBa)
Follow me on my [Twitter page](https://twitter.com/cac0e).
|
BSC-LT/roberta-base-bne
|
[
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:bne",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"license:apache-2.0",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 594 | 2023-03-01T09:29:40Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BSC-LT/roberta-large-bne-sqac
|
[
"pytorch",
"roberta",
"question-answering",
"es",
"dataset:BSC-TeMU/SQAC",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"qa",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: RL3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ruescog/RL3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
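After loading, you can roll out the greedy policy. The sketch below assumes the pickled dict also exposes a `qtable` key alongside `env_id` (as in the Deep RL course notebooks) and the classic gym API with a 4-tuple `step`:
```python
import gym
import numpy as np

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()  # classic gym API; gymnasium returns (obs, info) instead
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
```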
|
BSen/wav2vec2-base-timit-demo-colab
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
language:
- vi
metrics:
- f1
pipeline_tag: token-classification
tags:
- transformer
- vietnamese
- nlp
- bert
- deberta
- deberta-v3
---
# ViDeBERTa: A powerful pre-trained language model for Vietnamese
ViDeBERTa is a new pre-trained monolingual language model for Vietnamese, released in three versions - ViDeBERTa_xsmall, ViDeBERTa_base, and ViDeBERTa_large - all pre-trained on 138GB of high-quality, diverse Vietnamese text using the DeBERTaV3 architecture.
Please check the [official repository][github] for more implementation details and updates.
The ViDeBERTa_xsmall model comes with 12 layers and a hidden size of 384. It has only 22M backbone parameters, with a vocabulary of 128K tokens that adds 48M parameters in the embedding layer. The model was trained on the CC100 dataset, which consists of 138 GB of Vietnamese text.
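A minimal feature-extraction sketch with `transformers` is shown below; the checkpoint id is assumed from the official repository, so verify the exact released ids there.
```python
from transformers import AutoTokenizer, AutoModel

model_name = "Fsoft-AIC/videberta-base"  # assumed checkpoint id; check the official repository
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Hà Nội là thủ đô của Việt Nam.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```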
## Fine-tuning on NLU tasks
We present the dev results on the VLSP POS, PhoNER, and ViQuAD datasets.
| Model | #Params (M) | POS | NER | MRC |
|----------------------|-------------|----------|----------|----------|
| XLM-R-base | 125 | 96.2 | - | 82.0 |
| XLM-R-large | 355 | 96.3 | 93.8 | 87.0 |
| PhoBERT-base | 135 | 96.7 | - | 80.1 |
| PhoBERT-large | 370 | 96.8 | - | 83.5 |
| ViT5-base | 310 | - | 94.5 | - |
| ViT5-large | 866 | - | 93.8 | - |
| **ViDeBERTa-xsmall** | **22** | **96.4** | **93.6** | **81.3** |
| ViDeBERTa-base | 86 | 96.8 | 94.5 | 85.7 |
| ViDeBERTa-large | 304 | 97.2 | 95.3 | 89.9 |
## Citation
If you find ViDeBERTa useful for your work, please cite the following paper:
```latex
@article{dao2023videberta,
title={ViDeBERTa: A powerful pre-trained language model for Vietnamese},
author={Dao Tran, Cong and Pham, Nhut Huy and Nguyen, Anh and Son Hy, Truong and Vu, Tu},
journal={arXiv e-prints},
pages={arXiv--2301},
year={2023}
}
```
[github]: https://github.com/HySonLab/ViDeBERTa
|
BW/TEST
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
library_name: stable-baselines3
tags:
- assembly-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: assembly-v2
type: assembly-v2
metrics:
- type: mean_reward
value: 1785.32 +/- 219.21
name: mean_reward
verified: false
---
# **SAC** Agent playing **assembly-v2**
This is a trained model of a **SAC** agent playing **assembly-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo sac --env assembly-v2 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo sac --env assembly-v2 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo sac --env assembly-v2 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo sac --env assembly-v2 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo sac --env assembly-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo sac --env assembly-v2 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 500),
('buffer_size', 500000),
('ent_coef', 'auto'),
('gamma', 0.99),
('gradient_steps', -1),
('learning_rate', 0.0003),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(log_std_init=-3, net_arch=[256, 256])'),
('tau', 0.005),
('train_freq', [1, 'episode']),
('normalize', False)])
```
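For reference, here is a sketch of how the Zoo hyperparameters above map onto a raw SB3 `SAC` constructor. Since `assembly-v2` is a Meta-World task that needs the `metaworld` package, a standard Gym environment stands in below purely for illustration.
```python
import gym
from stable_baselines3 import SAC

env = gym.make("Pendulum-v1")  # stand-in env; the real assembly-v2 requires metaworld

model = SAC(
    "MlpPolicy",
    env,
    batch_size=500,
    buffer_size=500_000,
    ent_coef="auto",
    gamma=0.99,
    gradient_steps=-1,
    learning_rate=3e-4,
    learning_starts=10_000,
    policy_kwargs=dict(log_std_init=-3, net_arch=[256, 256]),
    tau=0.005,
    train_freq=(1, "episode"),
)
model.learn(total_timesteps=1_000_000)
```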
|
Bagus/SER-LSSED
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model_id: mxbonn/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
BalajiSathesh/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | 2023-03-01T10:00:06Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: ja
datasets:
- lmqg/qg_jaquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。"
example_title: "Question Answering Example 1"
- text: "question: 1968年に開催されたオリンピックの名前は何ですか?, context: オリンピックが世界的大イベントに成長するに従って政治に左右されるようになると、1968年のメキシコシティ大会では黒人差別を訴える場と化し、1972年のミュンヘン大会ではアラブのゲリラによるイスラエル選手に対するテロ事件まで起きた(ミュンヘンオリンピック事件)。1976年のモントリオール大会になると、ニュージーランドのラグビーチームの南アフリカ遠征に反対してアフリカの諸国22ヶ国がボイコットを行った。そして、1980年のモスクワ大会ではソ連のアフガニスタン侵攻に反発したアメリカ・西ドイツ・日本などの西側諸国が相次いでボイコットを行った。1984年ロサンゼルス大会ではソ連と東側諸国が報復ボイコットを行ない、参加したのはソ連と対立していた中国とルーマニアだけだった。中でも、イラン革命後のイラン・イスラム共和国はモスクワとロサンゼルス双方のオリンピックをボイコットしている。オリンピックが巨大化するに従って財政負担の増大が大きな問題となり、1976年の夏季大会では大幅な赤字を出し、その後夏季・冬季とも立候補都市が1〜2都市だけという状態が続いた。"
example_title: "Question Answering Example 2"
model-index:
- name: lmqg/mt5-small-jaquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_jaquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 0.0
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 63.77
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 49.75
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 96.29
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 88.92
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 65.7
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 65.7
---
# Model Card of `lmqg/mt5-small-jaquad-qa`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question answering task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** ja
- **Training data:** [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="ja", model="lmqg/mt5-small-jaquad-qa")
# model prediction
answers = model.answer_q(list_question="新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?", list_context=" 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-small-jaquad-qa")
output = pipe("question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現都営地下鉄新宿線、以下新宿線と表記する)乗入構想により、京王線の利用客増加が見込まれ、相当数の車両を準備する必要に迫られるなか、製造費用、保守費用を抑えた新型車両として6000系が構想された。新宿線建設に際してはすでに1号線(後の浅草線)を1,435mm軌間で開業させていた東京都は京成電鉄と1号線との乗り入れにあたり京成電鉄の路線を1,372mmから1,435mmに改軌させた事例や、1,372mm軌間の特殊性から運輸省(当時、2001年から国土交通省)と共に京王にも改軌を求めたが、改軌工事中の輸送力確保が困難なことを理由に改軌しないことで決着している。")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-jaquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_jaquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 65.7 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| AnswerF1Score | 65.7 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| BERTScore | 96.29 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_1 | 61.42 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_2 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_3 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| Bleu_4 | 0 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| METEOR | 49.75 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| MoverScore | 88.92 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
| ROUGE_L | 63.77 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_jaquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 14
- batch: 16
- lr: 0.0006
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-jaquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Banshee/LukeSkywalker
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: SarvasvaK/ML-Agents-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Banshee/dialoGPT-small-luke
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Sarwar242/autotrain-data-fake-reviews-labelling
co2_eq_emissions:
emissions: 0.012510004345691475
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 37433101195
- CO2 Emissions (in grams): 0.0125
## Validation Metrics
- Loss: 0.204
- Accuracy: 0.941
- Precision: 0.975
- Recall: 0.905
- AUC: 0.992
- F1: 0.939
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Sarwar242/autotrain-fake-reviews-labelling-37433101195
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Sarwar242/autotrain-fake-reviews-labelling-37433101195", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Sarwar242/autotrain-fake-reviews-labelling-37433101195", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
# Map the logits to a predicted label
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```
|
Barleysack/AERoberta
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5252216970032684
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8586
- Matthews Correlation: 0.5252
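A minimal inference sketch follows; the Hub repo id is not stated in this card, so the id below is a placeholder.
```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub repository for this checkpoint.
classifier = pipeline("text-classification", model="<hub-user>/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read by the girl."))  # acceptability label with score
```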
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5293 | 1.0 | 535 | 0.5075 | 0.4325 |
| 0.3471 | 2.0 | 1070 | 0.5048 | 0.5060 |
| 0.2349 | 3.0 | 1605 | 0.5762 | 0.4979 |
| 0.1829 | 4.0 | 2140 | 0.7848 | 0.5093 |
| 0.1343 | 5.0 | 2675 | 0.8586 | 0.5252 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Barleysack/klue-roberta-LSTM
|
[
"pytorch",
"roberta",
"transformers"
] | null |
{
"architectures": [
"QAWithLSTMModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
license: apache-2.0
language:
- fr
tags:
- flan-t5
- qa
- lfqa
- information retrieval
datasets:
- vblagoje/lfqa
metrics:
- rouge
model-index:
- name: flan-t5-large-lfqa-fr
results: []
widget:
- text: >-
question: Comment fonctionne un modèle de langue ? Que signifi un modèle
de question réponse générative ? context : Les modèles de langage basés
sur le deep learning sont des modèles dapprentissage automatique qui
utilisent des techniques dapprentissage profond pour effectuer des tâches
de langage.En traitement automatique des langues, un modèle de langage est
un modèle statistique qui modélise la distribution de séquences de mots,
plus généralement de séquences de symboles discrets (lettres, phonèmes,
mots), dans une langue naturelle. Un modèle de langage peut par exemple
prédire le mot suivant une séquence de mots1.BERT, GPT-3 et Bloom sont des
modèles de langage.Les modèles de Question Réponse (QA) permette
d'automatiser la réponse aux questions fréquemment posées en utilisant une
base de connaissances (documents) comme contexte. Les réponses aux
questions des clients peuvent être tirées de ces documents.Il existe
différentes variantes de modèle de question réponse : question réponse
extractive : le modèle extrait la réponse d'un contexte. Le contexte ici
peut être un texte fourni, un tableau ou même du HTML ! Ceci est
généralement résolu avec des modèles de type BERT. question réponse
générative ouverte : le modèle génère du texte libre directement en
fonction du contexte. question réponse générative fermée : dans ce cas,
aucun contexte n'est fourni. La réponse est entièrement générée par un
modèle.Les modèles de langage basés sur le deep learning sont des modèles
dapprentissage automatique qui utilisent des techniques dapprentissage
profond pour effectuer des tâches de langage.En traitement automatique des
langues, un modèle de langage est un modèle statistique qui modélise la
distribution de séquences de mots, plus généralement de séquences de
symboles discrets (lettres, phonèmes, mots), dans une langue naturelle. Un
modèle de langage peut par exemple prédire le mot suivant une séquence de
mots.Les modèles de Question Réponse (QA) permette d'automatiser la
réponse aux questions fréquemment posées en utilisant une base de
connaissances (documents) comme contexte. Les réponses aux questions des
clients peuvent être tirées de ces documents.Il existe différentes
variantes de modèle de question réponse : question réponse extractive : le
modèle extrait la réponse d'un contexte. Le contexte ici peut être un
texte fourni, un tableau ou même du HTML ! Ceci est généralement résolu
avec des modèles de type BERT. question réponse générative ouverte : le
modèle génère du texte libre directement en fonction du contexte. question
réponse générative fermée : dans ce cas, aucun contexte n'est fourni. La
réponse est entièrement générée par un modèle.
example_title: Les modèles de langage
inference:
parameters:
max_length: 512
num_return_sequences: 1
min_length: 4
no_repeat_ngram_size: 4
do_sample: false
num_beams: 4
early_stopping: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-lfqa-fr
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on a subset (9,000 examples) of the vblagoje/lfqa dataset, translated automatically to French with the Helsinki-NLP/opus-mt-en-fr model.
The main task this model can perform is therefore abstractive question answering: given context paragraphs that can be used to answer a question, it generates a long-form answer.
It achieves the following results on the evaluation set:
- Loss: 2.7898
- Rouge1: 13.0836
- Rouge2: 1.9068
- Rougel: 10.8143
- Rougelsum: 10.6348
- Gen Len: 117.522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # `device` is used by generate() below
model_name = "hmahmoud/flan-t5-large-lfqa-fr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
query = "Comment fonctionne un modèle de langue ? Que signifi un modèle de question réponse générative ?"
document = "Les modèles de langage basés sur le deep learning sont des modèles dapprentissage automatique qui utilisent des techniques dapprentissage profond pour effectuer des tâches de langage.En traitement automatique des langues, un modèle de langage est un modèle statistique qui modélise la distribution de séquences de mots, plus généralement de séquences de symboles discrets (lettres, phonèmes, mots), dans une langue naturelle. Un modèle de langage peut par exemple prédire le mot suivant une séquence de mots1.BERT, GPT-3 et Bloom sont des modèles de langage.Les modèles de Question Réponse (QA) permette d'automatiser la réponse aux questions fréquemment posées en utilisant une base de connaissances (documents) comme contexte. Les réponses aux questions des clients peuvent être tirées de ces documents.Il existe différentes variantes de modèle de question réponse : question réponse extractive : le modèle extrait la réponse d'un contexte. Le contexte ici peut être un texte fourni, un tableau ou même du HTML ! Ceci est généralement résolu avec des modèles de type BERT. question réponse générative ouverte : le modèle génère du texte libre directement en fonction du contexte. question réponse générative fermée : dans ce cas, aucun contexte n'est fourni. La réponse est entièrement générée par un modèle.Les modèles de langage basés sur le deep learning sont des modèles dapprentissage automatique qui utilisent des techniques dapprentissage profond pour effectuer des tâches de langage.En traitement automatique des langues, un modèle de langage est un modèle statistique qui modélise la distribution de séquences de mots, plus généralement de séquences de symboles discrets (lettres, phonèmes, mots), dans une langue naturelle. Un modèle de langage peut par exemple prédire le mot suivant une séquence de mots.Les modèles de Question Réponse (QA) permette d'automatiser la réponse aux questions fréquemment posées en utilisant une base de connaissances (documents) comme contexte. Les réponses aux questions des clients peuvent être tirées de ces documents.Il existe différentes variantes de modèle de question réponse : question réponse extractive : le modèle extrait la réponse d'un contexte. Le contexte ici peut être un texte fourni, un tableau ou même du HTML ! Ceci est généralement résolu avec des modèles de type BERT. question réponse générative ouverte : le modèle génère du texte libre directement en fonction du contexte. question réponse générative fermée : dans ce cas, aucun contexte n'est fourni. La réponse est entièrement générée par un modèle."
query_and_docs = "question: {} context: {}".format(query, document)
model_input = tokenizer(query_and_docs, truncation=True, padding=True, return_tensors="pt")
generated_answers_encoded = model.generate(input_ids=model_input["input_ids"].to(device),
attention_mask=model_input["attention_mask"].to(device),
min_length=4,
max_length=512,
do_sample=False,
early_stopping=True,
num_beams=4,
temperature=None,
top_k=None,
top_p=None,
eos_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=4,
num_return_sequences=1)
answers = tokenizer.batch_decode(generated_answers_encoded, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(answers)
```
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Battlehooks/distilbert-base-uncased-finetuned-squad
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8651001731387583
- name: F1
type: f1
value: 0.8160291438979962
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2](https://huggingface.co/gokuls/bert_12_layer_model_v2) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3129
- Accuracy: 0.8651
- F1: 0.8160
- Combined Score: 0.8406
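A minimal inference sketch for QQP-style duplicate-question detection follows; the repo id is assumed from the model name and the label order is the usual GLUE QQP convention, so verify both on the Hub.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokuls/hBERTv2_qqp"  # assumed from the model name; verify on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # assumed label order: [not_duplicate, duplicate]
```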
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.4179 | 1.0 | 1422 | 0.3830 | 0.8252 | 0.7916 | 0.8084 |
| 0.2978 | 2.0 | 2844 | 0.3507 | 0.8357 | 0.7906 | 0.8131 |
| 0.2318 | 3.0 | 4266 | 0.3129 | 0.8651 | 0.8160 | 0.8406 |
| 0.1765 | 4.0 | 5688 | 0.3540 | 0.8700 | 0.8328 | 0.8514 |
| 0.1305 | 5.0 | 7110 | 0.4276 | 0.8734 | 0.8267 | 0.8500 |
| 0.1003 | 6.0 | 8532 | 0.4078 | 0.8748 | 0.8292 | 0.8520 |
| 0.0788 | 7.0 | 9954 | 0.4069 | 0.8767 | 0.8345 | 0.8556 |
| 0.0625 | 8.0 | 11376 | 0.4723 | 0.8760 | 0.8322 | 0.8541 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.10.1
- Tokenizers 0.13.2
|
BeIR/sparta-msmarco-distilbert-base-v1
|
[
"pytorch",
"distilbert",
"feature-extraction",
"arxiv:2009.13013",
"arxiv:2104.08663",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 106 | 2023-03-01T11:06:41Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="vieveks/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Bee-Garbs/DialoGPT-cartman-small
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: mxbonn/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Beelow/model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taki-v3-50000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="vieveks/taki-v3-50000", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
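To sanity-check the reported mean reward, you can run a short evaluation loop. The sketch below assumes the pickled dict also exposes a `qtable` key (as in the Deep RL course notebooks) and the classic gym API with a 4-tuple `step`:
```python
import gym
import numpy as np

env = gym.make(model["env_id"])
returns = []
for _ in range(100):
    state = env.reset()  # classic gym API; gymnasium returns (obs, info) instead
    done, total = False, 0
    while not done:
        state, reward, done, _ = env.step(int(np.argmax(model["qtable"][state])))
        total += reward
    returns.append(total)
print(np.mean(returns), "+/-", np.std(returns))
```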
|
BenQLange/HF_bot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taki-v3-500000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="vieveks/taki-v3-500000", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Beri/legal-qa
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 723.35 +/- 58.04
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
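Until the card is completed, here is a hedged sketch of the usual loading pattern. The repo id and filename below are placeholders, `AntBulletEnv-v0` needs `pybullet_envs` to be registered, and if training used `VecNormalize` you would also need the saved normalization statistics (an assumption here).
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename: substitute the actual Hub repository.
checkpoint = load_from_hub(repo_id="<hub-user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```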
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | 2023-03-01T11:43:02Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce1-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.20 +/- 12.16
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-03-01T11:43:30Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Vi-test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Vi-test3
This model is a fine-tuned version of [HuyenNguyen/Vi-test1](https://huggingface.co/HuyenNguyen/Vi-test1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 50
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|