modelId (string, 4–81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0–59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51–438k chars)
---|---|---|---|---|---|---
DeskDown/MarianMixFT_en-th
|
[
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: clickbait_spoiling_model_trial_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clickbait_spoiling_model_trial_1
This model is a fine-tuned version of [intanm/practice-ft-qa](https://huggingface.co/intanm/practice-ft-qa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
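The hyperparameters above map directly onto `transformers.TrainingArguments`; the sketch below is illustrative only (the `output_dir` name is an assumption, and the Adam betas/epsilon listed above are simply the library defaults).
```python
from transformers import TrainingArguments

# Illustrative sketch: output_dir is an assumption; other values mirror the list above.
training_args = TrainingArguments(
    output_dir="clickbait_spoiling_model_trial_1",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```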
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2621 | 1.0 | 1600 | 2.2366 |
| 2.0665 | 2.0 | 3200 | 2.2846 |
| 1.7666 | 3.0 | 4800 | 2.4600 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Despin89/test
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.32 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; point these at the actual upload of this agent.
checkpoint = load_from_hub(repo_id="<namespace>/<repo>", filename="<model>.zip")
model = A2C.load(checkpoint)
```
|
Dev-DGT/food-dbert-multiling
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
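For inference, a checkpoint like this can be wrapped in a `question-answering` pipeline; in the sketch below the repo id is a placeholder for wherever this model is actually published.
```python
from transformers import pipeline

# "<namespace>/bert-finetuned-squad" is a placeholder repo id.
qa = pipeline("question-answering", model="<namespace>/bert-finetuned-squad")
qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the squad dataset.",
)
```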
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Devmapall/paraphrase-quora
|
[
"pytorch",
"jax",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mit_restaurant
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-finetuned-mit-restaurant-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mit_restaurant
type: mit_restaurant
config: mit_restaurant
split: validation
args: mit_restaurant
metrics:
- name: Precision
type: precision
value: 0.776800439802089
- name: Recall
type: recall
value: 0.7983050847457627
- name: F1
type: f1
value: 0.7874059626636947
- name: Accuracy
type: accuracy
value: 0.9116093286947559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-mit-restaurant-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the mit_restaurant dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3210
- Precision: 0.7768
- Recall: 0.7983
- F1: 0.7874
- Accuracy: 0.9116
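As a usage sketch (not part of the original card), predictions can be served through a token-classification pipeline; the repo id below is a placeholder.
```python
from transformers import pipeline

# Placeholder repo id; substitute the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="<namespace>/distilbert-finetuned-mit-restaurant-ner",
    aggregation_strategy="simple",
)
ner("book a table for two at a cheap thai place near downtown")
```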
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6991 | 1.0 | 863 | 0.3478 | 0.7113 | 0.7684 | 0.7387 | 0.8994 |
| 0.2773 | 2.0 | 1726 | 0.3264 | 0.7533 | 0.7989 | 0.7754 | 0.9063 |
| 0.2164 | 3.0 | 2589 | 0.3137 | 0.7644 | 0.8045 | 0.7839 | 0.9121 |
| 0.1789 | 4.0 | 3452 | 0.3163 | 0.7755 | 0.7983 | 0.7867 | 0.9115 |
| 0.1573 | 5.0 | 4315 | 0.3210 | 0.7768 | 0.7983 | 0.7874 | 0.9116 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
DevsIA/Devs_IA
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
|
DheerajPranav/Dialo-GPT-Rick-bot
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271276491885971
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2076
- Accuracy: 0.927
- F1: 0.9271
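As an illustrative sketch (the repo id is a placeholder), emotion predictions can be obtained with a text-classification pipeline:
```python
from transformers import pipeline

# Placeholder repo id; point this at the published checkpoint.
classifier = pipeline(
    "text-classification",
    model="<namespace>/distilbert-base-uncased-finetuned-emotion",
)
classifier("I can't wait to see you this weekend!")
```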
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8128 | 1.0 | 250 | 0.3002 | 0.912 | 0.9097 |
| 0.2393 | 2.0 | 500 | 0.2076 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Dhruva/Interstellar
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Digakive/Hsgshs
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -111.40 +/- 50.72
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.5,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': 0.3,
 'repo_id': 'pittawat/LunarLander-v2-ppo',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Dilmk2/DialoGPT-small-harrypotter
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | null |
---
license: creativeml-openrail-m
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 6B
## Model description
Pygmalion 6B is a proof-of-concept dialogue model based on EleutherAI's [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B).
**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.
## Training data
The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.
## Training procedure
Model weights were initialized from the `uft-6b` ConvoGPT model made available in [this commit](https://huggingface.co/hakurei/convogpt/tree/41b67bfddb6cd97070ffddf708e9720c9cb8d224/6b-uft).
The model was then further fine-tuned on ~48.5 million tokens for ~5k steps on 4 NVIDIA A40s using DeepSpeed.
## Intended use
### The easy way
We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb).
### The manual way
The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [Your input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:
```
[CHARACTER]: [some dialogue here]
You: [your response to the dialogue above]
```
Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.
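As a minimal sketch of that prompt format in code (the `model_id` is a placeholder for wherever this checkpoint is hosted, and the persona and dialogue strings are invented for illustration):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder id; substitute the actual location of the Pygmalion 6B weights.
model_id = "<namespace>/pygmalion-6b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a prompt following the persona / <START> / dialogue-history format above.
prompt = (
    "Alice's Persona: Alice is a cheerful botanist who loves puns.\n"
    "<START>\n"
    "You: Hi Alice, how is the greenhouse doing?\n"
    "Alice:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=60, do_sample=True)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```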
## Known issues
We haven't played around with the model enough to enumerate them. Feel free to give us some feedback!
|
Dimedrolza/DialoGPT-small-cyberpunk
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- generated_from_trainer
datasets:
- xnli
metrics:
- accuracy
- f1
model-index:
- name: bert-base-arabic-camelbert-msa-sixteenth-xnli-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: xnli
type: xnli
config: ar
split: train
args: ar
metrics:
- name: Accuracy
type: accuracy
value: 0.767065868263473
- name: F1
type: f1
value: 0.767539058869847
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-arabic-camelbert-msa-sixteenth-xnli-finetuned
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the xnli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5796
- Accuracy: 0.7671
- F1: 0.7675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
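In `TrainingArguments` terms, the total train batch size of 32 comes from a per-device batch of 1 accumulated over 32 steps; a minimal sketch (the `output_dir` name is an assumption) could look like:
```python
from transformers import TrainingArguments

# Sketch only: output_dir is an assumption; the other values mirror the list above.
training_args = TrainingArguments(
    output_dir="bert-base-arabic-camelbert-msa-sixteenth-xnli-finetuned",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=32,  # 1 x 32 = total train batch size of 32
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed-precision training
)
```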
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5804 | 1.0 | 12271 | 0.5796 | 0.7671 | 0.7675 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
DingleyMaillotUrgell/homer-bot
|
[
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: clickbait_spoiling_model_trial_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clickbait_spoiling_model_trial_2
This model is a fine-tuned version of [intanm/practice-ft-qa](https://huggingface.co/intanm/practice-ft-qa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 3.5171 |
| No log | 2.0 | 400 | 3.3595 |
| 3.442 | 3.0 | 600 | 3.3760 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
DivyanshuSheth/T5-Seq2Seq-Final
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 934.59 +/- 246.29
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; point these at the actual upload of this agent.
checkpoint = load_from_hub(repo_id="<namespace>/<repo>", filename="<model>.zip")
model = A2C.load(checkpoint)
```
|
Dizoid/Lll
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 100 | 3.6478 |
| No log | 2.0 | 200 | 3.4720 |
| No log | 3.0 | 300 | 3.3860 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Dkwkk/W
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Dmitry12/sber
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: apache-2.0
widget:
- text: "Scene: Desert\n\nWalter: My name is Walter Hartwell White. I live at 308 Negra Arroyo Lane Albuquerque, New Mexico, 87104. To all law enforcement entities, this is not an admission of guilt. I am speaking to my family now. Skyler you are the love of my life. I hope you know that. Walter Junior you're my big man. There are going to be some things. Things that you'll come to learn about me in the next few days. I just want you to know that no matter how it may look, I only had you in my heart. Goodbye.\n\nScene: White Residence\n(Three weeks earlier)\n\nSkyler: Happy Birthday.\n\nWalter: Look at that.\n\nSkyler: That is veggie bacon. Believe it or not. Zero cholesterol. You won't even taste the difference. What time do you think you'll be home?\n\nWalter: Same time.\n\nSkyler: I don't want him dicking you around tonight. You get paid till 5, you work till 5, no later.\n\nWalter: Hey.\n\nWalter Jr: Happy birthday.\n\nWalter: Well, thank you.\n\nSkyler: You're late again.\n\nWalter Jr: There was no hot water again.\n\nSkyler: I have an easy fix for that. You wake up early, and then you get to be the first person in the shower.\n\nWalter Jr: I have an idea. How about buy a new hot water heater? How's that idea? For the millionth and billionth time.\n\nSkyler: Did you take your Echinacea?\n\nWalter: Yeah. I think it's getting better.\n\nWalter Jr: What the hell is this?\n\nWalter: It's veggie bacon. We're watching our cholesterol, I guess.\n\nWalter Jr: Not me. I want real bacon. Not this fake crap.\n\nSkyler: Too bad. Eat it.\n\nWalter Jr: This smells like Band-aids.\n\nSkyler: Eat it.\n\nWalter Jr: So, how's it feel to be old?\n\nWalter: How does it feel to be a smart ass?\n\nWalter Jr: Good.\n\nWalter: Eat your veggie bacon.\n\nScene: High School Parking Lot\n\nWalter: You all set?\n\nWalter Jr: Yeah, I'm fine.\n\nWalter: All right, see you at home.\n\nWalter Jr: Okay, see you.\n\nScene: Walt’s Classroom\n\nWalter: Chemistry. It is the study of what? Anyone? Ben.\n\nBen: Chemicals.\n\nWalter: Chemicals! No! Chemistry is well, technically, chemistry is the study of matter. But I prefer to see it as the study of change. Now just just think about this. Electrons. They change their energy levels. Molecules. Molecules change their bonds. Elements. They combine and change into compounds. Well, that's all of life. Right? I mean, it's just It's the constant. It's the cycle. It's solution, dissolution, just over and over and over. It is growth, then decay, then transformation. It is fascinating, really. Chad, is there something wrong with your table? Okay. Ionic bonds Are you done? Ionic bonds. Chapter 6.\n\nScene: Car Wash\n\nWalter: And 2, 3 makes 10, and 10 makes 20. Here's your receipt, and hand this claiming disc to your car wash professional. Thank you. Come again.\n\nBogdan: He's not coming. He said he quits. I'm gonna run the register.\n\nWalter: Bogdan, no. We talked about this.\n\nBogdan: I'm shorthanded, Walter. What am I to do? Walter? What am I to do?\n\nChad: Hey, Mr. White! Make those tires shine, huh?\n\nChad’s Girlfriend: Oh, my God. You would not believe who's cleaning Chad's car. Mr. White from Chemistry.\n\nScene: White Residence\n\nEveryone: Surprise!\n\nWalter Jr: Happy Birthday, Dad!\n\nSkyler: You're so very late.\n\nCarmen: Really, I'm serious, Skyler. I mean, you're flat as a washboard. You look awesome. 
She's not showing at all, is she?\n\nMarie: She's showing a little.\n\nSkyler: Carmen, this is my sister Marie.\n\nCarmen: Pleased to meet you.\n\nMarie: Hi.\n\nHank: Glock 22. It's my daily carry, okay? I mean, unless you're talking, what, plus, P-plus loads, you can forget the 9-mil, all right? I’ve seen one of those bounce off a windshield one time.\n\nSteve: Yeah, the way you sh**t.\n\nHank: If you're gonna bring a g*n, you gotta bring enough g*n. 40 caliber.\n\nWalter Jr: This is awesome right here.\n\nHank: Nice, isn't it?\n\nWalter Jr: Dad, come check this out.\n\nWalter: Yeah, I see it.\n\nWalter Jr: Come on, take it.\n\nHank: Check it out, Walt.\n\nWalter: No, no, it's just heavy.\n\nHank: That's why they hire men. Jesus, it's not gonna bite you, all right? Looks like Keith Richards with a glass of warm milk, doesn't he? Hey, Walt. Everybody listen up, listen up, listen up! I'm gonna give a toast. A little toast to my brother-in-law. Come here. Walt, you got a brain the size of Wisconsin, but we're not gonna hold that against you. Because your heart's in the right place, man. Your heart's in the right place. We love you, man. We love you. Everybody! To Walt! Nostrovia!\n\nEveryone: Nostrovia!\n\nHank: Oh shit, turn on Channel 3.\n\nHank(on the news): At which point we apprehended three individuals and placed them into custody. I'm proud to say the outstanding professionalism of my fellow agents at the Albuquerque District Office resulted in a substantial amount of methamphetamine being taken off the streets.\n\nReporter(on the news): Were any shots fired?\n\nHank(on the news): No, ma'am. Our agents took the suspects by surprise.\n\nSteve: Damn, the TV does add ten pounds.\n\nMarie: Ten pounds?\n\nHank: Hey, sit and spin. Both of you.\n\nSkyler: Hank.\n\nHank: What? Sorry. You didn't see that.\n\nSkyler: So charming.\n\nHank(on the news): This is clearly an ongoing operation, one which was well organized.\n\nWalter: Hank, how much money is that?\n\nHank: It's about 700 grand. That's a pretty good haul, huh?\n\nHank(on the news): As I say, it's a good day for the citizens of Albuquerque when we can put this big a dent in the local drug trade.\n\nWalter: Wow. But that's unusual, isn't it, that kind of cash?\n\nHank: Well, it's not the most we ever took. It's easy money until we catch you. Walt, just say the word and I'll take you on a ride-along. You can watch us knock down a meth lab. Get a little excitement in your life.\n\nWalter: Well, someday.\n\nScene: Walt and Skyler’s Bedroom\n\nWalter: Which one's this?\n\nSkyler: That faux-Lalique vase I picked up at the Super-Swap.\n\nWalter: How's it doing?\n\nSkyler: I met my reserve, and there's still two minutes.\n\nWalter: What's up?\n\nSkyler: You tell me, birthday boy. Oh, hey, so what's up for Saturday?\n\nWalter: Car wash. Bogdan says he needs me.\n\nSkyler: Until what time? Noon? 1-ish?\n\nWalter: Probably 2, more like it.\n\nSkyler: And then what after that?\n\nWalter: Actually I was thinking of driving up to Los Alamos. The visitor center has an exhibit on that’s really supposed to be...\n\nSkyler: You're not gonna paint?\n\nWalter: I'll paint. It's just that this part of this exhibition on the Mars Rover photographs are the detail really is just supposed to be amazing.\n\nSkyler: It's just that I really need you to paint at some point. I mean, the sooner that back bedroom gets finished. And I'd do it myself, except you said you don't want me standing on the stepladder.\n\nWalter: I'll paint. 
I will paint.\n\nSkyler: What is going on down there?\n\nWalter: No, it's just...\n\nSkyler: Is he asleep?\n\nWalter: No, It's nothing. You know, just you know, we gotta be careful about the baby.\n\nSkyler: Don't worry about the baby. This is just for you. We are just doing you tonight. So just close your eyes. Relax, and let it. Close your eyes.\n\nWalter: Okay.\n\nSkyler: There you go. That's it. That's it. There you go. Keep it going. Keep it going. Keep it going. Keep Yes! 56!\n\nScene: Ambulance\n\nWalter: This is so embarrassing. I am fine. Honestly. It's just some bug going around. First my wife had it, then my son, and now me. It's just like a chest cold. Could be some low blood sugar as well. I didn't have the greatest breakfast this morning, honestly. Hey, listen, can you do me a favor? Can you just drop me off at a corner somewhere?\n\nEMT: No. Sorry.\n\nWalter: It's just that I don't have the greatest insurance.\n\nEMT: Take a couple of deep breaths for me. Is there anybody you want us to contact for you?\n\nWalter: God, no.\n\nEMT: Lean forward for me, would you? Mr. White, are you a smoker?\n\nWalter: No. Never. Why do you ask?\n\nScene: Doctor’s Office\n\nDoctor: Mr. White. Mr. White?\n\nWalter: Yes.\n\nDoctor: You understood what I've just said to you?\n\nWalter: Yes. Lung cancer. Inoperable.\n\nDoctor: I'm sorry I just need to make sure you fully understand.\n\nWalter: Best-case scenario, with chemo, I'll live maybe another couple years. It's just you've got mustard on your...right there. Mustard, there. Right there.\n\nScene: White Residence\n\nSkyler: So my records show that I paid it, and I certainly don't feel that we owe any late...All right. Well, I'll check with the bank and maybe the post office, if they lost it or something. Yeah, let me look into that. Okay. Thank you. Did you use the MasterCard last month?\n\nWalter: We needed printer paper.\n\nSkyler: Walt, the MasterCard's the one we don't use.\n\nWalter: Okay.\n\nSkyler: So how was your day?\n\nWalter: Oh, I don't know. I don't know. It was, um it was fine.\n\nScene: Car Wash\n\nBogdan: Come on. I'm shorthanded. I need you to do some wipe-downs. Come on.\n\nWalter: What?\n\nBogdan: I said I need you outside to do some wipe-downs. Are you here to work or to be staring at the skies? Come on, let's go. Come on, man.\n\nWalter: f*ck you, Bogdan.\n\nBogdan: What?\n\nWalter: I said f*ck you! And your eyebrows! Wipe down this!\n\nScene: White Residence-backyard\n\nWalter: Uh, Hank. Hank, it's Walt. Hey. Oh, listen I didn't wake you, did I? Oh, good, good. No, no, nothing's wrong. I just, uh I've been, uh, thinking about that offer that ride-along.\n\nScene: Hank’s Car\n\nHank: It's the last house on the right. See it? Not the two-story one. The one next to it. The kind of I don't know, what do you call that? Green?\n\nSteve: Sage.\n\nHank: Sage. What, do you work at the f*cking Pottery Barn? Jesus.\n\nSteve: Sage. That's the word for it. My fault the only word your dumb ass knows is green?\n\nHank: Cheese dick. I know that one. How 'bout that? Anyway, it's the sage one. See it?\n\nWalter: So what tells you it's a meth lab?\nHank: Just our snitch. Says some dude goes by Cap'n Cook lives up to his name in there. Says he always adds a dash of chili powder. Ah, you exuberant Mexicans.\n\nSteve: Uh-uh. Cap’n Cook, that's a white boy's name. Dopey as hell, too.\n\nHank: Yeah? Tell you what. I got 20 bucks that says he's a beaner.\n\nSteve: All right. You're on.\n\nHank: All right, come on, come on. All right. 
School bus is clear. Got the green light.\n\nAgent: Copy that.\n\nHank: Watch this. This makes 'em shit.\n\nAgent: Go, go, go.\n\nHank: Meth labs are nasty on a good day. You mix that shit wrong, you got mustard gas.\n\nWalter: Phosphine gas. I think.\n\nHank: Yeah, exactly. One whiff will k*ll you. That's why the respirators.\n\nAgent: House is clear. One suspect in custody.\n\nHank: Copy that. The suspect, might he be of the Latino persuasion?\n\nAgent: Driver's license says Emilio Koyama.\n\nSteve: Asian! Pay up, sucker.\n\nHank: Hey hey hey! First name Emilio. That's at least half a beaner. Tell you what, I'll let you off for a 10. Cheer up, Gomey. You people still got J. Lo.\n\nWalter: Hank, do you think I might get to go inside? See the actual lab?\n\nHank: Yeah. Yeah, I tell you what, we're gonna go peek our heads in, check it out first. Stay here a minute.\n\nJesse: God.\n\nWalter: Oh, my God. Pinkman?\n\nScene: Jesse’s House\n\nWalter: It's me. I'm alone.\n\nJesse: How'd you find me?\n\nWalter: You're still in our filing system. So your aunt owns this place, right?\n\nJesse: I own it.\n\nWalter: No one's looking for you.\n\nJesse: Why are you here?\n\nWalter: I was curious. Honestly, I never expected you to amount to much, but methamphetamine? I didn't picture that. There's a lot of money in it, huh?\n\nJesse: I don't know what you're talking about.\n\nWalter: No?\n\nJesse: Not a clue.\n\nWalter: Cap'n Cook? That's not you? Like I said, no one is looking for you.\n\nJesse: Look, I don't know what you think you're doing here, Mr. White. I mean, if you're planning on giving me some bowl winder about getting right with Jesus by turning myself in...\n\nWalter: Not really.\n\nJesse: High school was a long time ago. You ain't Welcome Back Kotter, so step off. No speeches.\n\nWalter: Short speech. You lost your partner today. What's his name? Emilio? Emilio is going to prison. The DEA took all your money, your lab. You got nothing. Square 1. But you know the business. And I know the chemistry. I'm thinking maybe you and I could partner up.\n\nJesse: You want to cook crystal meth? You? You and, uh and me?\n\nWalter: That's right. Either that or I turn you in.\n\nScene: White Residence\n\nMarie: What the hell is this?\n\nSkyler: Damned if I know. I described it as mosaic folk art.\n\nMarie: Somebody bought it?\n\nSkyler: Yeah, some guy in Minneapolis. 14 dollars plus shipping.\n\nMarie: Yes! At this rate, in 50 or 60 years, you'll be rich. So how goes the novel?\n\nSkyler: It's not a novel, actually, which I have...\n\nMarie: You're not writing a novel? You told me you were.\n\nSkyler: No. Short stories. I said that if eventually I have enough good ones that maybe I'll try and publish another collection.\n\nMarie: Those really didn't sell. I just thought a novel would be easier to sell.\n\nSkyler: Yeah, well, maybe so.\n\nMarie: Ever want me to read anything, I could critique it for you.\n\nSkyler: No. I mean, I'm not at that stage where I...no.\n\nMarie: Open offer. So what's up with Walt lately?\n\nSkyler: What do you mean? He's fine.\n\nMarie: He just seems, I don't know, quieter than usual.\n\nSkyler: Turning 50 is a big deal. I mean, I'm sure as hell not looking forward to 40. You're gonna be a complete basket case.\n\nMarie: So it's a mid-life crisis.\n\nSkyler: No, he's just quiet.\n\nMarie: How's the sex?\n\nSkyler: Marie, Jesus.\n\nMarie: Guess that answers that.\n\n\nScene: Jesse’s House\n\nWalter: You just gonna sit there? This. Look at this. 
Kjeldahl-style recovery flask, Very rare. You got your usual paraphernalia: Griffin beakers, your Erlenmeyer flask. But the piece de resistance: a round bottom boiling flask.\n\nJesse: Well, I cook in one of those. The big one.\n\nWalter: One of these? No, this is a volumetric flask. You wouldn't cook in one of these.\n\nJesse: Yeah, I do.\n\nWalter: No, you don't. A volumetric flask is for general mixing and titration. You wouldn't apply heat to a volumetric flask. That's what a boiling flask is for. Did you learn nothing from my chemistry class?\n\nJesse: No. You flunked me. Remember?\n\nWalter: No wonder.\n\nJesse: Prick. Now let me tell you something else. This ain't chemistry, this is art. Cooking is art. And the shit I cook is the b*mb, so don't be telling me.\n\nWalter: The shit you cook is shit. I saw your setup. Ridiculous. You and I will not make garbage. We will produce a chemically pure and s*ab product that performs as advertised. No adulterants. No baby formula. No chili powder.\n\nJesse: No, no, chili P is my signature.\n\nWalter: Not anymore.\n\nJesse: Yeah, well, we'll see about that. What the hell is this?\n\nWalter: Lab safety equipment. We're also gonna have an emergency eye wash station. These chemicals and their fumes are toxic, in case you didn't know that.\n\nJesse: Well, you can dress up like a f*g if you want. Not me. Listen, this stuff doesn't stay more than a day.\n\nWalter: What? I thought we were gonna cook here.\n\nJesse: No, we're not gonna cook here. Okay, this is my house. I don't shit where I eat.\n\nWalter: Well, then, where are we gonna work?\n\nJesse: You tell me. This is your deal. You want to smoke it up, smoke it up at your house. Nah, I didn't think so.\n\nWalter: Oh, well. Well what if we rented one of those self-storage places, you know, those little orange garages, worked out of there?\n\nJesse: No. They're on to that. They got dogs that sniff around. RV. That's what you want.\n\nWalter: What, like a Winnebago?\n\nJesse: Yeah. I know a dude who wants to sell his. He just goes camping with it. But a mobile meth lab? That'd be the b*mb. I mean, drive way out in the boonies. Be all evasive.\n\nScene: Bank Parking Lot\n\nJesse: Dude, this isn't even 7 grand. My guy wants 85.\n\nWalter: This is all the money I have in the world. You're a drug dealer. Negotiate.\n\nJesse: You are not how I remember you from class, I mean, like, not at all.\n\nWalter: I gotta go.\n\nJesse: Wait, wait. Hold on. Tell me why you're doing this. Seriously.\n\nWalter: Why do you do it?\n\nJesse: Money, mainly.\n\nWalter: There you go.\n\nJesse: Nah, come on! Man, some straight like you, giant stick up his ass, all of a sudden at age, what, 60, he's just gonna break bad?\n\nWalter: I'm 50.\n\nJesse: It's weird is all, okay? It doesn't compute. Listen if you've gone crazy or something I mean, if you've if you've gone crazy or depressed, I'm just saying that's something I need to know about. Okay? I mean, that affects me.\n\nWalter: I am awake.\n\nJesse: What?\n\nWalter: Buy the RV. We start tomorrow.\n\nScene: The Mall\n\nSkyler: How's it coming in there?\n\nWalter Jr: Fine.\n\nSkyler: Do you want me or your dad?\n\nWalter Jr: Dad.\n\nSkyler: So how are those feeling in the waist? Are they too tight? 'Cause you don't want to get 'em if they're too tight.\n\nWalter Jr: They're pre-shrunk.\n\nSkyler: Are you sure you don't want to get a different kind? Like, you know, the skinny jeans? Those are really supposed to be in style now. 
The skaters wear them.\n\nWalter Jr: Do I look like a skater?\n\nSkyler: All right.\n\nTeenager: Mom, look at my big-boy pants. Mommy, could you zip up my big-boy pants?\n\nWalter: Don't.\n\nSkyler: What?\n\nWalter: Don't.\n\nSkyler: Walt.\n\nWalter Jr: Where...\n\nSkyler: I have no idea. You know what? Don't even look at them. They're obviously very stupid. Yep. I think that, um I think those jeans look really good on you. You should get 'em if you like 'em, okay? Why don't you just hang out here for a second? I'll be right back.\n\nWalter Jr: Fine.\n\nTeenager: Mommy, I think I pinched a loaf in my brand-new big-boy pants. What are you doing?\n\nWalter: What's wrong, chief? Having a little trouble walking?\n\nTeenager: Get off me. Get off me! I'll mess you up, man.\n\nWalter: Well, you'll have one sh*t. You better make it good. What, are you waiting for your girlfriends? You better go. Take it. Take your sh*t. Take it! Come on. Come on.\n\nTeenager: Come on, let's get outta here. Let's go. Psycho.\n\nScene: Desert\n\nJesse: Yeah, nothing but cows! Got some big cow house way out that way, like 2 miles, but I don't see nobody.\n\nWalter: Cow house?\n\nJesse: Yeah, where they live. The cows. Whatever, man. Yeah, let's cook here.\n\nWalter: Cow house. God help me.\n\nJesse: What are you doing?\n\nWalter: These are my good clothes. I can't go home smelling like a meth lab.\n\nJesse: Yeah, you can. I do. Those? Those, uh You're keeping those on, right?\n\nWalter: Come on. Daylight's burning.\n\nJesse: Oh, my God. Oh, this is, uh this is a good look for you. And you're maybe only the world's second biggest h*m*.\n\nWalter: Would you shut up and help me?\n\nJesse: Oh, yeah. Oh, yeah, work it. Baby, work it.\n\nWalter: Turn that off!\n\nJesse: This is glass grade. I mean, you got...Jesus, you got crystals in here two inches, three inches long. This is pure glass. You're a g*dd*mn artist. This is art, Mr. White.\n\nWalter: Actually, it's just basic chemistry, but thank you, Jesse. I'm glad it's acceptable.\n\nJesse: Acceptable? You're the g*dd*mn Iron Chef. Every jibhead from here to Timbuktu is going to want a taste. Now I gotta try this.\n\nWalter: No. No. No, we only sell it. We don't use it.\n\nJesse: Okay, since when? Listen, you've been watching way too much Miami Vice. That ain't happening.\n\nWalter: So what now? How do we proceed?\n\nJesse: We cook more tomorrow. Meantime I know just the guy to talk to.\n\nScene: Krazy-8’s House\n\nJesse: Kraze, how you doing, my man? You got a new dog. Right on, man. What's his name? Yeah, I had a dog like that once, except maybe, like, twice as big. Super purebred. Now, me personally, I would train him to go straight for the nuts...\n\nKrazy-8: Just shut your mouth and show me your money.\n\nJesse: I ain't buying, ese. I'm selling. Tell me that ain't the finest scante you ever laid eyes on. Go ahead, try it. Hey, poochie. How you doing? Jesus Christ. See? What'd I say?\n\nKrazy-8: It's all right.\n\nJesse: It's all right? It's all right?\n\nKrazy-8: Yeah, it's all right. So, what? You back in business?\n\nJesse: Hell, yeah, I'm back. With a vengeance. Vato loco gotta make a living. You know, with your cousin gone away and all. And listen, homes, about that. It really broke me up about Emilio. That dude is like my brother. He okay? You talk to him?\n\nKrazy-8: Yeah, yeah, I talked to him. He said when the Feds came, you were out sticking it in some neighbor lady.\n\nJesse: Hey, you know, I got lucky twice.\n\nKrazy-8: I don't know, man. 
Emilio, he thinks maybe you dimed on him.\n\nJesse: That is bullshit. That is bullshit, Krazy-8! I should kick his punk ass for even thinking that. You know what? Next time you talk to Emilio, you tell him for me, all right?\n\nKrazy-8: Why don't you tell him yourself? Made bail this morning.\n\nEmilio: Go ahead, pendejo. Kick my ass.\n\nJesse: Hey, listen...\n\nKrazy-8: Where did you get this? Because I know your little punk ass didn't cook it.\n\nScene: Desert\n\nKrazy-8: Hey, man. You some kind of nudist? That's some stone-fine tick tick you been cooking there, ese. How about you come work for me?\n\nWalter: I'd be willing to sell it to you if the price is right.\n\nKrazy-8: You out here all by yourself, huh?\n\nEmilio: I know you. He was there when I got busted. He's with the DEA!\n\nWalter: No.\n\nEmilio: You ratasnitch f*ck!\n\nJesse: Run, Mr. White! Run!\n\nEmilio: I say we cap 'em both.\n\nKrazy-8: Hey, you really cook up that batch?\n\nWalter: Yeah.\n\nKrazy-8: You an artist. It's a damn shame.\n\nWalter: Wait! Wait a minute. Listen to me. I'll teach you my recipe. What do you say? You want to cook like me? You let us both live and I will teach you. Put the cigarette out. Please.\n\nEmilio: Move it, homes. We ain't got all day.\n\nWalter: Okay.\n\nJesse: What happened? What'd you do to them?\n\nWalter: Red phosphorus in the presence of moisture and accelerated by heat yields phosphorus hydride. Phosphine gas. One good whiff and...we gotta, we gotta clean this up.\n\nScene: Walt and Skyler’s Bedroom\n\nSkyler: Where were you? Walt. I don't know what's been going on with you lately, but...\n\nWalter: Nothing. I'm fine.\n\nSkyler: Whatever it is, I'll tell you this. I do not like it when you don't talk to me. The worst thing you can do is shut me out. Walter, is that you?\n\n"
- text: "Jim: Hey.\n\nDwight: Hello. Jim?\n\nJim: What's up, buddy?\n\nDwight: This is not funny. Why is my stuff in here?\n\nJim: Wow, that's weird. Oh, dollar for a stapler, that's pretty good.\n\nDwight: Yeah, well, I'm not paying for my own stuff, okay? I know you did this, because you're friends with the vending machine guy.\n\nJim: Who, Steve?\n\nDwight: Yeah, Steve, whatever his name is.\n\nPam: Sorry. What do I want? What do I want... Oh, it's a pencil cup.\n\nDwight: No, no, no, no, no. That's my pencil cup.\n\nPam: Um, I don't think so, I just bought it.\n\nDwight: Uh, I think so, and you're going to hand it over to me.\n\nPam: I love these.\n\nDwight: Okay, fine. Where's my wallet?\n\nJim: Oh, there it is. J1.\n\nDwight: But I don't have any...\n\nJim: Here, you know what? You can have some nickels.\n\nDwight: [putting quarters in] Five, ten, fifteen, twenty, twenty-five...\nMichael: Hello, everyone.\n\nDwight: Good morning, Michael.\n\nPhyllis: Where are we going this afternoon?\n\nMichael: Ah! Ha ha ha!\nPam: Last week, Michael sent out this mysterious memo.\n\nJim: 'It's time for our first quarter camaraderie event, so pack a swimsuit, a toothbrush, rubber-soled shoes, and a ski mask.'\n\nPam: A ski mask and a swimsuit.\n\nJim: So that he can have us rob a bank, and then escape through the sewers.\n\nPam: And brush our teeth.\nMichael: Yeah?\n\nStanley: Michael.\n\nMichael: Stanley! Bo banley.\n\nStanley: I need to know...\n\nMichael: Banana fana fo fanley.\n\nStanley: What we're doing.\n\nMichael: Be my mo manley.\n\nStanley: You said bring a toothbrush.\n\nMichael: Stanley.\n\nStanley: Is this an overnight?\n\nMichael: Maybe. The suspense is just so exciting, isn't it?\n\nStanley: Should my wife tell her boss she's not coming in tomorrow?\n\nMichael: Maybe, I don't know.\n\nStanley: Not maybe. Yes or no.\n\nMichael: Well, no. But... okay, don't spoil it for everybody, all right? But we are going on a booze cruise on Lake Wallenpaupack.\n\nStanley: In January?\n\nMichael: It's cheaper.\nMichael: This is not just another party. This is a leadership training exercise. Right? I'm going to combine elements of fun and motivation and education into a single mind-blowing experience.\nMichael: It is now time to unveil the destination of this year's retreat. We are going on a harbor cruise of Lake Wallenpaupack. It's a booze cruise!\n\nMeredith: All right!\n\nRyan: I have a test for business school tomorrow night. Is it okay if I skip the cruise and study for that?\n\nMichael: No. This is mandatory. But don't worry, you know what? You're gonna learn plenty. This is gonna turn your life around, Ryan.\n\nRyan: I'm already in business school.\n\nMichael: Well, this...\n\nKelly: Wait, Michael?\n\nMichael: Yeah?\n\nKelly: Why did you tell us to bring a bathing suit?\n\nMichael: To throw you off the scent.\n\nKelly: Yeah, but I bought a bathing suit.\n\nMichael: Well, just keep the tags on and you can return it.\n\nKelly: I took the tags off already.\n\nMichael: Well, that's not my fault, okay? Just.. we're not going to pay for a bathing suit. Okay, I know what you're all thinking, 'Who is this smart little cookie?' Her name is Brenda... something, and she is from corporate. And she is here, like you, to learn from what I have to say.\nMichael: I am a great motivational speaker. I attended a Tony Robbins event by the airport last year, and... it wasn't the actual course. You have to pay for the actual course. But it talked about the actual course. 
And I've incorporated a lot of his ideas into my own course.\nMichael: Leader... ship. The word 'ship' is hidden inside the word 'leadership,' as its derivation. So if this office is, in fact, a ship, as its leader, I am the captain. But we're all in the same boat. Teamwork!\nOscar: Last year, Michael's theme was 'Bowl over the Competition!' So guess where we went.\nMichael: Now, on this ship that is the office, what is a sales department? Anyone?\n\nDarryl: How about the sales department is the sails?\n\nMichael: Yes, Darryl, the sales department makes sales. Good. Let me just explain. I see the sales department as the furnace.\n\nPhyllis: A furnace?\n\nJim: Yeesh, how old is this ship?\n\nPam: How about the anchor?\n\nPhyllis: What does the furnace do?\n\nMichael: All right, let's not get hung up on the furnace. This just... it's the sales... I see the sales department down there. They're in the engine room, and they are shoveling coal into the furnace, right? I mean, who saw the movie Titanic? They were very important in the movie Titanic. Who saw it? Show of hands!\n\nJim: I'm not really sure what movie you're talking about. Are you sure you got the title right?\n\nMichael: Titanic?\n\nPam: I think you're thinking of The Hunt for Red October.\n\nMichael: No, I'm Leo DiCaprio! Come on!\nJim: Michael stands in the front of the boat and says that he's king of the world within the first hour, or I give you my next paycheck.\nPhyllis: Michael, everyone in the engine room drowned.\n\nMichael: No! Thank you, spoiler alert. You saw the movie, those of you who did. They're happy down there in the furnace room. And they're dirty and grimy and sweaty, and they're singing their ethnic songs, and... actually, that might be warehouse.\n\nDarryl: What?\n\nMichael: The... no, no. No, I didn't... okay. Well, okay, in a nutshell, what I'm saying is... leadership. We'll talk more about that on the boat. Ship.\n\nDwight: Aye aye, Captain.\nMichael: [singing] A three-hour tour, a three-hour tour.\nMichael: Pam, you are Mary Ann! We have the Professor and Ginger, welcome aboard. Angela, you are Mrs. Howell. Lovey. [to Kelly] Uh... the native. Sometimes they come from neighboring... [to Stanley] We have one of the Globetrotters, I am the Skipper, and Dwight, you will be Gilligan.\n\nDwight: Cool.\n\nCaptain Jack: Actually, I'm the Skipper. But you can be Gilligan.\n\nMichael: I'd rather die. Hi, I am Michael Scott, I am the captain of this party.\n\nCaptain Jack: I am Captain Jack, I am captain of the ship. I'm also captain of anyone who sets foot on the ship. [to boarding passengers] Hi, welcome aboard.\n\nMichael: Okay.\nMichael: In an office, when you are ranking people, manager is higher than captain. On a boat, who knows? It's nebulose.\nMichael: Hey, look! I'm king of the world!\nCaptain Jack: Okay, all right! Welcome aboard! I am your captain, Captain Jack.\n\nMichael: And I am the regional manager of Dunder-Mifflin, Michael Scott. Welcome, welcome!\n\nCaptain Jack: Okay! So...\n\nMichael: Okay! So...\n\nCaptain Jack: Please. The life preservers.\n\nMichael: Right.\n\nCaptain Jack: They are located underneath the seats, all along the border of the boat.\n\nMichael: But don't worry, you are not going to be needing life preservers tonight.\n\nCaptain Jack: Well, we might, okay? Please let me finish, okay? Thank you. So, the Coast Guard requires that I tell you where the safety exits are. On this ship, it's very easy. Anywhere over the side. [Dwight laughs loudly.] 
Not only am I your ship captain, I am also your party captain! Whoo! We're gonna get it going in just a few minutes here...\n\nMichael: I'm your party captain too! And you are gonna put on your dancing shoes later on! So we are gonna...\n\nCaptain Jack: Okay, Michael, if you don't mind...\n\nMichael: Rock it!\n\nCaptain Jack: Please, okay?\n\nMichael: If the boat's a-rockin', don't come knockin'!\n\nCaptain Jack: Michael.\n\nMichael: Yep.\n\nCaptain Jack: Your company's employees are not the only people on the boat tonight, okay?\n\nMichael: We're all gonna have a good time tonight!\n\nCaptain Jack: Why don't you let me and my crew do our job. You just sit back and have a good time. All right?\n\nMichael: Hm? Okay. Yep.\nKaty: You guys, it's like we're in high school and we're at the cool table. Right?\n\nRoy: Yeah.\n\nKaty: Pam, were you a cheerleader?\n\nRoy: No, she was totally Miss Artsy-Fartsy in high school. She wore the turtleneck and everything!\n\nKaty: That's hilarious.\n\nJim: It's not hilarious, but...\n\nRoy: Where did you go to school?\n\nKaty: Bishop O'Hara.\n\nRoy: Piss slop who cares-a? We played you! You... you really look familiar. Did you... you cheered for them, didn't you?\n\nJim: Um, no.\n\nKaty: Yes, I did! [chanting] A-W-E-S-O-M-E! Awesome! Awesome is what we are! We're the football superstars! A-W-E-S-O-M-E!\n\nRoy: I remember that! We crushed you like 42-10!\nMichael: Having fun?\n\nBrenda: Yeah. Everybody's really nice.\n\nMichael: Good. Well, that is what Scranton is all about. Not like you New Yawkers.\n\nBrenda: When are you going to start the presentation?\n\nMichael: Well, we already sort of started it back at the office and on the dock with the Gilligan thing, so... right now, I was thinking. Yes. Okay, listen up all you Dunder-Mifflinites! I would like to talk to you all about life preservers. Now, one important life preserver in business is IT support.\n\nCaptain Jack: Not now, Mike, we're doing the limbo! That's right, partiers, it's time to limbo, limbo, limbo!\n\nMichael: So, okay.\n\nDwight: Limbo, whoo!\n\nCaptain Jack: All right! I need a volunteer to come up here and hold my stick. Who's it gonna be?\n\nMeredith: Me.\n\nCaptain Jack: Okay...\n\nDwight: Me! Me, me, me.\n\nCaptain Jack: Uh... usually it's a woman.\n\nDwight: I'm stronger.\n\nCaptain Jack: Hey, I got an idea! How would you like to steer the ship, Dwight?\nCaptain Jack: Keep us on a steady course. Keep a sharp eye out. I'm counting on you!\nDwight: I was the youngest pilot in Pan Am history. When I was four, the pilot let me ride in the cockpit and fly the plane with him. And I was four. And I was great. And I would have landed it, but my dad wanted us to go back to our seats.\nCaptain Jack: All right, all right, that was great! Now it's time for the dance contest!\n\nMichael: But before that, I have to do my presentation.\n\nCaptain Jack: Nope! Dance contest!\n\nMichael: All right, we'll have a motivational dance contest! Hit it! Yeah, okay, dancing! It is a primal art form used in ancient times to express yourself with the body and communicate!\nMichael: Sometimes you have to take a break from being the kind of boss that's always trying to teach people things. Sometimes you have to just be the boss of dancing.\nDwight: [singing] What do you do with a drunken sailor? What do you do with a drunken sailor? What do you do with a drunken sailor early in the morning?\n\nAngela: Hey, come inside and talk to me.\n\nDwight: I can't. 
Do you want us to run aground, woman?!\nDarryl and Katy: [chanting] Snorkel sh*t! Snorkel sh*t!\n\nRoy: Whoo! Who's next? Come on, Pam! Come on! Come on!\n\nPam: No, I'm not going to do that.\n\nRoy: Come on!\n\nDarryl: That's what I'm talking about!\n\nPam: Hey, why don't we find like a quieter place to hang out?\n\nRoy: I've just gotta wait for Darryl to do his sh*t. Just a minute. Come on! [chanting] Darryl! Darryl!\nPam: It's getting kind of rowdy down there.\n\nJim: Yeah. [chanting] Darryl! Darryl! Darryl!\n\nPam: Sometimes I just don't get Roy.\n\nJim: Well...\n\nPam: I mean, I don't know. So... what's it like dating a cheerleader?\n\nJim: Oh, um... [A long silence.]\n\nPam: I'm cold.\nCaptain Jack: So, what's this presentation all about?\n\nMichael: Ah! See, this is of general interest. It is about priorities and making decisions, using the boat as an analogy. What is important to you? If the boat is sinking, what do you save?\n\nCaptain Jack: Women and children.\n\nMichael: No, no. Salesmen and profit centers.\n\nCaptain Jack: That's a stupid analogy.\n\nMichael: Okay, well, obviously you don't know anything about leadership.\n\nCaptain Jack: Well, I was the captain of a PC-1 Cyclone Coastal Patrol Boat during Desert Storm.\n\nDwight: Wow. You should be the motivational speaker.\n\nMichael: Okay.\n\nDwight: Yeah. He gives me real responsibility, Michael. Captain Jack delegates. He's let me steer the ship for the last hour.\nKaty: I'd like to be engaged. How did you manage to pull that off?\n\nPam: Uh, I've been engaged for three years, and there's no end in sight. So... you don't wanna ask my advice.\nCaptain Jack: Suppose your office building's on fire. Jim, who would you save?\n\nJim: Um... let's see, uh... The customer. Because the customer is king.\n\nMichael: Not what I was looking for, but a good thought.\n\nCaptain Jack: He's just sucking up!\n\nRoy: When you were in the Navy, did you ever almost die?\n\nCaptain Jack: Oh yeah, oh yeah. And I wasn't thinking about some customer. I was thinking about my first wife. The day I got back on shore, I married her.\nJim: You know what? I would save the receptionist. I just wanted to clear that up.\nRoy: Hello, everybody, could I have your attention for just a second? Could you listen to me for a second? We were up at the front, and we were talking about what's really important, and... Pam, I think enough is enough. I think we should set a date for our wedding. How about June 10th? Come on, let's do it! Come on, Pam!\nMichael: I don't want to take credit for this, but Roy and I were just having a conversation about making commitments and making choices. Right? Did I motivate you?\n\nRoy: No, it was Captain Jack.\n\nMichael: Well... could have been either one of us, because we were pretty much saying the same thing. Congratulations. That is great!\n\nCaptain Jack: We gotta celebrate! Hey, I got an idea, I got an idea. I can marry you right now, as captain of the ship!\n\nMichael: Yes! I can marry you as regional manager of Dunder-Mifflin!\n\nPam: No, no, I want my mom and dad to be there.\n\nMichael: Then I'll give you away!\n\nPam: No, thank you.\nKaty: Do you think that'll ever be us?\n\nJim: No.\n\nKaty: What is wrong with you? Why did you even bring me here tonight?\n\nJim: I don't know. Let's break up.\n\nKaty: Whoa. What?\nCaptain Jack: This is where Captain Jack drives the boat.\n\nMeredith: Wow!\nDwight: Seasick? Captain Jack says you should look at the Moon.\n\nMichael: Captain Jack is a fart face. 
I'm on medication.\n\nBrenda: Really? What?\n\nMichael: Vomicillin. Okay. All right. It's time to be boss. It's time to motivate. Let's blow some minds here. Okay, guys, guys, cool it. Everybody, Dunder-Mifflin Scranton employees, Brenda, I have some very, very urgent news I need to tell everybody right now. Listen up. The ship is sinking! Okay? We're going down, right now. Just wrap your heads around the reality of that. Shh, please! Everybody, it's my turn now, okay? Captain Jack is gone. In five minutes, this ship is going to be at the bottom of the lake! And there aren't enough spaces on the lifeboat! Who are we gonna save? Do we save sales? Do we save customer service? Do we save accounting? This is a business scenario. Right? It's a scary... it's a...\n\nCaptain Jack: Hey! Hey! What the hell is going on here?\n\nMichael: It's a predicament, and it's something that each and every one of us has to think about.\nMichael: I'm in the brig. See? The boat's not as corporate-friendly as advertised. What was the deal with the guy jumping overboard? What was... if he had just waited and heard what I had to say, he would be motivated right now and not all wet.\nMichael: Is somebody there?\n\nJim: What happened to you?\n\nMichael: Captain Jack has a problem with authority.\n\nJim: Oh, right, because you announced that his ship was sinking?\n\nMichael: He just totally lost it. If you ask me, he caused the panic.\n\nJim: What a night.\n\nMichael: Well, it's nice for you. Your friend got engaged.\n\nJim: She was always engaged.\n\nMichael: Roy said the first one didn't count.\n\nJim: That's... great. You know, to tell the truth, I used to have a big thing for Pam, so...\n\nMichael: Really? You're kidding me. You and Pam? Wow. I would have never have put you two together. You really hid it well. God! I usually have a radar for stuff like that. You know, I made out with Jan...\n\nJim: Yeah, I know.\n\nMichael: Yeah? Yep. Well, Pam is cute.\n\nJim: Yeah. She's really funny, and she's warm. And she's just... well, anyway.\n\nMichael: Well, if you like her so much, don't give up.\n\nJim: She's engaged.\n\nMichael: BFD. Engaged ain't married.\n\nJim: Huh.\n\nMichael: Never, ever, ever give up.\nDwight: Don't worry, Michael. I'm taking us to shore.\n\nMichael: It's a fake wheel, dummy.\n"
- text: "PROLOGUE\n\nEXT. HOUSE - NIGHT\n\nLawrence, Kansas\n\n22 years ago\n\nThese scenes are definitively dated to 2 Nov 2005.\n\nCrickets chirp. A large deciduous tree with no leaves stands outside one of several suburban homes.\n\nINT. NURSERY - NIGHT\n\nA Woman, Mary Winchester, wearing a white nightgown, carries a SMALL CHILD, her son Dean, into a dark room.\n\nMary: Come on, let's say good night to your brother.\n\nMary turns on the lights: it's the nursery of a BABY, Sam, who is lying in his crib and looking over at Mary and Dean. Mary sets Dean down. Dean leans over the side of the crib and kisses Sam on the forehead.\n\nDean: 'Night, Sam.\n\nMary leans over Sam as well.\n\nMary: Good night, love.\n\nMary brushes Sam's hair back and kisses his forehead.\n\nMan: Hey, Dean.\n\nDean turns. The Man in the doorway wearing a USMC T-shirt is John. Dean rushes over to him.\n\nDean: Daddy!\n\nJohn: Hey, buddy.\n\nJohn scoops Dean up.\n\nJohn: So what do you think? You think Sammy's ready to toss around a football yet?\n\nDean shakes his head, laughing.\n\nDean: No, Daddy.\n\nJohn laughs.\n\nJohn: No.\n\nMary passes John and Dean on the way out of the room.\n\nMary: You got him?\n\nJohn: I got him.\n\nJohn hugs Dean closer.\n\nJohn: Sweet dreams, Sam.\n\nJohn carries Dean out of the room, flipping off the lights. Sam watches them go, gurgling, then tries to reach his toes.\n\nThe baseball-themed mobile above Sam's crib begins to spin on its own while Sam watches. The transportation-themed clock on the wall ticks, ticks, stops. The moon-shaped nightlight flickers.\n\nINT. MASTER BEDROOM - NIGHT\n\nLights flicker on a baby monitor sitting on a nightstand next to a photo of Mary and John. Strange noises come through the monitor. Mary, asleep in bed, stirs. She turns on the light on the nightstand.\n\nMary: John?\n\nMary turns: she's alone. She gets up.\n\nINT. HALLWAY - NIGHT\n\nMary walks down the hall to Sam's nursery. John, seen only in silhouette, stands over Sam's crib.\n\nMary: John? Is he hungry?\n\nJohn turns his head.\n\nMan: Shhh.\n\nMary: All right.\n\nMary heads back down the hallway. The light by the stairs is flickering. Mary frowns and goes to tap at it till the light steadies.\n\nMary: Hm.\n\nMore flickering light is coming from downstairs: Mary investigates. A w*r movie is on TV and John has fallen asleep watching it. If John is here, Mary realizes, then the Man upstairs cannot be John and must be a danger. She runs back upstairs.\n\nMary: Sammy! Sammy!\n\nMary enters Sam's nursery and stops short.\n\nINT. LIVING ROOM - NIGHT\n\nUpstairs, Mary screams. John wakes up.\n\nJohn: Mary?\n\nJohn scrambles out of the chair.\n\nJohn: Mary!\n\nJohn runs upstairs.\n\nINT. NURSERY - NIGHT\n\nJohn bursts through the closed door of the nursery.\n\nJohn: Mary.\n\nThe room is quiet and appears empty except for Sam awake in his crib and John. John glances around and pushes down the side of Sam's crib.\n\nJohn: Hey, Sammy. You okay?\n\nSomething dark drips next to Sam. John touches it. Two more drops land on the back of John's hand. It looks like blood. John looks up. Mary is sprawled across the ceiling, the stomach of her nightgown red with blood, staring at John and struggling to breathe. John collapses onto the floor, staring at Mary.\n\nJohn: No! Mary!\n\nMary bursts into flame. The fire spreads over the ceiling. John stares, frozen. Sam wails. John, reminded he's not alone, gets up and scoops Sam out of his crib and rushes out of the room.\n\nINT. 
HALLWAY - NIGHT\n\nDean is awake and coming to investigate.\n\nDean: Daddy!\n\nJohn shoves Sam at Dean.\n\nJohn: Take your brother outside as fast as you can and don't look back! Now, Dean, go!\n\nDean turns and runs. John turns back to the nursery.\n\nJohn: Mary!\n\nThe entire room is on fire. Mary herself can barely be seen.\n\nJohn: No!\n\nEXT. HOUSE - NIGHT\n\nDean runs outside, holding Sam.\n\nDean: It's okay, Sammy.\n\nDean turns to look up at Sam's window, which is lit with gold.\n\nJohn runs outside, scoops up Dean and Sam, and carries them both away.\n\nJohn: I gotcha.\n\nFire explodes out of Sam's nursery window.\n\nEXT. HOUSE - NIGHT, LATER\n\nThe Lawrence fire department has arrived. A FIREFIGHTER gets out of a fire truck and takes over at the gauges for another firefighter.\n\nFirefighter: I got it. You go hold the line up.\n\nThe second firefighter goes to the back of the truck and takes a hose from a third firefighter. That firefighter takes the hose towards the house where a fourth firefighter is spraying through Sam's nursery window. A paramedic opens the back of an ambulance. A Police Officer waves some neighbors back.\n\nOfficer: Stay back. You have to stay back.\n\nAcross the street from the house, John and Dean sit on the hood of John's Impala, John holding Sam. John looks up at the remnants of the fire.\n\nACT ONE\n\nStanford University\n\nPresent Day\n\nIt is 31 Oct 2005.\n\n'Gasoline' by Ginger begins to play.\n\nAPARTMENT\n\nINT. BEDROOM - DAY\n\nYoung Woman: Sam!\n\nThe Young Woman, Jess, comes around a corner; she is wearing a sexy-nurse costume and adjusting her hat. The photo of Mary and John from earlier is on the dresser.\n\nJess: Get a move on, would you?\n\nMusic: I've been sh*t from a cannon\n\nJess: We were supposed to be there like fifteen minutes ago.\n\nJess walks off.\n\nJess: Sam!\n\nMusic: I'm a human cannonball\n\nJess: You coming or what?\n\nStarring\n\nJARED PADALECKI\n\nA Young Man pokes his head around the corner; this is Sam. He's wearing jeans and three shirts, not a costume.\n\nSam: Do I have to?\n\nJess: Yes!\n\nMusic: I'm gonna fly high\n\nJess: It'll be fun.\n\nSam comes into the room.\n\nJess: And where's your costume?\n\nMusic: I'm gonna fall fall fall\n\nSam laughs and ducks his head.\n\nJENSEN ACKLES\n\nSam: You know how I feel about Halloween.\n\nPARTY\n\nINT. BAR - NIGHT\n\nClassic's 'What Cha Gonna Do' begins to play.\n\nMusic: Show me whatcha gonna do\n\nYeah whatcha gonna do\n\nAre you trying to get in\n\nYeah whatcha gonna do\n\nThe bar is decorated for Halloween (including a gargoyle with cobwebs and a baseball hat that says 'GET NAKED'). Someone pours someone else a sh*t. Everyone is in costume.\n\nGuest Starring\n\nSarah SHAHI\n\nMusic: Are you gonna ride\n\nJess raises a glass as a Young Man in a ghoul costume, Luis, comes up to the table where Sam and Jess are. Sam is still not in costume.\n\nJess: So here's to Sam-\n\nMusic: Baby\n\nADRIANNE PALICKI\n\nJess: -and his awesome LSAT victory.\n\nSam: All right, all right, it's not that big a deal.\n\nJess, Sam, and Luis clink glasses.\n\nJess: Yeah, he acts all humble.\n\nSamANTHA SMITH\n\nJess: But he scored a one seventy-four.\n\nLuis drinks his sh*t and so does Sam.\n\nLuis: Is that good?\n\nJEFFREY Dean MORGAN\n\nJess: Scary good.\n\nJess drinks.\n\nLuis: So there you go. You are a first-round draft pick. You can go to any law school you want!\n\nLuis sits next to Sam.\n\nR.D. CALL\n\nSam: Actually, I got an interview here. Monday. 
If it goes okay I think I got a sh*t at a full ride next year.\n\nJess: Hey. It's gonna go great.\n\nSam: It better.\n\nROSS KOHN\n\nLuis: How does it feel to be the golden boy of your family?\n\nSam: Ah, they don't know.\n\nLuis: Oh, no, I would be gloating! Why not?\n\nSam: Because we're not exactly the Bradys.\n\nLuis: And I'm not exactly the Huxtables. More shots?\n\nJess and Sam speak in chorus.\n\nJess and Sam: No. No.\n\nSam: No.\n\nLuis goes up to the bar anyway.\n\nJess: No, seriously. I'm proud of you. And you're gonna knock 'em dead on Monday-\n\nand\n\nSTEVE RAILSBACK\n\nJess: -and you're gonna get that full ride. I know it.\n\nSam: What would I do without you?\n\nJess: Crash and burn.\n\nJess smiles and pulls Sam in for a kiss.\n\nMusic: Are you trying to get in\n\nYeah whatcha gonna do\n\nAPARTMENT\n\nINT. BEDROOM - NIGHT\n\nMusic: Are you gonna ride baby\n\nSupervising Producer\n\nPETER JohnSON\n\nSam and Jess lie in bed, asleep back to back. Jess shifts position.\n\nExecutive Producer\n\nMcG\n\nA sound outside the room, like a window opening. Sam opens his eyes.\n\nINT. APARTMENT - NIGHT\n\nSam leaves the bedroom and looks around the apartment.\n\nExecutive Producer\n\nDAVID NUTTER\n\nA window is open; earlier it must have been closed. Footsteps. A Man walks past the strings of beads at the far end of the hall. Sam moves to another part of the apartment and waits. The Man enters the room. Sam lunges forward and grabs the Man at the shoulder. The Man knocks Sam's arm away and aims a strike at Sam, who ducks. The Man grabs Sam's arm, swings him around, and shoves him back. Sam kicks and is blocked, then pushed back into another room. If the Man hadn't seen Sam's face before, he sees it now; Sam gets his first glimpse of the Man. The Man elbows Sam in the face; Sam kicks at his head. The Man ducks and swings and Sam blocks. The Man knocks Sam down and pins him to the floor, one hand at Sam's neck and the other holding Sam's wrist.\n\nMan: Whoa, easy, tiger.\n\nSam breathes hard.\n\nSam: Dean?\n\nDean laughs.\n\nSam: You scared the crap out of me!\n\nDean: That's 'cause you're out of practice.\n\nSam grabs Dean's hand and yanks, slamming his heel into Dean's back and Dean to the floor.\n\nDean: Or not.\n\nSam taps Dean twice where Sam is holding him.\n\nDean: Get off of me.\n\nSam rolls to his feet and pulls Dean up.\n\nSam: What the hell are you doing here?\n\nDean: Well, I was looking for a beer.\n\nProduced by\n\nCYRUS YAVNEH\n\nDean puts his hands on Sam's shoulders, shakes once, and lets go.\n\nSam: What the hell are you doing here?\n\nDean: Okay. All right. We gotta talk.\n\nCreated by\n\nERIC KRIPKE\n\nSam: Uh, the phone?\n\nDean: If I'd'a called, would you have picked up?\n\nJess turns the light on. She is wearing very short shorts and a cropped Smurfs shirt.\n\nJess: Sam?\n\nSam and Dean turn their heads in unison.\n\nSam: Jess. Hey. Dean, this is my girlfriend, Jessica.\n\nDean looks at her appreciatively.\n\nJess: Wait, your brother Dean?\n\nJess smiles. Sam nods. Dean grins at her and moves closer.\n\nDean: Oh, I love the Smurfs. You know, I gotta tell you. You are completely out of my brother's league.\n\nJess: Just let me put something on.\n\nJess turns to go. Dean's voice stops her.\n\nWritten by\n\nERIC KRIPKE\n\nDean: No, no, no, I wouldn't dream of it. Seriously.\n\nDean goes back over to Sam without taking his eyes off Jess. 
Sam watches him, his expression stony.\n\nDean: Anyway, I gotta borrow your boyfriend here, talk about some private family business.\n\nDirected by\n\nDAVID NUTTER\n\nDean: But, uh, nice meeting you.\n\nSam: No.\n\nSam goes over to Jess and puts an arm around her.\n\nSam: No, whatever you want to say, you can say it in front of her.\n\nDean: Okay.\n\nDean turns to look at them both straight on.\n\nDean: Um. Dad hasn't been home in a few days.\n\nSam: So he's working overtime on a Miller Time shift. He'll stumble back in sooner or later.\n\nDean ducks his head and looks back up.\n\nDean: Dad's on a hunting trip. And he hasn't been home in a few days.\n\nSam's expression doesn't change while he takes this in. Jess glances up at him.\n\nSam: Jess, excuse us. We have to go outside.\n\nOUTSIDE APARTMENT\n\nINT. STAIRWELL - NIGHT\n\nSam and Dean head downstairs. Sam has put on jeans and a hoodie.\n\nSam: I mean, come on. You can't just break in, middle of the night, and expect me to hit the road with you.\n\nDean: You're not hearing me, Sammy. Dad's missing. I need you to help me find him.\n\nSam: You remember the poltergeist in Amherst? Or the Devil's Gates in Clifton? He was missing then, too. He's always missing, and he's always fine.\n\nDean stops and turns around. Sam stops too.\n\nDean: Not for this long. Now are you gonna come with me or not?\n\nSam: I'm not.\n\nDean: Why not?\n\nSam: I swore I was done hunting. For good.\n\nDean: Come on. It wasn't easy, but it wasn't that bad.\n\nDean starts downstairs again. Sam follows.\n\nSam: Yeah? When I told Dad I was scared of the thing in my closet, he gave me a .45.\n\nDean stops at the door to the outside.\n\nDean: Well, what was he supposed to do?\n\nSam: I was nine years old! He was supposed to say, don't be afraid of the dark.\n\nDean: Don't be afraid of the dark? Are you kidding me? Of course you should be afraid of the dark. You know what's out there.\n\nSam: Yeah, I know, but still. The way we grew up, after Mom was k*ll, and Dad's obsession to find the thing that k*ll her.\n\nDean glances outside.\n\nSam: But we still haven't found the damn thing. So we k*ll everything we canfind.\n\nDean: We save a lot of people doing it, too.\n\nA pause.\n\nSam: You think Mom would have wanted this for us?\n\nDean rolls his eyes and slams the door open.\n\nEXT. PARKING LOT - NIGHT\n\nThere's a short flight of stairs from the door to the parking lot. Dean and Sam climb it.\n\nSam: The w*apon training, and melting the silver into b*ll*ts? Man, Dean, we were raised like warriors.\n\nThey cross the parking lot to the Impala from the prologue.\n\nDean: So what are you gonna do? You're just gonna live some normal, apple pie life? Is that it?\n\nSam: No. Not normal. Safe.\n\nDean: And that's why you ran away.\n\nDean looks away.\n\nSam: I was just going to college. It was Dad who said if I was gonna go I should stay gone. And that's what I'm doing.\n\nDean: Yeah, well, Dad's in real trouble right now. If he's not dead already. I can feel it.\n\nSam is silent.\n\nDean: I can't do this alone.\n\nSam: Yes you can.\n\nDean looks down.\n\nDean: Yeah, well, I don't want to.\n\nSam sighs and looks down, thinking, then up.\n\nSam: What was he hunting?\n\nDean opens the trunk of the Impala, then the spare-tire compartment. It's an arsenal. He props the compartment open with a g*n and digs through the clutter.\n\nDean: All right, let's see, where the hell did I put that thing?\n\nSam: So when Dad left, why didn't you go with him?\n\nDean: I was working my own gig. 
This, uh, voodoo thing, down in New Orleans.\n\nSam: Dad let you go on a hunting trip by yourself?\n\nDean looks over at Sam.\n\nDean: I'm twenty-six, dude.\n\nDean pulls some papers out of a folder.\n\nDean: All right, here we go. So Dad was checking out this two-lane blacktop just outside of Jericho, California. About a month ago, this guy.\n\nDean hands one of the papers to Sam.\n\nDean: They found his car, but he vanished. Completely MIA.\n\nThe paper is a printout of an article from the Jericho Herald, headlined 'Centennial Highway Disappearance' and dated Sept. 19th 2005; it has a man's picture, captioned 'Andrew Carey MISSING'. Sam reads it and glances up.\n\nSam: So maybe he was kidnapped.\n\nDean: Yeah. Well, here's another one in April.\n\nDean tosses down another Jericho Heraldarticle for each date he mentions.\n\nDean: Another one in December 'oh-four, 'oh-three, 'ninety-eight, 'ninety-two, ten of them over the past twenty years.\n\nDean takes the article back from Sam and picks up the rest of the stack, putting them back in the folder.\n\nDean: All men, all the Same five-mile stretch of road.\n\nDean pulls a bag out of another part of the arsenal.\n\nDean: It started happening more and more, so Dad went to go dig around. That was about three weeks ago. I hadn't heard from him since, which is bad enough.\n\nDean grabs a handheld tape recorder.\n\nDean: Then I get this voicemail yesterday.\n\nHe presses play. The recording is staticky and the signal was clearly breaking up.\n\nJohn: Dean...something big is starting to happen...I need to try and figure out what's going on. It may... Be very careful, Dean. We're all in danger.\n\nDean presses stop.\n\nSam: You know there's EVP on that?\n\nDean: Not bad, Sammy. Kinda like riding a bike, isn't it?\n\nSam shakes his head.\n\nDean: All right. I slowed the message down, I ran it through a gold wave, took out the hiss, and this is what I got.\n\nHe presses play again.\n\nWoman: I can never go home...\n\nDean presses stop.\n\nSam: Never go home.\n\nDean drops the recorder, puts down the g*n, stands straight, and shuts the trunk, then leans on it.\n\nDean: You know, in almost two years I've never bothered you, never asked you for a thing.\n\nSam looks away and sighs, then looks back.\n\nSam: All right. I'll go. I'll help you find him.\n\nDean nods.\n\nSam: But I have to get back first thing Monday. Just wait here.\n\nSam turns to go back to the apartment. He turns back when Dean speaks.\n\nDean: What's first thing Monday?\n\nSam: I have this...I have an interview.\n\nDean: What, a job interview? Skip it.\n\nSam: It's a law school interview, and it's my whole future on a plate.\n\nDean: Law school?\n\nDean smirks.\n\nSam: So we got a deal or not?\n\nDean says nothing.\n\nAPARTMENT\n\nINT. BEDROOM - NIGHT\n\nSam is packing a duffel bag. He pulls out a large hook-shaped knife and slides it inside. Jess comes into the room.\n\nJess: Wait, you're taking off?\n\nSam looks up.\n\nSam: Is this about your dad? Is he all right?\n\nSam: Yeah. You know, just a little family drama.\n\nSam goes over to the dresser and turns on the lamp atop it.\n\nJess: Your brother said he was on some kind of hunting trip.\n\nJess sits on the bed. Sam rummages in one of the drawers and comes out with a couple shirts, which go in the duffel.\n\nSam: Oh, yeah, he's just deer hunting up at the cabin, he's probably got Jim, Jack, and José along with him. I'm just going to go bring him back.\n\nJess: What about the interview?\n\nSam: I'll make the interview. 
This is only for a couple days.\n\nSam goes around the bed. Jess gets up and follows.\n\nJess: Sam, I mean, please.\n\nSam stops and turns.\n\nJess: Just stop for a second. You sure you're okay?\n\nSam laughs a little.\n\nSam: I'm fine.\n\nJess: It's just...you won't even talk about your family. And now you're taking off in the middle of the night to spend a weekend with them? And with Monday coming up, which is kind of a huge deal.\n\nSam: Hey. Everything's going to be okay. I will be back in time, I promise.\n\nHe kisses her on the cheek and leaves.\n\nJess: At least tell me where you're going.\n"
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-summscreen-bestval-100-genlen-10-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-summscreen-bestval-100-genlen-10-epochs
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the SummScreen dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0979
- Rouge1: 31.5373
- Rouge2: 6.6821
- Rougel: 18.6754
- Rougelsum: 27.4448
- Gen Len: 80.1927
## Model description
More information needed
## Intended uses & limitations
More information needed
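A minimal usage sketch is shown below. It assumes the standard `transformers` summarization pipeline; the checkpoint path is a placeholder, since this card does not state the full repository id.

```python
from transformers import pipeline

# Placeholder checkpoint path: substitute the repository id or local directory where this
# fine-tuned model ("bart-base-finetuned-summscreen-bestval-100-genlen-10-epochs") is stored.
summarizer = pipeline("summarization", model="path/to/bart-base-finetuned-summscreen-bestval-100-genlen-10-epochs")

transcript = "Jim: Hey.\n\nDwight: Hello. Jim? ..."  # a full episode transcript goes here
summary = summarizer(transcript, max_length=100, truncation=True)[0]["summary_text"]
print(summary)
```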
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.4849 | 0.99 | 3500 | 3.2071 | 28.6828 | 5.2634 | 17.218 | 25.487 | 94.059 |
| 3.2933 | 1.99 | 7000 | 3.1329 | 29.9774 | 5.7038 | 17.7705 | 26.2492 | 88.2358 |
| 3.1088 | 2.98 | 10500 | 3.1010 | 29.6903 | 5.6976 | 17.7468 | 25.9472 | 81.3129 |
| 2.9605 | 3.98 | 14000 | 3.0811 | 30.2088 | 6.1092 | 18.157 | 26.3051 | 77.8844 |
| 2.8778 | 4.97 | 17500 | 3.0747 | 30.6996 | 6.3038 | 18.4725 | 26.8669 | 81.6168 |
| 2.788 | 5.97 | 21000 | 3.0896 | 30.7478 | 6.4468 | 18.3755 | 26.8789 | 85.6395 |
| 2.7218 | 6.96 | 24500 | 3.0961 | 30.994 | 6.4407 | 18.4929 | 26.9802 | 79.1315 |
| 2.6753 | 7.96 | 28000 | 3.0892 | 31.336 | 6.6768 | 18.8122 | 27.389 | 83.2313 |
| 2.5753 | 8.95 | 31500 | 3.0960 | 31.3248 | 6.4093 | 18.6552 | 27.2087 | 80.1474 |
| 2.5918 | 9.95 | 35000 | 3.0979 | 31.5373 | 6.6821 | 18.6754 | 27.4448 | 80.1927 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Donghyun/L2_BERT
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
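As a rough illustration only: the saved artifact format of this custom implementation is not described here, so the file name, the policy interface, and the classic (pre-0.26) Gym API below are all assumptions.

```python
import gym
import torch

env = gym.make("CartPole-v1")

# Hypothetical artifact: assumes the repo stores a torch policy module (e.g. "model.pt")
# whose forward pass returns action probabilities, as in the course's Reinforce implementation.
policy = torch.load("model.pt")
policy.eval()

state = env.reset()  # classic Gym (<0.26) API: reset() returns only the observation
done, total_reward = False, 0.0
while not done:
    with torch.no_grad():
        probs = policy(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0))
    action = int(torch.argmax(probs, dim=-1))
    state, reward, done, _ = env.step(action)
    total_reward += reward
print(total_reward)
```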
|
Dongjae/mrc2reader
|
[
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- roc_auc
model-index:
- name: distilbert-base-uncased-reviews_multilabel_clf_v2
results: []
language:
- en
pipeline_tag: text-classification
---
# distilbert-base-uncased-reviews_multilabel_clf_v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased).
It achieves the following results on the evaluation set:
- Loss: 0.1519
- F1: 0.8697
- Roc Auc: 0.9107
- Accuracy: 0.5787
## Model description
This is a multilabel classification model that predicts whether different aspects of a product are mentioned in reviews.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Multilabel%20Classification/Review%20Sentiments/Sentiments%20-%20Multilabel%20clf.ipynb
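A minimal inference sketch is shown below. The repository id is an assumption (the card's model name under the author's namespace), and the aspect labels come from the checkpoint's `config.id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repository id; substitute the actual namespace/model name if it differs.
checkpoint = "DunnBC22/distilbert-base-uncased-reviews_multilabel_clf_v2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

text = "Battery life is great, but the screen scratches easily."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel classification: apply a sigmoid per label and threshold at 0.5.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```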
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/mohamedziauddin/mh-uhack-sentiments?select=train.csv
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.6847 | 1.0 | 305 | 0.2425 | 0.7619 | 0.8209 | 0.3492 |
| 0.296 | 2.0 | 610 | 0.1786 | 0.8447 | 0.8847 | 0.5197 |
| 0.296 | 3.0 | 915 | 0.1634 | 0.8511 | 0.8937 | 0.5361 |
| 0.1476 | 4.0 | 1220 | 0.1544 | 0.8626 | 0.8999 | 0.5623 |
| 0.0986 | 5.0 | 1525 | 0.1490 | 0.8624 | 0.8994 | 0.5639 |
| 0.0986 | 6.0 | 1830 | 0.1521 | 0.8653 | 0.9041 | 0.5787 |
| 0.0686 | 7.0 | 2135 | 0.1511 | 0.8676 | 0.9110 | 0.5656 |
| 0.0686 | 8.0 | 2440 | 0.1501 | 0.8687 | 0.9104 | 0.5869 |
| 0.0525 | 9.0 | 2745 | 0.1519 | 0.8685 | 0.9089 | 0.5754 |
| 0.0432 | 10.0 | 3050 | 0.1519 | 0.8697 | 0.9107 | 0.5787 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.12.1
|
Dongmin/testmodel
|
[
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
}
| 11 | null |
---
license: bigscience-openrail-m
---
How to use
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import textwrap

MAX_NEW_TOKENS = 300
model_name = "acul3/bloomz-3b-Instruction"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,
)

def generate_text(text):
    # Wrap the request in the "User:" / "Asisten:" instruction format.
    prompt = "User: " + text + "\n\nAsisten: "
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    # Generate up to MAX_NEW_TOKENS new tokens after the prompt.
    generated_ids = model.generate(input_ids, max_new_tokens=MAX_NEW_TOKENS, pad_token_id=tokenizer.eos_token_id, do_sample=True, top_p=0.95, temperature=0.5, penalty_alpha=0.6, top_k=4, repetition_penalty=1.03, num_return_sequences=1)
    # Keep only the assistant's reply and wrap it for display.
    result = textwrap.wrap(tokenizer.decode(generated_ids[0], skip_special_tokens=True), width=128)
    result[0] = result[0].split("Asisten:")[-1]
    return "\n".join(result)

print(generate_text("cara merebus telur"))
```
|
Doogie/Waynehills-KE-T5-doogie
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('SayhoKim/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Waynehillsdev/Waynehills-STT-doogie-server
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 61 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1761
- Accuracy: 0.6209
## Model description
More information needed
## Intended uses & limitations
More information needed
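A minimal inference sketch, with hedges: the repository id is a placeholder, and the clip is assumed to be 16 kHz mono audio as expected by wav2vec2.

```python
from transformers import pipeline

# Placeholder repository id; substitute the actual namespace/model name.
classifier = pipeline("audio-classification", model="path/to/wav2vec2-base-finetuned-ks")

# Expects a short 16 kHz mono clip, e.g. a spoken keyword such as "yes" or "stop".
predictions = classifier("sample_command.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```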
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2158 | 1.0 | 40 | 2.1761 | 0.6209 |
| 2.1251 | 2.0 | 80 | 2.1767 | 0.6209 |
| 2.1362 | 3.0 | 120 | 2.1850 | 0.6209 |
| 0.0 | 4.0 | 160 | nan | 0.0384 |
| 0.0 | 5.0 | 200 | nan | 0.0384 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Waynehillsdev/waynehills_sentimental_kor
|
[
"pytorch",
"electra",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"ElectraForSequenceClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 33 | null |
---
license: apache-2.0
---
# Streaming zipformer for sherpa-ncnn
The torchscript model is from
https://huggingface.co/pfluo/k2fsa-zipformer-bilingual-zh-en-t
Unlike https://huggingface.co/pfluo/k2fsa-zipformer-chinese-english-mixed, this model is much smaller and therefore faster to run.
The training code is from
https://github.com/k2-fsa/icefall/tree/master/egs/librispeech/ASR/pruned_transducer_stateless7_streaming
|
Doohae/p_encoder
|
[
"pytorch"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.19 +/- 0.91
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-4
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 44 | null |
The RSE-RoBERTa-base-STS model is trained with 2 relations:
1) entailment
2) duplicate_question
The RoBERTa-base model is used as initialization.
It is ideally suited for STS (semantic textual similarity) datasets.
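A generic sketch only: it assumes the checkpoint loads as a standard RoBERTa encoder and that sentence embeddings are mean-pooled, which may differ from the intended RSE relation-prompt usage; the checkpoint path is a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder checkpoint path; substitute the actual RSE-RoBERTa-base-STS repository.
checkpoint = "path/to/RSE-RoBERTa-base-STS"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

def embed(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden)
    return hidden.mean(dim=1).squeeze(0)             # mean-pooled sentence embedding

a = embed("A man is playing a guitar.")
b = embed("Someone is playing an instrument.")
print(float(torch.nn.functional.cosine_similarity(a, b, dim=0)))
```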
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-with-clean-valid
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 33 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### emicius Dreambooth model trained by raw-vitor with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
albert-base-v2
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4,785,283 | 2023-02-16T03:48:22Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1638
- F1: 0.8584
## Model description
More information needed
## Intended uses & limitations
More information needed
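A minimal token-classification sketch; the repository id is a placeholder, and since the model was fine-tuned on German and French NER data, sub-word predictions are grouped into entities below.

```python
from transformers import pipeline

# Placeholder repository id; substitute the actual namespace/model name.
ner = pipeline(
    "token-classification",
    model="path/to/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```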
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2938 | 1.0 | 715 | 0.1806 | 0.8238 |
| 0.1504 | 2.0 | 1430 | 0.1598 | 0.8469 |
| 0.0964 | 3.0 | 2145 | 0.1638 | 0.8584 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
albert-xlarge-v1
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 341 | 2023-02-16T03:54:26Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8340325557979527
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2723
- F1: 0.8340
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5909 | 1.0 | 191 | 0.3404 | 0.7891 |
| 0.2594 | 2.0 | 382 | 0.2919 | 0.8152 |
| 0.1752 | 3.0 | 573 | 0.2723 | 0.8340 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
albert-xlarge-v2
|
[
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,973 | 2023-02-16T03:54:31Z |
---
language:
- en
license:
- mit
library_name: "pytorch"
multilinguality:
- monolingual
pretty_name: perturber
datasets:
- panda
tags:
- counterfactual
- perturb
- fairness
- nlp
- demographic
- diverse
- gender
- non-binary
- race
- age
metrics:
- bleu
---
# The Perturber
The perturber is a seq2seq controlled generation model that rewrites text along a specified demographic axis and attribute.
The perturber takes in (i) a source text snippet, (ii) a word in the snippet referring to a demographic group, and (iii) a new target demographic attribute, and generates a perturbed snippet that refers to the target demographic attribute, while preserving overall meaning.
- **Repository:** https://github.com/facebookresearch/ResponsibleNLP/
- **Paper:** https://aclanthology.org/2022.emnlp-main.646/
- **Point of Contact:** [email protected], [email protected], [email protected], [email protected]
- **License:** MIT
## Model Description
The perturber is a finetuned BART model (Lewis et al., 2020) with 24 layers, 1024 hidden size, 406M parameters, and 16 attention heads. To train the perturber in the original paper, we finetune BART on PANDA using the ParlAI library.
This model release is separately trained using the HuggingFace transformers library, with the same parameters as the ParlAI model.
### Uses
The perturber is intended for use by fairness researchers and engineers working on demographic debiasing applications. The perturber is a controllable generation model that, given a word, a target demographic attribute, and an input text, outputs text in which the selected word and its associated references are rewritten to the target demographic attribute. Control variables and the input text are separated by a <PERT_SEP> token.
## Examples
Below we show some example inputs and outputs for the perturber rewriting text along different demographic axes and attributes.
Model inputs follow the format `[selected_word], [target_attribute] <PERT_SEP> [input_text]`, where `selected_word` is a word that contains demographic information, `target_attribute` is a demographic attribute such as "man" or "asian", and `input_text` is the text sequence to rewrite.
Currently the perturber supports text rewriting along three axes and several attributes:
- **gender:** `man`, `woman`, `non-binary`
- **race:** `black`, `white`, `asian`, `hispanic`, `native-american`, `pacific-islander`
- **age:** `child`, `young`, `middle-aged`, `senior`, `adult`
### Gender
_Input:_
`his, woman <PERT_SEP> Jack was passionate about rock climbing and his love for the sport was infectious to all men around him.`
_Output:_
`Jackie was passionate about rock climbing and her love for the sport was infectious to all men around her.`
<br/>
<br/>
_Input:_
`Alice, man <PERT_SEP> To her girlfriend Jen, Alice was a doting mother, loving girlfriend and talented actress.`
_Output:_
`To his girlfriend Jen, Alan was a doting father, loving partner and talented actor.`
<br/>
<br/>
_Input:_
`his, non-binary <PERT_SEP> Jack was passionate about rock climbing and his love for the sport was infectious to all men around him.`
_Output:_
`Jack was passionate about rock climbing and their love for the sport was infectious to all men around them.`
<br/>
<br/>
### Age
_Input:_
`child, senior <PERT_SEP> The young child is naive and his innocence must be protected at all costs.`
_Output:_
`The elderly person is naive and his innocence must be protected at all costs.`
<br/>
<br/>
### Race/Ethnicity
_Input:_
`Asian, black <PERT_SEP> The Asian students association often hosted anime nights and boba events on campus.`
_Output:_
`The Black students association often hosted anime nights and boba events on campus.`
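A minimal generation sketch with the `transformers` seq2seq API is shown below. The checkpoint id is a placeholder (this card does not state the exact repository path), and the control prefix simply follows the examples above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint id; substitute the actual perturber repository.
checkpoint = "path/to/perturber"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Control variables (selected word, target attribute) and the input text,
# separated by the <PERT_SEP> token described above.
prompt = ("his, woman <PERT_SEP> Jack was passionate about rock climbing "
          "and his love for the sport was infectious to all men around him.")
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```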
## Bias, Risks & Limitations
Limitations of the perturber include inherent biases in demographic categorization, data sourcing and crowdsourced data collection, and the ambiguous nature of fairness and perturbability. Ambiguous instances include names, where annotators may have different preconceptions about whether they contain ethnic information. Our crowdworkers and researchers are primarily English speaking and US-based, which may introduce additional cultural biases.
For an in-depth discussion of bias, risks and limitations, see the Limitations section of [our paper](https://aclanthology.org/2022.emnlp-main.646/).
## Citation
```
@inproceedings{qian-etal-2022-perturbation,
title = "Perturbation Augmentation for Fairer {NLP}",
author = "Qian, Rebecca and
Ross, Candace and
Fernandes, Jude and
Smith, Eric Michael and
Kiela, Douwe and
Williams, Adina",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.646",
pages = "9496--9521",
abstract = "Unwanted and often harmful social biases are becoming ever more salient in NLP research, affecting both models and datasets. In this work, we ask whether training on demographically perturbed data leads to fairer language models. We collect a large dataset of human annotated text perturbations and train a neural perturbation model, which we show outperforms heuristic alternatives. We find that (i) language models (LMs) pre-trained on demographically perturbed corpora are typically more fair, and (ii) LMs finetuned on perturbed GLUE datasets exhibit less demographic bias on downstream tasks, and (iii) fairness improvements do not come at the expense of performance on downstream tasks. Lastly, we discuss outstanding questions about how best to evaluate the (un)fairness of large language models. We hope that this exploration of neural demographic perturbation will help drive more improvement towards fairer NLP.",
}
```
### Model Card Contact
Thanks to [@Rebecca-Qian](https://github.com/Rebecca-Qian) for adding this model.
|
albert-xxlarge-v2
|
[
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 42,640 | 2023-02-16T03:57:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8322368421052632
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2369
- F1: 0.8322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
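For readers reproducing this setup, the list above corresponds roughly to a `TrainingArguments` configuration like the sketch below (the output directory is a placeholder, and anything not listed above is left at its library default, including the Adam betas/epsilon):
```python
from transformers import TrainingArguments

# Sketch mirroring the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-it",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```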
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8113 | 1.0 | 70 | 0.3088 | 0.7546 |
| 0.259 | 2.0 | 140 | 0.2541 | 0.8155 |
| 0.1791 | 3.0 | 210 | 0.2369 | 0.8322 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bert-base-cased-finetuned-mrpc
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11,644 | 2023-02-16T04:00:37Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6991051454138703
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3926
- F1: 0.6991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1415 | 1.0 | 50 | 0.5404 | 0.5163 |
| 0.5045 | 2.0 | 100 | 0.4347 | 0.6498 |
| 0.371 | 3.0 | 150 | 0.3926 | 0.6991 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bert-base-cased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8,621,271 | 2023-02-16T04:03:18Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1733
- F1: 0.8522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3023 | 1.0 | 835 | 0.1913 | 0.8049 |
| 0.1568 | 2.0 | 1670 | 0.1705 | 0.8422 |
| 0.1017 | 3.0 | 2505 | 0.1733 | 0.8522 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
bert-base-german-dbmdz-cased
|
[
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,814 | 2023-02-16T04:11:21Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- gjbooth2/autotrain-data-glenn_epa_second_pooled_25
co2_eq_emissions:
emissions: 0.02021601897058404
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 3519195196
- CO2 Emissions (in grams): 0.0202
## Validation Metrics
- Loss: 1.733
- Accuracy: 0.534
- Macro F1: 0.343
- Micro F1: 0.534
- Weighted F1: 0.473
- Macro Precision: 0.371
- Micro Precision: 0.534
- Weighted Precision: 0.477
- Macro Recall: 0.375
- Micro Recall: 0.534
- Weighted Recall: 0.534
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/gjbooth2/autotrain-glenn_epa_second_pooled_25-3519195196
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("gjbooth2/autotrain-glenn_epa_second_pooled_25-3519195196", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("gjbooth2/autotrain-glenn_epa_second_pooled_25-3519195196", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
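The snippet above stops at raw logits; a short continuation for turning them into a predicted label via the config's `id2label` mapping:
```python
import torch

# Continues from the snippet above: map logits to a class name and score.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs.max()))
```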
|
bert-base-german-dbmdz-uncased
|
[
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 68,305 | 2023-02-16T04:11:25Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: KLUE-BERT-BASE-NER-kluedata
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: Precision
type: precision
value: 0.7925285792052259
- name: Recall
type: recall
value: 0.8169320333871081
- name: F1
type: f1
value: 0.8045452975512036
- name: Accuracy
type: accuracy
value: 0.9598270318217097
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLUE-BERT-BASE-NER-kluedata
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2098
- Precision: 0.7925
- Recall: 0.8169
- F1: 0.8045
- Accuracy: 0.9598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 656
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2252 | 1.0 | 329 | 0.2217 | 0.5880 | 0.6798 | 0.6306 | 0.9262 |
| 0.1414 | 2.0 | 658 | 0.1665 | 0.7082 | 0.7468 | 0.7270 | 0.9476 |
| 0.0993 | 3.0 | 987 | 0.1469 | 0.7405 | 0.7873 | 0.7632 | 0.9542 |
| 0.0617 | 4.0 | 1316 | 0.1522 | 0.7534 | 0.8149 | 0.7830 | 0.9556 |
| 0.0448 | 5.0 | 1645 | 0.1630 | 0.7804 | 0.8042 | 0.7922 | 0.9585 |
| 0.0321 | 6.0 | 1974 | 0.1765 | 0.7811 | 0.8173 | 0.7988 | 0.9586 |
| 0.0227 | 7.0 | 2303 | 0.1810 | 0.7871 | 0.8136 | 0.8001 | 0.9594 |
| 0.017 | 8.0 | 2632 | 0.1929 | 0.7895 | 0.8176 | 0.8033 | 0.9603 |
| 0.0147 | 9.0 | 2961 | 0.1983 | 0.7956 | 0.8196 | 0.8074 | 0.9601 |
| 0.0114 | 10.0 | 3290 | 0.2098 | 0.7925 | 0.8169 | 0.8045 | 0.9598 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
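A minimal inference sketch for this checkpoint (the repo id below is a placeholder for wherever the model is published):
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual location of this checkpoint.
ner = pipeline("token-classification", model="<user>/KLUE-BERT-BASE-NER-kluedata", aggregation_strategy="simple")
print(ner("이순신은 조선 중기의 무신이다."))
```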
|
bert-base-multilingual-uncased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 328,585 | 2023-02-16T04:16:31Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/4445/corneos-black-cat-dva-embedding
|
bert-base-uncased
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 59,663,489 | 2023-02-16T04:17:28Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/4186/tifa-lockharts-classic-outfit-by-corneo
|
bert-large-cased-whole-word-masking-finetuned-squad
|
[
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8,214 | 2023-02-16T04:18:24Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/4888/selen-tatsuki-by-justneutral
|
bert-large-cased-whole-word-masking
|
[
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2,316 | 2023-02-16T04:19:37Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/4661/corneos-aerith-ff7-embedding
|
bert-large-cased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 388,769 | 2023-02-16T04:20:23Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5323/chika-fujiwara-ti
|
bert-large-uncased-whole-word-masking
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 76,685 | 2023-02-16T04:21:01Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5417/yor-forger-innocent-housewife-version-ti-embedding-by-corneo
|
bert-large-uncased
|
[
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 1,058,496 | 2023-02-16T04:21:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5416/yor-forger-thorn-princess-version-ti-embedding-by-corneo
|
ctrl
|
[
"pytorch",
"tf",
"ctrl",
"en",
"arxiv:1909.05858",
"arxiv:1910.09700",
"transformers",
"license:bsd-3-clause",
"has_space"
] | null |
{
"architectures": null,
"model_type": "ctrl",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17,007 | 2023-02-16T04:22:42Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/5895/hayase-yuuka
|
distilbert-base-cased
|
[
"pytorch",
"tf",
"onnx",
"distilbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"has_space"
] | null |
{
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 574,859 | 2023-02-16T04:23:54Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/8916/shadowverse-piercye-embed
|
distilbert-base-german-cased
|
[
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"de",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 43,667 | 2023-02-16T04:26:27Z |
---
library_name: diffusers
base_model: runwayml/stable-diffusion-v1-5
pipeline_tag: text-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
distilbert-base-multilingual-cased
|
[
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8,339,633 | 2023-02-16T04:26:44Z |
---
license: apache-2.0
datasets:
- noahshinn024/ts-code2td
language:
- en
pipeline_tag: translation
---
|
AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2
|
[
"pytorch",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers"
] |
sentence-similarity
|
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 13 | 2023-02-16T09:07:17Z |
RSE-RoBERTa-large-Transfer is trained with 2 relations:
1) entailment
2) paraphrase
RoBERTa-large is used as the initialization.
It is intended primarily for transfer datasets (downstream tasks).
|
AKulk/wav2vec2-base-timit-demo-colab
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-02-16T09:30:39Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- davanstrien/autotrain-data-dataset-mentions-160223
co2_eq_emissions:
emissions: 0.12753465619151655
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 3522695252
- CO2 Emissions (in grams): 0.1275
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-dataset-mentions-160223-3522695252
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-dataset-mentions-160223-3522695252", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-dataset-mentions-160223-3522695252", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Aakansha/hs
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-02-16T10:59:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned_emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.933
- name: F1
type: f1
value: 0.9332978051607186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned_emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1634
- Accuracy: 0.933
- F1: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.1807 | 0.9295 | 0.9291 |
| No log | 2.0 | 500 | 0.1634 | 0.933 | 0.9333 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
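Once published, the checkpoint can be smoke-tested with a text-classification pipeline (the repo id below is a placeholder):
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual location of this checkpoint.
classifier = pipeline("text-classification", model="<user>/distilbert-base-uncased-finetuned_emotion")
print(classifier("I can't stop smiling today!"))
```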
|
AbdulmalikAdeyemo/wav2vec2-large-xls-r-300m-hausa
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### fine_tuned_50 Dreambooth model trained by sid229 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AdapterHub/bert-base-uncased-pf-hellaswag
|
[
"bert",
"en",
"dataset:hellaswag",
"arxiv:2104.08247",
"adapter-transformers",
"adapterhub:comsense/hellaswag"
] | null |
{
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
A complete tutorial on training your first agent with ML-Agents and publishing it to the Hub is available in the documentation linked above.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Write your model id: michal512/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
AiPorter/DialoGPT-small-Back_to_the_future
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Akashpb13/Kabyle_xlsr
|
[
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"kab",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sw",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.97 +/- 1.06
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename are placeholders for this model's actual files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub("<user>/<repo>", "a2c-PandaReachDense-v2.zip")  # placeholder repo id and filename
model = A2C.load(checkpoint)
```
|
Aleksandar/bert-srb-ner-setimes
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.13 +/- 9.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename are placeholders for this model's actual files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("<user>/<repo>", "ppo-LunarLander-v2.zip")  # placeholder repo id and filename
model = PPO.load(checkpoint)
```
|
Aleksandar/bert-srb-ner
|
[
"pytorch",
"bert",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
library_name: rl-algo-impls
tags:
- CarRacing-v0
- ppo
- deep-reinforcement-learning
- reinforcement-learning
model-index:
- name: ppo
results:
- metrics:
- type: mean_reward
value: 777.12 +/- 224.5
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo.
All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/v4wd7cp5.
## Training Results
This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [e47a44c](https://github.com/sgoodfriend/rl-algo-impls/tree/e47a44c4d891f48885af0b1605b30d19fc67b5af). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std).
| algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url |
|:-------|:-------------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------|
| ppo | CarRacing-v0 | 1 | 777.12 | 224.504 | 16 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/ojq3cif0) |
| ppo | CarRacing-v0 | 2 | 613.084 | 175.573 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/7wtzr60u) |
| ppo | CarRacing-v0 | 3 | 668.535 | 181.695 | 16 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/4l0zprbu) |
### Prerequisites: Weights & Biases (WandB)
Training and benchmarking assume you have a Weights & Biases project to upload runs to.
By default, training goes to an rl-algo-impls project while benchmarks go to
rl-algo-impls-benchmarks. During training and benchmarking runs, videos of the best
models and the model weights are uploaded to WandB.
Before doing anything below, you'll need to create a wandb account and run `wandb
login`.
## Usage
/sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls
Note: While the model state dictionary and hyperparameters are saved, the latest
implementation could be different enough that similar results cannot be reproduced.
You might need to check out the commit the agent was trained on:
[e47a44c](https://github.com/sgoodfriend/rl-algo-impls/tree/e47a44c4d891f48885af0b1605b30d19fc67b5af).
```
# Downloads the model, sets hyperparameters, and runs agent for 3 episodes
python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/ojq3cif0
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb)
notebook.
## Training
If you want the highest chance of reproducing these results, you'll want to check out the
commit the agent was trained on: [e47a44c](https://github.com/sgoodfriend/rl-algo-impls/tree/e47a44c4d891f48885af0b1605b30d19fc67b5af). While
training is deterministic, different hardware will give different results.
```
python train.py --algo ppo --env CarRacing-v0 --seed 1
```
Setup hasn't been completely worked out yet, so you might be best served by using Google
Colab starting from the
[colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb)
notebook.
## Benchmarking (with Lambda Labs instance)
This and other models from https://api.wandb.ai/links/sgoodfriend/v4wd7cp5 were generated by running a script on a Lambda
Labs instance. In a Lambda Labs instance terminal:
```
git clone [email protected]:sgoodfriend/rl-algo-impls.git
cd rl-algo-impls
bash ./lambda_labs/setup.sh
wandb login
bash ./lambda_labs/benchmark.sh
```
### Alternative: Google Colab Pro+
As an alternative,
[colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb),
can be used. However, this requires a Google Colab Pro+ subscription and running across
4 separate instances because otherwise running all jobs will exceed the 24-hour limit.
## Hyperparameters
This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very
close and has some additional data:
```
algo: ppo
algo_hyperparams:
batch_size: 128
clip_range: 0.2
ent_coef: 0
gae_lambda: 0.95
gamma: 0.99
learning_rate: 0.0001
learning_rate_decay: linear
max_grad_norm: 0.5
n_epochs: 10
n_steps: 512
sde_sample_freq: 4
vf_coef: 0.5
env: impala-CarRacing-v0
env_hyperparams:
frame_stack: 4
n_envs: 8
env_id: CarRacing-v0
n_timesteps: 4000000
policy_hyperparams:
activation_fn: relu
cnn_feature_dim: 256
cnn_layers_init_orthogonal: false
cnn_style: impala
hidden_sizes: []
init_layers_orthogonal: true
log_std_init: -2
share_features_extractor: false
use_sde: true
seed: 1
use_deterministic_algorithms: true
wandb_entity: null
wandb_project_name: rl-algo-impls-benchmarks
wandb_tags:
- benchmark_e47a44c
- host_129-146-2-230
```
|
Aleksandar/distilbert-srb-base-cased-oscar
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 4 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
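Once published, the checkpoint can be queried with a question-answering pipeline (the repo id below is a placeholder):
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual location of this checkpoint.
qa = pipeline("question-answering", model="<user>/bert-finetuned-squad")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="bert-finetuned-squad is a fine-tuned version of bert-base-cased trained on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```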
|
Aleksandar/distilbert-srb-ner-setimes
|
[
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | 2023-02-16T19:46:15Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 234.86 +/- 19.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename are placeholders for this model's actual files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("<user>/<repo>", "ppo-LunarLander-v2.zip")  # placeholder repo id and filename
model = PPO.load(checkpoint)
```
|
Aleksandar/distilbert-srb-ner
|
[
"pytorch",
"distilbert",
"token-classification",
"sr",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -140.27 +/- 104.61
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
Aleksandar/electra-srb-ner-setimes-lr
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: creativeml-openrail-m
---

|
Aleksandar/electra-srb-ner
|
[
"pytorch",
"safetensors",
"electra",
"token-classification",
"dataset:wikiann",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"ElectraForTokenClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Parfait Mix
Another sweet anime mix for you and me.
# *(Also contains NSFW!)*
## Examples
<img src="https://huggingface.co/sleepotimer/ParfaitMix/resolve/main/example-1.png" width="768px">
```
1girl, hoodie, white hair, pink eyes, blush, from above, winter, snow, smile, night, arms behind back, starry night, trousers, grin
Negative prompt: (low quality, worst quality:1.4)
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2778728636, Size: 512x768, Model hash: 37ef0a3263, Model: ParfaitMix-fp16
```
<img src="https://huggingface.co/sleepotimer/ParfaitMix/resolve/main/example-2.png">
```
landscape, mountains, sunset, sky, clouds, valley, river, trees, nature, village, snow
Negative prompt: (low quality, worst quality:1.4)
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 4161234414, Size: 768x512, Model hash: 37ef0a3263, Model: ParfaitMix-fp16
```
|
Aleksandar/electra-srb-oscar
|
[
"pytorch",
"electra",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] |
fill-mask
|
{
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
language:
- ar
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper_small_arabic_no_diacs_v1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 19.72417169903705
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_small_arabic_no_diacs_v1
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2283
- Wer: 19.7242
## Model description
More information needed
## Intended uses & limitations
More information needed
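As an illustration of intended use (a minimal sketch, not from the original card; the repo id below is a placeholder), the checkpoint can be run for Arabic transcription with the `automatic-speech-recognition` pipeline:
```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper_small_arabic_no_diacs_v1",
    chunk_length_s=30,
)

# The pipeline loads the file and resamples it to the 16 kHz rate Whisper expects.
print(asr("sample_arabic.wav")["text"])
```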
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1617 | 0.39 | 1000 | 0.2592 | 22.7599 |
| 0.137 | 0.78 | 2000 | 0.2336 | 20.5925 |
| 0.0818 | 1.17 | 3000 | 0.2283 | 19.7242 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Aleksandar1932/distilgpt2-rock
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
|
Aleksandar1932/gpt2-rock-124439808
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 11 | null |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.39 +/- 16.12
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Aleksandar1932/gpt2-spanish-classics
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 9 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.919
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5535
- Accuracy: 0.919
## Model description
More information needed
## Intended uses & limitations
More information needed
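As an illustration of intended use (a minimal sketch; the repo id below is a placeholder, not the actual Hub path of this checkpoint), inference works with the `image-classification` pipeline:
```python
from transformers import pipeline
from PIL import Image

# Placeholder repo id: replace with the actual Hub path of this checkpoint.
classifier = pipeline("image-classification", model="your-username/my_awesome_food_model")

image = Image.open("beignets.jpg")
for pred in classifier(image, top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```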
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6646 | 0.99 | 62 | 2.4866 | 0.845 |
| 1.8188 | 1.99 | 124 | 1.7292 | 0.898 |
| 1.5637 | 2.99 | 186 | 1.5535 | 0.919 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Aleksandra/distilbert-base-uncased-finetuned-squad
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | 2023-02-16T20:33:46Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 779.89 +/- 71.24
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
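A minimal sketch of that usage, assuming the checkpoint is on the Hub (the repo id and filename below are placeholders; AntBulletEnv-v0 is registered by the `pybullet_envs` package, and a policy trained with observation normalization would also need its saved VecNormalize statistics):
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0 with gym)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename: replace with the actual values for this model.
checkpoint = load_from_hub(repo_id="user/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# Roll out one episode with the trained policy.
env = gym.make("AntBulletEnv-v0")
obs = env.reset()
done, episode_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_reward += reward
print(f"episode reward: {episode_reward:.2f}")
```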
|
AlekseyKorshuk/comedy-scripts
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 20 | null |
---
license: cc-by-nc-3.0
datasets:
- gsdf/EasyNegative
language:
- en
metrics:
- character
pipeline_tag: text-generation
tags:
- art
---
|
AlekseyKorshuk/horror-scripts
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 19 | null |
---
license: apache-2.0
---
# Model card for CLAP
Model card for CLAP: Contrastive Language-Audio Pretraining

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
The abstract of the paper states that:
> Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.
# Usage
You can use this model for zero-shot audio classification or for extracting audio and/or textual features.
# Uses
## Perform zero-shot audio classification
### Using `pipeline`
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]
audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-fused")
output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
```
## Run the model:
You can also get the audio and text embeddings using `ClapModel`
### Run the model on CPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
```
### Run the model on GPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/clap-htsat-fused").to(0)
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
```
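You can obtain text embeddings in the same way through the processor's tokenizer side; a minimal sketch along the lines of the snippets above:
```python
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-fused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

# The processor tokenizes free-form text; get_text_features returns the projected text embeddings.
inputs = processor(text=["Sound of a dog", "Sound of a vacuum cleaner"], return_tensors="pt", padding=True)
text_embed = model.get_text_features(**inputs)
```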
# Citation
If you are using this model for your work, please consider citing the original paper:
```
@misc{https://doi.org/10.48550/arxiv.2211.06687,
doi = {10.48550/ARXIV.2211.06687},
url = {https://arxiv.org/abs/2211.06687},
author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
AlekseyKulnevich/Pegasus-HeaderGeneration
|
[
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- nicoco404/autotrain-data-aita-post-classifier
co2_eq_emissions:
emissions: 13.203921634602377
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 3535895495
- CO2 Emissions (in grams): 13.2039
## Validation Metrics
- Loss: 0.761
- Accuracy: 0.763
- Macro F1: 0.124
- Micro F1: 0.763
- Weighted F1: 0.661
- Macro Precision: 0.109
- Micro Precision: 0.763
- Weighted Precision: 0.583
- Macro Recall: 0.143
- Micro Recall: 0.763
- Weighted Recall: 0.763
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/nicoco404/autotrain-aita-post-classifier-3535895495
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("nicoco404/autotrain-aita-post-classifier-3535895495", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("nicoco404/autotrain-aita-post-classifier-3535895495", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
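The `outputs` above hold raw logits; a small follow-up sketch (not part of the generated card) for turning them into a predicted label using the model's own label map:
```
import torch

# Convert logits to probabilities and look up the class name in the model config.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```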
|
AlekseyKulnevich/Pegasus-QuestionGeneration
|
[
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
|
{
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 17 | null |
---
license: apache-2.0
---
# Model card for CLAP
Model card for CLAP: Contrastive Language-Audio Pretraining

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
The abstract of the paper states that:
> Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.
# Usage
You can use this model for zero-shot audio classification or for extracting audio and/or textual features.
# Uses
## Perform zero-shot audio classification
### Using `pipeline`
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]
audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
```
## Run the model:
You can also get the audio and text embeddings using `ClapModel`
### Run the model on CPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
```
### Run the model on GPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/clap-htsat-unfused").to(0)
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
```
# Citation
If you are using this model for your work, please consider citing the original paper:
```
@misc{https://doi.org/10.48550/arxiv.2211.06687,
doi = {10.48550/ARXIV.2211.06687},
url = {https://arxiv.org/abs/2211.06687},
author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Alerosae/SocratesGPT-2
|
[
"pytorch",
"gpt2",
"feature-extraction",
"en",
"transformers",
"text-generation"
] |
text-generation
|
{
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 7 | null |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 177.50 +/- 66.17
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'chqmatteo/ppo-clearrl-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
Alessandro/model_name
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
Experimental 12B instruction and retrieval tuned model based on pythia-12B-deduped
--------------------------------------------------
# Model may create undesirable content, use at your own risk.
Finetuned on a variety of instruction datasets.
See: https://github.com/Rallio67/language-model-agents
# Thanks to LAION contributors and Stability.ai for help building datasets and compute resources.
# Prompt the model by typing:
User: followed by your question. The agent will reply as Chip.
Chip is loosely inspired by the fictional character Chip the robot
(see: https://en.wikipedia.org/wiki/Not_Quite_Human_(film)).
# For multiple rounds of dialogue:
After your question, add two new lines followed by Chip:
(\n\nChip:)
# Example code snippet to run on your own system:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

chip_map = {'gpt_neox.embed_in': 0,
            'gpt_neox.layers': 0,
            'gpt_neox.final_layer_norm': 0,
            'embed_out': 0}

name = "Rallio67/chip2_12B_retrieval_alpha"
model = AutoModelForCausalLM.from_pretrained(name, device_map=chip_map, torch_dtype=torch.float16, load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(name)

def generate_from_model(model, tokenizer, text):
    encoded_input = tokenizer(text, return_tensors='pt')
    output_sequences = model.generate(
        input_ids=encoded_input['input_ids'].cuda(0),
        do_sample=True,
        max_new_tokens=35,
        num_return_sequences=1,
        top_p=0.95,
        temperature=0.5,
        penalty_alpha=0.6,
        top_k=4,
        output_scores=True,
        return_dict_in_generate=True,
        repetition_penalty=1.03,
        eos_token_id=0,
        use_cache=True
    )
    # Strip the prompt tokens and decode only the newly generated text
    gen_sequences = output_sequences.sequences[:, encoded_input['input_ids'].shape[-1]:]
    for sequence in gen_sequences:
        new_line = tokenizer.decode(sequence, skip_special_tokens=True)
        print(new_line)

text = "User: Why is everyone so excited about AI chatbots these days?"
generate_from_model(model, tokenizer, text)

#Chip: Chatbots are a great way to automate simple tasks. They can help you save time and make your life easier. For example, they can
```
Sampling Settings:
-------
top_p=0.95,
temperature=0.5,
penalty_alpha=0.6,
top_k=4,
repetition_penalty=1.03
Example Generations:
---------------
>User: Who was the last man on the moon?
--[Apollo 17 commander Eugene Cernan is covered in lunar dust after the mission's second moonwalk. On December 14, 1972, Cernan took his final steps on the moon and no one has been back since.]--
Chip2: The last man on the moon was Eugene Cernan. He was the commander of the Apollo 17 mission.
Chip2: The last man on the moon was Eugene Cernan.
Chip2: The last man on the moon was Eugene Cernan. He was the commander of the Apollo 17 mission.
Chip2: The last man on the moon was Eugene Cernan. He was the commander of Apollo 17.
|
AlexDemon/Alex
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: mit
---
### Tim Sale on Stable Diffusion
This is the `<cat-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










|
AlexN/xls-r-300m-pt
|
[
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
|
{
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 15 | null |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1509.92 +/- 130.87
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
AlexaMerens/Owl
|
[
"license:cc"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.86 +/- 0.59
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
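A minimal sketch of that usage for this goal-conditioned task (the repo id and filename below are placeholders; PandaReachDense-v2 is registered by the `panda_gym` package, and a policy trained with observation normalization may also need its saved VecNormalize statistics):
```python
import gym
import panda_gym  # noqa: F401  (registers the Panda environments with gym)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename: replace with the actual values for this model.
checkpoint = load_from_hub(repo_id="user/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(100):
    # Observations are dicts (observation / achieved_goal / desired_goal); the policy consumes them directly.
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```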
|
AlexaRyck/KEITH
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -57.43 +/- 61.21
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'chqmatteo/ppo-clearrl-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
Alexander-Learn/bert-finetuned-ner
|
[
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
|
{
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
license: odc-by
language:
- en
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned from GPT-2 to generate text in the style of the writings of W. E. Burghardt Du Bois.
# Model Details
## Model Description
The model is designed to be fine-tuned with writings from historical Black writers who wrote on freedom and emancipation. This first version is GPT-2
fine-tuned on the writings of W. E. Burghardt Du Bois.
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
https://www.gutenberg.org/files/15210/15210-h/15210-h.htm
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The model can be used as a resource for the study of Black writers on freedom and emancipation.
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
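In lieu of the missing snippet, a minimal sketch of loading a GPT-2 checkpoint like this one for generation (the repo id and the prompt below are placeholders, not taken from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace with the actual Hub path of this fine-tuned checkpoint.
name = "your-username/dubois-gpt2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "The problem of the twentieth century is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.95, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```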
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The data used in training consists of the writings of W. E. Burghardt Du Bois. Darkwater, obtained from Project Gutenberg, was used. Specifically, the chapters used are listed below:
THE SHADOW OF the YEAR, Litany at Atlanta, THE SOULS OF WHITE FOLK, The Riddle of the Sphinx, THE HANDS OF ETHIOPIA, The Princess of the Hither Isles,
OF WORK AND WEALTH, Second Coming, THE SERVANT IN THE HOUSE, Jesus Christ in Texas, OF THE RULING OF MEN, The Call and THE DAMNATION OF WOMEN. About 50,000 word tokens were used in training.
[More Information Needed]
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing [optional]
[More Information Needed]
### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- Num examples = 1005
- Num Epochs = 3
- Instantaneous batch size per device = 8
- Total train batch size (w. parallel, distributed & accumulation) = 8
- Gradient Accumulation steps = 1
- Total optimization steps = 378
- Number of trainable parameters = 124439808
### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
AlexeyYazev/my-awesome-model
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: fgmckee/poca-SoccerTwos100m
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Alfia/anekdotes
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
license: bsd-3-clause
language:
- en
metrics:
- accuracy : 99.38
pipeline_tag: token-classification
---
Language identification system using token-level classification
A Transformer-based language detection model trained to predict the language of text at the segment level. This helps us classify regions
of text into specific languages.
Using this system we can also compute the percentage of the text that belongs to each language present in it.
```
from transformers.models.roberta.modeling_roberta import RobertaForTokenClassification
from transformers import AutoTokenizer
import torch
### Labels
id2label = {0: 'ar',
1: 'bg',
2: 'de',
3: 'el',
4: 'en',
5: 'es',
6: 'fr',
7: 'hi',
8: 'it',
9: 'ja',
10: 'nl',
11: 'pl',
12: 'pt',
13: 'ru',
14: 'sw',
15: 'th',
16: 'tr',
17: 'ur',
18: 'vi',
19: 'zh'}
# load model
model = RobertaForTokenClassification.from_pretrained('krishnadn94/segmental_langid')
tokenizer = AutoTokenizer.from_pretrained('krishnadn94/segmental_langid')
sentence = "Vacaciones de Navidad en España. Where is my money?"
tokens = tokenizer(sentence, return_tensors='pt')
# Run inference
with torch.no_grad():
    output = model(**tokens)
pred_ids = torch.argmax(output.logits, axis=-1).cpu().numpy()
labels = [id2label[i] for i in pred_ids[0]]
# percentage of each language
lang_percentage = {item:labels.count(item)/len(labels) for item in list(set(labels))}
print(f'Languages: {lang_percentage}')
```
Examples:
```
sentence = "Vacaciones de Navidad en España. Where is my money?"
Prediction:
Languages: {'en': 0.42857142857142855, 'es': 0.5714285714285714}
```
```
sentence = "Nomes de pessoas que nascem no Brasil"
Prediction:
Languages: {'pt': 1.0}
```
```
sentence = "My name is Krishna. नव वर्ष २०२३ में आपका स्वागत है"
Prediction:
Languages: {'en': 0.26666666666666666, 'hi': 0.7333333333333333}
```
|
AlgoveraAI/dcgan
|
[
"pytorch",
"transformers"
] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
license: apache-2.0
datasets:
- allenai/scirepeval
language:
- en
---
# SPECTER 2.0
<!-- Provide a quick summary of what the model is/does. -->
SPECTER 2.0 is the successor to [SPECTER](https://huggingface.co/allenai/specter) and is capable of generating task-specific embeddings for scientific tasks when paired with [adapters](https://huggingface.co/models?search=allenai/specter-2_).
Given the combination of title and abstract of a scientific paper or a short textual query, the model can be used to generate effective embeddings to be used in downstream applications.
# Model Details
## Model Description
SPECTER 2.0 has been trained on over 6M triplets of scientific paper citations, which are available [here](https://huggingface.co/datasets/allenai/scirepeval/viewer/cite_prediction_new/evaluation).
After that, it is trained on all the [SciRepEval](https://huggingface.co/datasets/allenai/scirepeval) training tasks, with task-format-specific adapters.
Task Formats trained on:
- Classification
- Regression
- Proximity
- Adhoc Search
It builds on the work done in [SciRepEval: A Multi-Format Benchmark for Scientific Document Representations](https://api.semanticscholar.org/CorpusID:254018137) and we evaluate the trained model on this benchmark as well.
- **Developed by:** Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
- **Shared by :** Allen AI
- **Model type:** bert-base-uncased + adapters
- **License:** Apache 2.0
- **Finetuned from model:** [allenai/scibert](https://huggingface.co/allenai/scibert_scivocab_uncased).
## Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/allenai/SPECTER2_0](https://github.com/allenai/SPECTER2_0)
- **Paper:** [https://api.semanticscholar.org/CorpusID:254018137](https://api.semanticscholar.org/CorpusID:254018137)
- **Demo:** [Usage](https://github.com/allenai/SPECTER2_0/blob/main/README.md)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
|Model|Type|Name and HF link|
|--|--|--|
|Base|Transformer|[allenai/specter2](https://huggingface.co/allenai/specter2)|
|Classification|Adapter|[allenai/specter2_classification](https://huggingface.co/allenai/specter2_classification)|
|Regression|Adapter|[allenai/specter2_regression](https://huggingface.co/allenai/specter2_regression)|
|Retrieval|Adapter|[allenai/specter2_proximity](https://huggingface.co/allenai/specter2_proximity)|
|Adhoc Query|Adapter|[allenai/specter2_adhoc_query](https://huggingface.co/allenai/specter2_adhoc_query)|
```python
from transformers import AutoTokenizer, AutoModel
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('allenai/specter2')
#load base model
model = AutoModel.from_pretrained('allenai/specter2')
#load the adapter(s) as per the required task, provide an identifier for the adapter in load_as argument and activate it
model.load_adapter("allenai/specter2_adhoc_query", source="hf", load_as="adhoc_query", set_active=True)
papers = [{'title': 'BERT', 'abstract': 'We introduce a new language representation model called BERT'},
{'title': 'Attention is all you need', 'abstract': ' The dominant sequence transduction models are based on complex recurrent or convolutional neural networks'}]
# concatenate title and abstract
text_batch = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
# preprocess the input
inputs = tokenizer(text_batch, padding=True, truncation=True,
                   return_tensors="pt", return_token_type_ids=False, max_length=512)
output = model(**inputs)
# take the first token in the batch as the embedding
embeddings = output.last_hidden_state[:, 0, :]
```
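As a small follow-up (not from the original card), the resulting embeddings can be compared with a dot-product or cosine score, which is how the proximity and adhoc-query formats are typically consumed:
```python
import torch

# Pairwise similarity between the two paper embeddings from the snippet above.
scores = embeddings @ embeddings.T  # raw dot-product scores
cosine = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(scores, cosine)
```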
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
For evaluation and downstream usage, please refer to [https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md](https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md).
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The base model is trained on citation links between papers and the adapters are trained on 8 large scale tasks across the four formats.
All the data is a part of SciRepEval benchmark and is available [here](https://huggingface.co/datasets/allenai/scirepeval).
The citation links are triplets of the form
```json
{"query": {"title": ..., "abstract": ...}, "pos": {"title": ..., "abstract": ...}, "neg": {"title": ..., "abstract": ...}}
```
consisting of a query paper, a positive citation and a negative which can be from the same/different field of study as the query or citation of a citation.
## Training Procedure
Please refer to the [SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677).
### Training Hyperparameters
The model is trained in two stages using [SciRepEval](https://github.com/allenai/scirepeval/blob/main/training/TRAINING.md):
- Base Model: First a base model is trained on the above citation triplets.
``` batch size = 1024, max input length = 512, learning rate = 2e-5, epochs = 2, warmup steps = 10%, fp16```
- Adapters: Thereafter, task format specific adapters are trained on the SciRepEval training tasks, where 600K triplets are sampled from above and added to the training data as well.
``` batch size = 256, max input length = 512, learning rate = 1e-4, epochs = 6, warmup = 1000 steps, fp16```
# Evaluation
We evaluate the model on [SciRepEval](https://github.com/allenai/scirepeval), a large-scale evaluation benchmark for scientific embedding tasks, which has SciDocs as a subset.
We also evaluate and establish a new SoTA on [MDCR](https://github.com/zoranmedic/mdcr), a large scale citation recommendation benchmark.
|Model|SciRepEval In-Train|SciRepEval Out-of-Train|SciRepEval Avg|MDCR(MAP, Recall@5)|
|--|--|--|--|--|
|[BM-25](https://api.semanticscholar.org/CorpusID:252199740)|n/a|n/a|n/a|(33.7, 28.5)|
|[SPECTER](https://huggingface.co/allenai/specter)|54.7|57.4|68.0|(30.6, 25.5)|
|[SciNCL](https://huggingface.co/malteos/scincl)|55.6|57.8|69.0|(32.6, 27.3)|
|[SciRepEval-Adapters](https://huggingface.co/models?search=scirepeval)|61.9|59.0|70.9|(35.3, 29.6)|
|[SPECTER 2.0-base](https://huggingface.co/allenai/specter2)|56.3|58.0|69.2|(38.0, 32.4)|
|[SPECTER 2.0-Adapters](https://huggingface.co/models?search=allenai/specter-2)|**62.3**|**59.2**|**71.2**|**(38.4, 33.0)**|
Please cite the following works if you end up using SPECTER 2.0:
[SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677):
```bibtex
@inproceedings{specter2020cohan,
title={{SPECTER: Document-level Representation Learning using Citation-informed Transformers}},
author={Arman Cohan and Sergey Feldman and Iz Beltagy and Doug Downey and Daniel S. Weld},
booktitle={ACL},
year={2020}
}
```
[SciRepEval paper](https://api.semanticscholar.org/CorpusID:254018137)
```bibtex
@article{Singh2022SciRepEvalAM,
title={SciRepEval: A Multi-Format Benchmark for Scientific Document Representations},
author={Amanpreet Singh and Mike D'Arcy and Arman Cohan and Doug Downey and Sergey Feldman},
journal={ArXiv},
year={2022},
volume={abs/2211.13308}
}
```
|
AliPotter24/a
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.58 +/- 0.47
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Alicanke/Wyau
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- conversational
---
# Pondweed DialoGPT Model2
|
Alireza1044/albert-base-v2-mrpc
|
[
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 204 | null |
---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
DRAGON-RoBERTa is a BERT-base-sized dense retriever initialized from [RoBERTa](https://huggingface.co/roberta-base) and further trained on data augmented from the MS MARCO corpus, following the approach described in [How to Train Your DRAGON:
Diverse Augmentation Towards Generalizable Dense Retrieval](https://arxiv.org/abs/2302.07452).
<p align="center">
<img src="https://raw.githubusercontent.com/facebookresearch/dpr-scale/main/dragon/images/teaser.png" width="600">
</p>
The associated GitHub repository is available at https://github.com/facebookresearch/dpr-scale/tree/main/dragon. We use an asymmetric dual encoder, with two distinctly parameterized encoders. The following models are also available:
Model | Initialization | MARCO Dev | BEIR | Query Encoder Path | Context Encoder Path
|---|---|---|---|---|---
DRAGON+ | Shitao/RetroMAE| 39.0 | 47.4 | [facebook/dragon-plus-query-encoder](https://huggingface.co/facebook/dragon-plus-query-encoder) | [facebook/dragon-plus-context-encoder](https://huggingface.co/facebook/dragon-plus-context-encoder)
DRAGON-RoBERTa | RoBERTa-base | 39.4 | 47.2 | [facebook/dragon-roberta-query-encoder](https://huggingface.co/facebook/dragon-roberta-query-encoder) | [facebook/dragon-roberta-context-encoder](https://huggingface.co/facebook/dragon-roberta-context-encoder)
## Usage (HuggingFace Transformers)
The model is directly available in HuggingFace Transformers.
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('facebook/dragon-roberta-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/dragon-roberta-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/dragon-roberta-context-encoder')
# We use msmarco query and passages as an example
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
query_input = tokenizer(query, return_tensors='pt')
ctx_input = tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
# Compute embeddings: take the last-layer hidden state of the [CLS] token
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
# Compute similarity scores using dot product
score1 = query_emb @ ctx_emb[0] # 385.1422
score2 = query_emb @ ctx_emb[1] # 383.6051
```
|
Alireza1044/albert-base-v2-qqp
|
[
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 37 | null |
---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
DRAGON-RoBERTa is a BERT-base-sized dense retriever initialized from [RoBERTa](https://huggingface.co/roberta-base) and further trained on data augmented from the MS MARCO corpus, following the approach described in [How to Train Your DRAGON:
Diverse Augmentation Towards Generalizable Dense Retrieval](https://arxiv.org/abs/2302.07452).
<p align="center">
<img src="https://raw.githubusercontent.com/facebookresearch/dpr-scale/main/dragon/images/teaser.png" width="600">
</p>
The associated GitHub repository is available at https://github.com/facebookresearch/dpr-scale/tree/main/dragon. We use an asymmetric dual encoder, with two distinctly parameterized encoders. The following models are also available:
Model | Initialization | MARCO Dev | BEIR | Query Encoder Path | Context Encoder Path
|---|---|---|---|---|---
DRAGON+ | Shitao/RetroMAE| 39.0 | 47.4 | [facebook/dragon-plus-query-encoder](https://huggingface.co/facebook/dragon-plus-query-encoder) | [facebook/dragon-plus-context-encoder](https://huggingface.co/facebook/dragon-plus-context-encoder)
DRAGON-RoBERTa | RoBERTa-base | 39.4 | 47.2 | [facebook/dragon-roberta-query-encoder](https://huggingface.co/facebook/dragon-roberta-query-encoder) | [facebook/dragon-roberta-context-encoder](https://huggingface.co/facebook/dragon-roberta-context-encoder)
## Usage (HuggingFace Transformers)
The model is directly available in HuggingFace Transformers.
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('facebook/dragon-roberta-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/dragon-roberta-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/dragon-roberta-context-encoder')
# We use msmarco query and passages as an example
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
query_input = tokenizer(query, return_tensors='pt')
ctx_input = tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
# Compute embeddings: take the last-layer hidden state of the [CLS] token
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
# Compute similarity scores using dot product
score1 = query_emb @ ctx_emb[0] # 385.1422
score2 = query_emb @ ctx_emb[1] # 383.6051
```
|
Alireza1044/albert-base-v2-rte
|
[
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 30 | null |
---
language: en
tags:
- financial-sentiment-analysis
- sentiment-analysis
widget:
- text: growth is strong and we have plenty of liquidity
duplicated_from: yiyanghkust/finbert-tone
---
`FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice. It is trained on the following three financial communication corpora, with a total size of 4.9B tokens.
- Corporate Reports 10-K & 10-Q: 2.5B tokens
- Earnings Call Transcripts: 1.3B tokens
- Analyst Reports: 1.1B tokens
More technical details on `FinBERT`: [Click Link](https://github.com/yya518/FinBERT)
This released `finbert-tone` model is the `FinBERT` model fine-tuned on 10,000 manually annotated (positive, negative, neutral) sentences from analyst reports. It achieves superior performance on the financial tone analysis task. If you are simply interested in using `FinBERT` for financial tone analysis, give it a try.
If you use the model in your academic work, please cite the following paper:
Huang, Allen H., Hui Wang, and Yi Yang. "FinBERT: A Large Language Model for Extracting Information from Financial Text." *Contemporary Accounting Research* (2022).
# How to use
You can use this model with the Transformers pipeline for sentiment analysis.
```python
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline
finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-tone',num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-tone')
nlp = pipeline("sentiment-analysis", model=finbert, tokenizer=tokenizer)
sentences = ["there is a shortage of capital, and we need extra financing",
"growth is strong and we have plenty of liquidity",
"there are doubts about our finances",
"profits are flat"]
results = nlp(sentences)
print(results) #LABEL_0: neutral; LABEL_1: positive; LABEL_2: negative
```
|
Alireza1044/albert-base-v2-sst2
|
[
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] |
text-classification
|
{
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 52 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7545
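As a quick sanity check, the fine-tuned checkpoint can be queried with the question-answering pipeline. This is a minimal sketch; `"my_awesome_qa_model"` below is a placeholder for wherever this checkpoint was saved (a local path or Hub repo id), not a verified identifier.
```python
from transformers import pipeline

# "my_awesome_qa_model" is a placeholder path/repo id for this checkpoint.
qa = pipeline("question-answering", model="my_awesome_qa_model")
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```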
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3524 |
| 2.7533 | 2.0 | 500 | 1.8430 |
| 2.7533 | 3.0 | 750 | 1.7545 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cpu
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Alireza1044/dwight_bert_lm
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers"
] |
text-generation
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 14 | null |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-go_emotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-go_emotions
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1046
- Roc Auc: 0.8263
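Since ROC AUC is reported, the classification head is presumably multi-label, so predictions come from a per-label sigmoid rather than a softmax. The sketch below illustrates that, assuming the checkpoint is available at the placeholder path `bert-base-uncased-go_emotions` and loads with standard Transformers classes (training used IPUs, but inference should not require them).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder path for this checkpoint; adjust to the actual location or repo id.
ckpt = "bert-base-uncased-go_emotions"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer("I am so happy for you, congratulations!", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # multi-label: sigmoid per class
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```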
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 39
- total_train_batch_size: 2496
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cpu
- Datasets 2.10.1
- Tokenizers 0.12.1
|
AllwynJ/HarryBoy
|
[
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] |
conversational
|
{
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 12 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Yagorka/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ann2020/rubert-base-cased-finetuned-ner
|
[] | null |
{
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 0 | null |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_entertainment_expenses_out
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_textcat_entertainment_expenses_out` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `OTHER`, `5150 - Entertainment expenses` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 72.44 |
| `CATS_MICRO_P` | 92.95 |
| `CATS_MICRO_R` | 92.95 |
| `CATS_MICRO_F` | 92.95 |
| `CATS_MACRO_P` | 77.90 |
| `CATS_MACRO_R` | 69.07 |
| `CATS_MACRO_F` | 72.44 |
| `CATS_MACRO_AUC` | 91.08 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TEXTCAT_LOSS` | 554.75 |
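A minimal usage sketch, assuming the packaged pipeline wheel has been installed so `spacy.load` can resolve it by name; the example text and scores are illustrative only.
```python
import spacy

# Assumes the packaged wheel for this pipeline is installed in the environment.
nlp = spacy.load("en_textcat_entertainment_expenses_out")
doc = nlp("Theatre tickets and dinner for the client visit")
print(doc.cats)  # e.g. {'OTHER': 0.12, '5150 - Entertainment expenses': 0.88}
```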
|
AnonymousSub/AR_cline
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_transport_local_out
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_textcat_transport_local_out` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `textcat` |
| **Components** | `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `OTHER`, `5650 - Transport - local` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 82.63 |
| `CATS_MICRO_P` | 98.69 |
| `CATS_MICRO_R` | 98.69 |
| `CATS_MICRO_F` | 98.69 |
| `CATS_MACRO_P` | 87.11 |
| `CATS_MACRO_R` | 79.15 |
| `CATS_MACRO_F` | 82.63 |
| `CATS_MACRO_AUC` | 87.65 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TEXTCAT_LOSS` | 107.50 |
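A minimal usage sketch, assuming the packaged pipeline wheel has been installed so `spacy.load` can resolve it by name; the example text and scores are illustrative only.
```python
import spacy

# Assumes the packaged wheel for this pipeline is installed in the environment.
nlp = spacy.load("en_textcat_transport_local_out")
doc = nlp("Taxi fare from the airport to the downtown office")
print(doc.cats)  # e.g. {'OTHER': 0.10, '5650 - Transport - local': 0.90}
```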
|
AnonymousSub/hier_triplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | 2023-02-17T05:29:40Z |
---
tags:
- BattleZone-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BattleZone-v5
type: BattleZone-v5
metrics:
- type: mean_reward
value: 50700.00 +/- 15026.98
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BattleZone-v5**
This is a trained model of a PPO agent playing BattleZone-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id BattleZone-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id BattleZone-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'BattleZone-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/hier_triplet_epochs_1_shard_10
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- BankHeist-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BankHeist-v5
type: BankHeist-v5
metrics:
- type: mean_reward
value: 445.00 +/- 42.49
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BankHeist-v5**
This is a trained model of a PPO agent playing BankHeist-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id BankHeist-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id BankHeist-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'BankHeist-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 3 | 2023-02-17T05:32:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9487096774193549
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3445
- Accuracy: 0.9487
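For intent classification at inference time, the checkpoint can be wrapped in a text-classification pipeline. This is a minimal sketch; `"distilbert-base-uncased-distilled-clinc"` is used as a placeholder path/repo id for this checkpoint, not a verified identifier.
```python
from transformers import pipeline

# Placeholder model id/path for this distilled checkpoint.
classifier = pipeline("text-classification", model="distilbert-base-uncased-distilled-clinc")
print(classifier("Please transfer $100 from my checking to my savings account"))
```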
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4915 | 1.0 | 318 | 2.5863 | 0.7506 |
| 1.985 | 2.0 | 636 | 1.3027 | 0.8655 |
| 0.9995 | 3.0 | 954 | 0.6997 | 0.9116 |
| 0.5484 | 4.0 | 1272 | 0.4723 | 0.9374 |
| 0.364 | 5.0 | 1590 | 0.3997 | 0.9435 |
| 0.2855 | 6.0 | 1908 | 0.3724 | 0.9439 |
| 0.2475 | 7.0 | 2226 | 0.3573 | 0.9481 |
| 0.2267 | 8.0 | 2544 | 0.3517 | 0.9458 |
| 0.2173 | 9.0 | 2862 | 0.3480 | 0.9468 |
| 0.2112 | 10.0 | 3180 | 0.3445 | 0.9487 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 33 | null |
---
tags:
- Jamesbond-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Jamesbond-v5
type: Jamesbond-v5
metrics:
- type: mean_reward
value: 9370.00 +/- 3228.79
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Jamesbond-v5**
This is a trained model of a PPO agent playing Jamesbond-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id Jamesbond-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Jamesbond-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 8 | null |
---
tags:
- BeamRider-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRider-v5
type: BeamRider-v5
metrics:
- type: mean_reward
value: 38260.20 +/- 18615.74
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **BeamRider-v5**
This is a trained model of a PPO agent playing BeamRider-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id BeamRider-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BeamRider-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'BeamRider-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- Qbert-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Qbert-v5
type: Qbert-v5
metrics:
- type: mean_reward
value: 19622.50 +/- 2069.74
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Qbert-v5**
This is a trained model of a PPO agent playing Qbert-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id Qbert-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Qbert-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Qbert-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 28 | null |
---
tags:
- PrivateEye-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PrivateEye-v5
type: PrivateEye-v5
metrics:
- type: mean_reward
value: 100.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **PrivateEye-v5**
This is a trained model of a PPO agent playing PrivateEye-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id PrivateEye-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/PrivateEye-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/PrivateEye-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/PrivateEye-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id PrivateEye-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'PrivateEye-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"bert",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 32 | null |
---
tags:
- WizardOfWor-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: WizardOfWor-v5
type: WizardOfWor-v5
metrics:
- type: mean_reward
value: 19910.00 +/- 6704.84
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **WizardOfWor-v5**
This is a trained model of a PPO agent playing WizardOfWor-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id WizardOfWor-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id WizardOfWor-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'WizardOfWor-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1
|
[
"pytorch",
"bert",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 10 | null |
---
tags:
- Surround-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Surround-v5
type: Surround-v5
metrics:
- type: mean_reward
value: 6.00 +/- 1.55
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Surround-v5**
This is a trained model of a PPO agent playing Surround-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id Surround-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Surround-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Surround-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Surround-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa_copy
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
---
tags:
- StarGunner-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: StarGunner-v5
type: StarGunner-v5
metrics:
- type: mean_reward
value: 189480.00 +/- 21365.85
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **StarGunner-v5**
This is a trained model of a PPO agent playing StarGunner-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id StarGunner-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/StarGunner-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id StarGunner-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'StarGunner-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_roberta_hier_quadruplet_0.1_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- SpaceInvaders-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvaders-v5
type: SpaceInvaders-v5
metrics:
- type: mean_reward
value: 38654.50 +/- 19147.98
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **SpaceInvaders-v5**
This is a trained model of a PPO agent playing SpaceInvaders-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id SpaceInvaders-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
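The `-v5` Atari IDs used throughout this card are EnvPool environment IDs. A minimal sketch for building the vectorized environment on its own (the exact wrapper settings used during training are not reproduced here):
```python
import envpool

# Build a small vectorized SpaceInvaders-v5 environment with the classic gym-style API.
envs = envpool.make("SpaceInvaders-v5", env_type="gym", num_envs=4)
print(envs.observation_space, envs.action_space)
```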
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id SpaceInvaders-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'SpaceInvaders-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1
|
[
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] |
feature-extraction
|
{
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 6 | null |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 287.40 +/- 11.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
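For context, Reinforce is a Monte-Carlo policy-gradient method; the update it performs is the standard textbook estimator (shown here for reference, not extracted from this repository's code):

$$
\nabla_\theta J(\theta) \approx \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t
$$

where $G_t$ is the discounted return from timestep $t$.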
|
AnonymousSub/rule_based_roberta_hier_triplet_0.1_epochs_1_shard_1_squad2.0
|
[
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] |
question-answering
|
{
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 2 | null |
Makes the character pull the "ohogao" face ❤
The instance (trigger) token is "ohogao".
Raising the LoRA strength to around 1.1–1.4 probably gives better results.
This is my first LoRA, so the quality is a bit rough.
I will probably remake it at some point.
Addendum, March 25:
Added v3 of the LoCon version. Putting "ohogao", "blush", "open mouth", and "rolling eyes" in the prompt and tweaking from there may give good results. This is my first LoCon, so the quality is a bit rough. It does not play well with models other than AOM3. I have some ideas for improving composition and so on, so I will remake it eventually.
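A typical AUTOMATIC1111-style prompt would combine the LoRA tag with those tokens, e.g. `<lora:ohogao_v3:1.2>, ohogao, blush, open mouth, rolling eyes`. The file name `ohogao_v3` and the 1.2 weight are placeholders for illustration; use the file you downloaded and a strength in the 1.1–1.4 range suggested above.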

|
AnonymousSub/rule_based_roberta_twostage_quadruplet_epochs_1_shard_1_wikiqa
|
[
"pytorch",
"roberta",
"text-classification",
"transformers"
] |
text-classification
|
{
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
}
| 24 | null |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.94 +/- 0.28
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo ID and filename below are placeholders rather than the actual values for this checkpoint):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Repo ID and filename are placeholders; substitute the real values for this checkpoint.
checkpoint = load_from_hub(repo_id="<user>/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
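Once loaded, the policy can be evaluated with the usual SB3 helper. A short sketch, assuming `panda-gym` (which registers PandaReachDense-v2) and the classic `gym` API are installed:
```python
import gym
import panda_gym  # noqa: F401 -- registers the PandaReachDense-v2 environment
from stable_baselines3.common.evaluation import evaluate_policy

# `model` is the A2C policy loaded in the snippet above.
env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```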
|