The table below describes the columns of each record; every record that follows is one pipe-separated row in this column order, with the `readme` markdown printed underneath it.

| Column | Type | Values |
|---|---|---|
| repo_id | string | lengths 4–122 |
| author | string (nullable) | lengths 2–38 |
| model_type | string (nullable) | lengths 2–33 |
| files_per_repo | int64 | 2–39k |
| downloads_30d | int64 | 0–33.7M |
| library | string (nullable) | lengths 2–37 |
| likes | int64 | 0–4.87k |
| pipeline | string (nullable) | lengths 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string (nullable) | lengths 2–33 |
| languages | string (nullable) | lengths 2–1.63k |
| datasets | string (nullable) | lengths 2–2.58k |
| co2 | string (nullable) | lengths 6–258 |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–46 |
| prs_closed | int64 | 0–34 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201–598k |
| readme | string | lengths 0–598k |
lora-library/simbatheoglion | lora-library | null | 23 | 0 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora'] | false | true | true | 542 |
# LoRA DreamBooth - simbatheoglion
These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "a photo of simbatheog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
Test prompt: A photo of simbatheog in a bucket




|
Jin749/a2c-AntBulletEnv-v0 | Jin749 | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
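The usage section above is still the course TODO template. A hedged sketch of how such a checkpoint is typically loaded with `huggingface_sb3` (the zip filename follows the usual course naming and is an assumption; check the repo's files):
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0; assumes pybullet is installed)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub (filename is assumed).
checkpoint = load_from_hub(
    repo_id="Jin749/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
```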
|
eldraco/tqc-PandaReachDense-v2 | eldraco | null | 15 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 358 |
# **TQC** Agent playing **PandaReachDense-v2**
This is a trained model of a **TQC** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sheldon297/distilbert-base-uncased_trivia-qa | sheldon297 | distilbert | 15 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 926 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
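The card does not include an inference snippet; a hedged example of querying the checkpoint through the standard question-answering pipeline (the question and context strings are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sheldon297/distilbert-base-uncased_trivia-qa")

result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], result["score"])
```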
|
ybagoury/flan-t5-base-tldr_news | ybagoury | t5 | 12 | 25 | transformers | 0 | summarization | true | false | false | null | ['en'] | ['JulesBelveze/tldr_news'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['tldr'] | false | true | true | 3,428 |
# flan-t5-base-tldr_news
A fine-tuned T5 model for text summarization and title generation on TLDR (Too Long; Didn't Read) news articles.
## Introduction
flan-t5-base-tldr_news is a deep learning model that has been fine-tuned on a dataset of TLDR news articles. The model is specifically designed to perform the tasks of text summarization and title generation.
The T5 architecture is a transformer-based neural network architecture that has been used to achieve state-of-the-art results on a variety of NLP tasks. By fine-tuning the T5 architecture on a dataset of TLDR news articles, we aim to create a model that is capable of generating concise and informative summaries and titles for news articles.
## Task
The main goal of this model is to perform two NLP tasks: text summarization and title generation. Text summarization involves generating a shortened version of a longer text that retains the most important information and ideas. Title generation, on the other hand, involves generating a headline or title for a given text that accurately and concisely captures the main theme or idea of the text.
## Architecture
flan-t5-base-tldr_news uses the T5 architecture, which has been shown to be effective for a variety of NLP tasks. The T5 architecture consists of an encoder and a decoder, which are trained to generate a summary or title given an input text.
## Model Size
The model has 247,577,856 parameters, which represents the number of tunable weights in the model. The size of the model can impact the speed and memory requirements during training and inference, as well as the performance of the model on specific tasks.
## Training Data
The model was fine-tuned on a dataset of TLDR news articles. This dataset was selected because it contains a large number of news articles that have been condensed into short summaries, making it a good choice for training a model for text summarization. The training data was preprocessed to perform all types of standard preprocessing steps, including tokenization, to prepare the data for input into the model.
## Evaluation Metrics
To evaluate the performance of the model on the tasks of text summarization and title generation, we used the ROUGE metric. ROUGE, or Recall-Oriented Understudy for Gisting Evaluation, measures the overlap between the generated text and the reference text, which in this case is the original news article or its summary. The ROUGE metric is commonly used in NLP evaluations and provides a good way to measure the quality of the generated summaries and titles.
The following table shows the ROUGE scores for the model on the test set, which provides a good indication of its overall performance on the text summarization and title generation tasks:
| Metric | Score |
| ------ | ------|
| Rouge1 | 45.04 |
| Rouge2 | 25.24 |
| RougeL | 41.89 |
| RougeLsum | 41.84 |
It's important to note that these scores are just a snapshot of the model's performance on a specific test set, and the performance of the model may vary depending on the input text, the quality of the training data, and the specific application for which the model is being used.
## How to use via API
```python
from transformers import pipeline
summarizer = pipeline(
    'summarization',
    'ybagoury/flan-t5-base-tldr_news',
)
raw_text = """ your text here... """
results = summarizer(raw_text)
print(results)
```
|
mrigendraagrawal/q-FrozenLake-v1-4x4-noSlippery | mrigendraagrawal | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 421 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mrigendraagrawal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
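The snippet above assumes a `load_from_hub` helper and `gym` are already in scope, as in the Deep RL course notebooks. A minimal sketch of such a helper, assuming the pickled file stores a dict with an `env_id` entry as the snippet implies:
```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(
    repo_id="mrigendraagrawal/q-FrozenLake-v1-4x4-noSlippery",
    filename="q-learning.pkl",
)
env = gym.make(model["env_id"], is_slippery=False)  # non-slippery 4x4 variant
```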
|
hammuneer/my_awesome_billsum_model | hammuneer | t5 | 20 | 0 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,706 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4633
- Rouge1: 0.1168
- Rouge2: 0.0244
- Rougel: 0.0933
- Rougelsum: 0.0933
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 261 | 3.6283 | 0.0812 | 0.0153 | 0.0636 | 0.0637 | 19.0 |
| 4.0281 | 2.0 | 522 | 3.5141 | 0.1064 | 0.0206 | 0.0846 | 0.0845 | 19.0 |
| 4.0281 | 3.0 | 783 | 3.4741 | 0.1154 | 0.0242 | 0.092 | 0.092 | 19.0 |
| 3.7182 | 4.0 | 1044 | 3.4633 | 0.1168 | 0.0244 | 0.0933 | 0.0933 | 19.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
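The card lists no inference example; a hedged sketch using the summarization pipeline (the input text is a placeholder, and the length limits are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="hammuneer/my_awesome_billsum_model")

bill_text = "Your legislative text here..."  # placeholder input
print(summarizer(bill_text, max_length=30, min_length=5)[0]["summary_text"])
```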
|
tomekkorbak/naughty_davinci | tomekkorbak | gpt2 | 361 | 0 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['kejian/codeparrot-train-more-filter-3.3b-cleaned'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,321 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# naughty_davinci
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 2524
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 2969174016},
'generation': {'batch_size': 128,
'force_call_on': [503],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 4096,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [503],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>',
'should_insert_prefix': True},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9cdfa11a07b00726ddfdabb554de05b29d777db3'},
'num_additional_tokens': 2,
'path_or_name': 'kejian/grainy-pep8'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'naughty_davinci',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 10,
'num_tokens': 3300000000,
'output_dir': 'training_output',
'per_device_train_batch_size': 8,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 100,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 2969174016,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/2gnmbj7w
|
Svengali75/ProtogenX53Photorealism | Svengali75 | null | 3 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | false | true | 4,907 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
JamesEJarvis/ppo-Huggy | JamesEJarvis | null | 32 | 1 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy'] | false | true | true | 823 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: JamesEJarvis/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jmcneves/ppo-Huggy | jmcneves | null | 32 | 1 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy'] | false | true | true | 819 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: jmcneves/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
tomekkorbak/silly_nobel | tomekkorbak | gpt2 | 361 | 0 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['kejian/codeparrot-train-more-filter-3.3b-cleaned'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,314 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# silly_nobel
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 2524
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.1,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 2969174016},
'generation': {'batch_size': 128,
'force_call_on': [503],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'bad_words_ids': [[32769]],
'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 4096,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [503],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>',
'should_insert_prefix': True},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9cdfa11a07b00726ddfdabb554de05b29d777db3'},
'num_additional_tokens': 2,
'path_or_name': 'kejian/grainy-pep8'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'silly_nobel',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 10,
'num_tokens': 3300000000,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 100,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 2969174016,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/24pv07g1
|
mrigendraagrawal/taxi-RL | mrigendraagrawal | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 386 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="mrigendraagrawal/taxi-RL", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Dogar/microsoft-wavlm-fleurs-ur | Dogar | wavlm | 9 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | null | null | ['fleurs'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,836 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-wavlm-fleurs-ur
This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7294
- Wer: 0.4026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.911 | 0.35 | 100 | 3.7784 | 1.0 |
| 3.0833 | 0.71 | 200 | 3.0964 | 1.0 |
| 3.028 | 1.06 | 300 | 3.0377 | 1.0 |
| 2.5114 | 1.41 | 400 | 2.4941 | 0.9922 |
| 1.0583 | 1.77 | 500 | 1.0753 | 0.7579 |
| 0.715 | 2.12 | 600 | 0.8524 | 0.6410 |
| 0.6779 | 2.47 | 700 | 0.7711 | 0.6063 |
| 0.6123 | 2.83 | 800 | 0.7170 | 0.5706 |
| 0.8183 | 3.18 | 900 | 0.6897 | 0.5368 |
| 0.5195 | 3.53 | 1000 | 0.6586 | 0.5303 |
| 0.4774 | 3.89 | 1100 | 0.6306 | 0.5014 |
| 0.4242 | 4.24 | 1200 | 0.6138 | 0.4817 |
| 0.4549 | 4.59 | 1300 | 0.6027 | 0.4678 |
| 0.2576 | 4.95 | 1400 | 0.5878 | 0.4600 |
| 0.1578 | 5.3 | 1500 | 0.6144 | 0.4585 |
| 0.3556 | 5.65 | 1600 | 0.5884 | 0.4582 |
| 0.2427 | 6.01 | 1700 | 0.6071 | 0.4572 |
| 0.267 | 6.36 | 1800 | 0.6303 | 0.4514 |
| 0.2468 | 6.71 | 1900 | 0.6358 | 0.4495 |
| 0.159 | 7.07 | 2000 | 0.6242 | 0.4312 |
| 0.1527 | 7.42 | 2100 | 0.6372 | 0.4400 |
| 0.1401 | 7.77 | 2200 | 0.6252 | 0.4292 |
| 0.1211 | 8.13 | 2300 | 0.6358 | 0.4251 |
| 0.1022 | 8.48 | 2400 | 0.6529 | 0.4356 |
| 0.0818 | 8.83 | 2500 | 0.6773 | 0.4200 |
| 0.0918 | 9.19 | 2600 | 0.6879 | 0.4267 |
| 0.119 | 9.54 | 2700 | 0.6948 | 0.4254 |
| 0.1615 | 9.89 | 2800 | 0.6920 | 0.4259 |
| 0.0953 | 10.25 | 2900 | 0.7019 | 0.4218 |
| 0.1008 | 10.6 | 3000 | 0.6933 | 0.4133 |
| 0.0729 | 10.95 | 3100 | 0.6950 | 0.4164 |
| 0.0636 | 11.31 | 3200 | 0.7151 | 0.4121 |
| 0.0395 | 11.66 | 3300 | 0.7053 | 0.4098 |
| 0.0391 | 12.01 | 3400 | 0.7081 | 0.3984 |
| 0.0507 | 12.37 | 3500 | 0.7012 | 0.4111 |
| 0.0598 | 12.72 | 3600 | 0.7169 | 0.4035 |
| 0.0515 | 13.07 | 3700 | 0.7358 | 0.4102 |
| 0.0429 | 13.43 | 3800 | 0.7236 | 0.4013 |
| 0.0398 | 13.78 | 3900 | 0.7404 | 0.4026 |
| 0.0946 | 14.13 | 4000 | 0.7285 | 0.4029 |
| 0.0428 | 14.49 | 4100 | 0.7271 | 0.3991 |
| 0.0329 | 14.84 | 4200 | 0.7294 | 0.4026 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
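No usage snippet is given; a hedged sketch with the automatic-speech-recognition pipeline (assumes a local 16 kHz Urdu audio file and ffmpeg available for decoding):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Dogar/microsoft-wavlm-fleurs-ur")

# Path to a 16 kHz Urdu speech sample (placeholder).
print(asr("sample_ur.wav")["text"])
```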
|
apatidar0/distilbert-base-uncased-finetuned-imdb | apatidar0 | distilbert | 13 | 3 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | ['imdb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,454 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The IMDB dataset was used to build intuition for how to train an MLM model.
## Training procedure
You need to prepare your dataset in the exact format the author used when training the model.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
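Since this is a masked-language model, a hedged usage sketch with the fill-mask pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="apatidar0/distilbert-base-uncased-finetuned-imdb")

for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```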
|
seongwoon/distilbert-base-uncased-finetuned-labor_space_v3-finetuned-labor_space_v4 | seongwoon | distilbert | 8 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,041 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-labor_space_v3-finetuned-labor_space_v4
This model is a fine-tuned version of [seongwoon/distilbert-base-uncased-finetuned-labor_space_v3](https://huggingface.co/seongwoon/distilbert-base-uncased-finetuned-labor_space_v3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Tokenizers 0.13.2
|
AinhoaC/clasificador-muchocine | AinhoaC | electra | 10 | 0 | transformers | 0 | text-classification | true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['classification', 'generated_from_trainer'] | true | true | true | 1,367 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4463
- Accuracy: 0.4503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3448 | 0.3871 |
| 1.3815 | 2.0 | 776 | 1.3046 | 0.4284 |
| 1.0077 | 3.0 | 1164 | 1.4463 | 0.4503 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pfunk/Pong-v4-DQN_tt0.1-seed1 | pfunk | null | 11 | 0 | cleanrl | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 1,747 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_tt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQN_tt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQN_tt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQN_tt0.1-seed1/raw/main/dqn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQN_tt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQN_tt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn_atari.py --exp-name DQN_tt0.1 --tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'env_id': 'Pong-v4',
'exp_name': 'DQN_tt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 1000,
'tau': 0.1,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
pfunk/Pong-v4-DQPN_p1_pt0.1_tt0.1-seed1 | pfunk | null | 11 | 0 | cleanrl | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 2,026 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p1_pt0.1_tt0.1.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p1_pt0.1_tt0.1]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p1_pt0.1_tt0.1 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p1_pt0.1_tt0.1-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p1_pt0.1_tt0.1-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p1_pt0.1_tt0.1-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p1_pt0.1_tt0.1 --start-policy-f 1000 --end-policy-f 1000 --evaluation-fraction 1.00 --target-tau 0.1 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 1.0,
'exp_name': 'DQPN_p1_pt0.1_tt0.1',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 0.1,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 1000,
'target_network_frequency': 1000,
'target_tau': 0.1,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
PeerNorback/FrozenLake | PeerNorback | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 380 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="PeerNorback/FrozenLake", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
giggling-squid/ppo-LunarLander-v2 | giggling-squid | null | 12 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
PeerNorback/Taxi | PeerNorback | null | 5 | 0 | null | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation'] | true | true | true | 362 |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="PeerNorback/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dn-gh/a2c-AntBulletEnv-v0 | dn-gh | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
jayxu/sd-class-butterflies-32 | jayxu | null | 6 | 2 | diffusers | 0 | unconditional-image-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class'] | false | true | true | 362 |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jayxu/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
moshew/gpt_medium_emotion | moshew | gpt2 | 10 | 24 | transformers | 0 | text-generation | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,388 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gpt_medium_emotion
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8023
- Validation Loss: 1.4614
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 500, 'decay_rate': 0.95, 'staircase': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.3542 | 1.2651 | 0 |
| 1.0773 | 1.3099 | 1 |
| 0.8023 | 1.4614 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
KoichiYasuoka/deberta-large-japanese-juman-ud-goeswith | KoichiYasuoka | deberta-v2 | 11 | 5 | transformers | 0 | token-classification | true | false | false | cc-by-sa-4.0 | ['ja'] | ['universal_dependencies'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['japanese', 'wikipedia', 'cc100', 'oscar', 'pos', 'dependency-parsing'] | false | true | true | 634 |
# deberta-large-japanese-juman-ud-goeswith
## Model Description
This is a DeBERTa(V2) model pretrained on Japanese Wikipedia, CC-100, and OSCAR texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [deberta-v2-large-japanese](https://huggingface.co/ku-nlp/deberta-v2-large-japanese).
## How to Use
```python
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/deberta-large-japanese-juman-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
[fugashi](https://pypi.org/project/fugashi) is required.
|
adsjklfsd/xlm-roberta-base-finetuned-panx-de | adsjklfsd | xlm-roberta | 12 | 0 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1344
- F1: 0.8617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2564 | 1.0 | 525 | 0.1610 | 0.8285 |
| 0.1307 | 2.0 | 1050 | 0.1378 | 0.8491 |
| 0.0813 | 3.0 | 1575 | 0.1344 | 0.8617 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.10.1+cu113
- Datasets 2.9.0
- Tokenizers 0.13.2
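The card stops at the framework versions; a hedged sketch of running the fine-tuned NER model with the token-classification pipeline (the German example sentence is a placeholder):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="adsjklfsd/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel wurde in Hamburg geboren."))
```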
|
eldraco/poca-SoccerTwos | eldraco | null | 17 | 0 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos'] | false | true | true | 841 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: eldraco/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ai-moroz/lazy-ti | ai-moroz | null | 13 | 0 | null | 1 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['anime', 'textual-inversion', 'embeddings'] | false | true | true | 4,239 |
### Lazy TI dump: Textual inversion embeddings host. The embeddings were trained using `stable-textual-inversion-cafe Colab - Lazy Edition` with only a few images each and will most likely bring nothing but pain and headache. <u>Use at your own risk. Good luck.</u>
- Tsukihime
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/ciel-tm.pt">Ciel</a>
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/ciel.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/ciel.png" width="200"></a>
<pre><b>ciel-tm</b>, cross necklace, upper body, palms together, own hands together, looking up, from above, looking at viewer, pixie cut, sitting, moon
Negative prompt: (worst quality, low quality:1.4), bad anatomy, weapon, blush, messy hair
Steps: 30, Sampler: DPM++ SDE, CFG scale: 7, Seed: 2808656065, Size: 512x768, Model hash: 6e430eb514, Model: anything-v4.5-pruned, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.4, Hires upscaler: Latent</pre>
- Genshin Impact
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/dump/diona-gi.pt">Diona</a>
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/diona.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/diona.png" width="200"></a>
<pre><b>diona-gi</b>, :3, solo, (standing), nature, birds, floating leaves, perfect fingers, bright, noon,(masterpiece:1.2), best quality, highres, original, perfect lighting, (extremely detailed CG:1.2),(8k:1.1)
Negative prompt: (worst quality, low quality:1.4), bad anatomy, blur
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3253826251, Size: 512x768, Model hash: 791d67d4, Denoising strength: 0.6, Clip skip: 2, First pass size: 448x640</pre>
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/dump/xiao-gi.pt">Xiao</a>
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/xiao.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/xiao.png" width="200"></a>
<pre><b>xiao-gi</b>, 1boy, standing, crossed arm, mature, detailed eyes, short hair, white shirt, top, white top, beads, bead necklace, jewelry, ornament, green hair, forehead mark, diamond, ahoge, short hair, arm tattoo, full covered, tassel, spike, standing, shoulder pad, capri pants, black pants, hakama, cowboy shot, (masterpiece:1,2), best quality, highres, perfect lighting, (8k:1.1), dynamic angle
Negative prompt:(worst quality, low quality:1.4), bad anatomy, text, username, watermark, nude, abs
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 10, Seed: 3146507317, Size: 512x768, Model hash: 791d67d4, Denoising strength: 0.6, Clip skip: 2, First pass size: 0x0</pre>
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/dump/thoma-gi.pt">Thoma</a>
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/thoma.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/thoma.png" width="200"></a>
<pre><b>thoma-gi</b>, 1boy, solo, blonde, green eyes, low ponytail, military tags, red jacket, crop jacket, black shirt, gloves, tassel, black pants, ((masterpiece)), best quality, highres, vivid, bright
Negative prompt: (worst quality, low quality:1.4), bad anatomy, nsfw, turtleneck, backlight
Steps: 20, Sampler: DPM++ SDE, CFG scale: 6, Seed: 368616108, Size: 512x768, Model hash: f75b19923f, Model: AbyssOrangeMix2_sfw, Clip skip: 2</pre>
- Naruto
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/dump/naruko-nrt.pt">Naruko</a> /face only
<a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/naruko.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/naruko.png" width="200"></a>
<pre><b>naruko-nrt</b>, solo, (orange jacket:1.4), black pants, unzipping, best quality, masterpiece, wood, dynamic angle, contrapposto, indoor, balcony, vivid, leaves,
Negative prompt: (worst quality, low quality:1.4), bad anatomy, extra fingers
Steps: 20, Sampler: Euler a, CFG scale: 8, Seed: 1902740231, Size: 512x768, Model hash: 6e430eb514, Model: anything-v4.5-pruned, Denoising strength: 0.7, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent</pre>
- Misc
1. <a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/dump/grinteeth.pt">grinteeth</a> /mouth only
<div style='float: right;'><a href="https://huggingface.co/ai-moroz/lazy-ti/blob/main/prev/grinteeth.png"><img src="https://huggingface.co/ai-moroz/lazy-ti/resolve/main/prev/grinteeth.png" width="200"></a></div>
<pre><b>(grinteeth:0.8)</b>, smile, lips, close-up, solo, blue hair, black eyes, master piece, best quality
Negative prompt: worst quality, low quality, ugly, nsfw, blush
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2885651626, Size: 512x512, Model hash: 6e430eb514, Model: anything-v4.5-pruned, Clip skip: 2</pre>
### Usage
- Download the file and place it in your 'embeddings' folder
- Use the filename in your prompt
#### Preview models
- WarriorMama777/AbyssOrangeMix2
- andite/anything-v4.5
|
messham/ppo-Huggy | messham | null | 12 | 1 | ml-agents | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy'] | false | true | true | 818 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: messham/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Akriel/MLP-Lunar-Lander | Akriel | null | 12 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 358 |
# **PPO-MLP** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-MLP** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
eshanck/apm1 | eshanck | distilbert | 10 | 34 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,451 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# apm1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 157 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 2.0 | 314 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 471 | 0.0004 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.044 | 4.0 | 628 | 0.0003 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.044 | 5.0 | 785 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.044 | 6.0 | 942 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0005 | 7.0 | 1099 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0005 | 8.0 | 1256 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0005 | 9.0 | 1413 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0003 | 10.0 | 1570 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0003 | 11.0 | 1727 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0003 | 12.0 | 1884 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 13.0 | 2041 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 14.0 | 2198 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 15.0 | 2355 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 16.0 | 2512 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 17.0 | 2669 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 18.0 | 2826 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 19.0 | 2983 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 20.0 | 3140 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 21.0 | 3297 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 22.0 | 3454 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 23.0 | 3611 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 24.0 | 3768 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 25.0 | 3925 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
dn-gh/a2c-PandaReachDense-v2 | dn-gh | null | 13 | 0 | stable-baselines3 | 0 | reinforcement-learning | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3'] | true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
apatidar0/t5-small-finetuned-amazon-en | apatidar0 | t5 | 14 | 3 | transformers | 0 | summarization | true | false | false | apache-2.0 | null | ['amazon_reviews_multi'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['summarization', 'generated_from_trainer'] | true | true | true | 1,172 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-amazon-en
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.1622
- eval_rouge1: 14.7056
- eval_rouge2: 6.5373
- eval_rougeL: 13.8753
- eval_rougeLsum: 13.9924
- eval_runtime: 3.8484
- eval_samples_per_second: 35.08
- eval_steps_per_second: 4.417
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Amiko/DEEP_RL
|
Amiko
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename -- verify it against this repository's files.
checkpoint = load_from_hub("Amiko/DEEP_RL", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
speechcatcher/speechcatcher_german_espnet_streaming_transformer_13k_train_size_m_raw_de_bpe1024
|
speechcatcher
| null | 22 | 32 |
espnet
| 1 |
automatic-speech-recognition
| false | false | false |
mit
|
['de']
|
['speechcatcher']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 13,668 |
## ESPnet2 ASR model
### `speechcatcher/speechcatcher_german_espnet_streaming_transformer_13k_train_size_m_raw_de_bpe1024`
This model was trained by bmilde using speechcatcher recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout df10e664a3e1a3cbbe8363b1d93e94ad5d8b147f
pip install -e .
cd egs2/speechcatcher/asr1
./run.sh --skip_data_prep false --skip_train true --download_model speechcatcher/speechcatcher_german_espnet_streaming_transformer_13k_train_size_m_raw_de_bpe1024
```
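Outside the recipe, the packed model can also be loaded with the ESPnet2 Python inference API. The sketch below is an assumption on top of the instructions above (it uses the offline `Speech2Text` interface and `espnet_model_zoo` for downloading; streaming decoding follows the recipe's decode configuration instead):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download and unpack the model from the Hub (requires the espnet_model_zoo package).
speech2text = Speech2Text.from_pretrained(
    "speechcatcher/speechcatcher_german_espnet_streaming_transformer_13k_train_size_m_raw_de_bpe1024"
)

speech, rate = soundfile.read("example_16khz.wav")  # 16 kHz mono audio
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```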
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Feb 5 11:50:19 UTC 2023`
- python version: `3.10.8 (main, Nov 4 2022, 13:48:29) [GCC 11.2.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.12.1+cu116`
- Git hash: `df10e664a3e1a3cbbe8363b1d93e94ad5d8b147f`
- Commit date: `Fri Feb 3 13:38:18 2023 +0000`
## asr_train_asr_streaming_transformer_size_m_raw_de_bpe1024
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_asr_model_valid.acc.ave/test|2497|260537|65.9|25.8|8.3|5.3|39.4|99.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_asr_model_valid.acc.ave/test|2497|1569438|84.9|6.0|9.1|5.4|20.5|99.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_asr_model_valid.acc.ave/test|2497|512776|71.8|18.0|10.2|5.7|33.9|99.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_streaming_transformer_size_m.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_streaming_transformer_size_m_raw_de_bpe1024
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 55055
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 20
patience: 3
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- acc
- max
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 128
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_de_bpe1024/train/speech_shape
- exp/asr_stats_raw_de_bpe1024/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_de_bpe1024/valid/speech_shape
- exp/asr_stats_raw_de_bpe1024/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ','
- .
- t
- ▁
- e
- en
- s
- n
- ▁ich
- ▁das
- ▁und
- ▁die
- er
- ▁ist
- ▁auch
- ▁so
- st
- ▁der
- ▁nicht
- ▁es
- ▁ein
- r
- ▁in
- f
- ▁dann
- ▁ja
- d
- ▁da
- g
- h
- m
- o
- u
- b
- ▁wir
- ▁zu
- ▁du
- ▁ge
- ▁Und
- i
- a
- ▁mit
- ▁den
- in
- ▁man
- l
- ▁auf
- ▁dass
- sch
- ▁jetzt
- '?'
- ge
- ▁was
- ▁er
- ▁Ja
- ▁hat
- '-'
- p
- ▁war
- ▁eine
- ▁F
- ▁aber
- ▁mal
- ▁oder
- y
- ▁noch
- te
- ung
- ▁haben
- ▁Ich
- ▁be
- ▁Das
- ▁wie
- ä
- ▁an
- ▁habe
- k
- ▁von
- ▁sich
- ▁K
- al
- ▁wenn
- la
- ▁schon
- ig
- ra
- lich
- re
- de
- ch
- ▁für
- it
- ▁Also
- w
- ▁A
- es
- ▁sind
- ▁ver
- le
- or
- ▁sie
- ▁B
- ü
- ▁also
- ▁ganz
- ▁T
- ▁im
- ▁dem
- ter
- an
- ck
- ▁St
- ▁aus
- ▁G
- ▁kann
- ▁bei
- ▁halt
- ▁H
- el
- ▁immer
- z
- ▁einfach
- ▁P
- ö
- ▁S
- ▁weil
- ▁mir
- se
- ▁f
- ut
- ten
- ▁wo
- ▁Sch
- us
- ▁vor
- ur
- ▁sehr
- ri
- kt
- ing
- ▁E
- il
- ▁gut
- ▁mich
- ▁Aber
- 'on'
- und
- cht
- ▁als
- den
- ar
- ie
- um
- ▁uns
- ste
- ▁Da
- hr
- ▁über
- be
- ▁einen
- ▁Be
- ▁ihr
- is
- ▁wieder
- ▁glaube
- ▁Ge
- at
- ▁irgendwie
- li
- ▁nur
- we
- ro
- ▁bisschen
- he
- ▁mehr
- ▁M
- tz
- ▁muss
- gen
- ▁sagen
- ben
- ▁wirklich
- ▁alle
- nd
- ▁wird
- ▁gibt
- ▁um
- ▁m
- ▁natürlich
- ▁viel
- me
- nt
- et
- ▁diese
- ▁U
- '0'
- ▁sein
- ▁nach
- ▁hier
- ▁meine
- ern
- lo
- ion
- ▁eigentlich
- ▁O
- ▁machen
- ▁bin
- ▁So
- ll
- ▁hast
- ▁weiß
- ▁Re
- c
- ▁I
- ▁sch
- ▁C
- ▁vielleicht
- iert
- ach
- ▁b
- ne
- x
- ze
- rei
- ru
- ma
- ▁zum
- ▁finde
- ß
- ▁N
- ▁Die
- rt
- ich
- ▁Ma
- uch
- ▁eben
- rü
- ▁Ver
- ein
- ▁In
- R
- ieren
- ▁Ha
- ssen
- ft
- chen
- am
- di
- der
- hl
- ▁Es
- ▁gesagt
- zu
- ▁ne
- ▁An
- ▁k
- ▁1
- ▁am
- hn
- ▁gerade
- pp
- her
- ▁alles
- nen
- ▁geht
- ▁genau
- ha
- ▁Jahr
- ▁re
- ▁werden
- ▁w
- ▁Z
- isch
- ▁p
- ▁Er
- ke
- ▁Wir
- au
- mm
- ik
- ▁mein
- ▁dir
- ▁einem
- un
- ▁würde
- ▁We
- ▁zwei
- v
- ▁doch
- ▁keine
- ▁erst
- na
- and
- ▁gar
- ▁hin
- ▁durch
- ▁V
- kommen
- ell
- ul
- end
- ▁können
- j
- fe
- ▁richtig
- ff
- ▁Me
- ▁andere
- lie
- '...'
- wi
- ol
- art
- ▁Leute
- ▁Zeit
- ▁Ein
- ran
- ner
- ▁ab
- nk
- ation
- ▁viele
- ▁g
- S
- rie
- ▁ob
- im
- ver
- ür
- rk
- ▁einer
- men
- ▁ent
- iv
- lei
- ▁gemacht
- sp
- ▁hatte
- ▁weiter
- sten
- che
- ang
- all
- ir
- hör
- ▁Was
- aus
- ier
- ▁Ne
- ▁Li
- ▁hab
- ass
- L
- igen
- zi
- ungen
- ▁Spiel
- ▁will
- ▁unter
- ag
- ▁macht
- ber
- ▁Sp
- zen
- ▁denn
- ken
- ▁des
- ▁Ka
- lle
- id
- sen
- ▁dich
- ▁st
- ▁Du
- ▁kommt
- spiel
- ▁Fall
- ▁Man
- ▁Se
- ▁W
- ▁dieser
- ▁Ko
- ga
- ▁De
- ▁groß
- ▁Le
- ▁schön
- ▁La
- ▁jeden
- ▁D
- ▁Genau
- gt
- ▁dieses
- ungs
- ▁J
- pro
- ▁Co
- ▁Beispiel
- ▁heißt
- ▁s
- ist
- rä
- ho
- ▁damit
- ▁Wo
- ▁unsere
- ▁le
- ert
- '5'
- ni
- tt
- gel
- ▁her
- ve
- ▁sondern
- mp
- reich
- ▁Sa
- ''''
- ▁lang
- ▁rein
- ▁neu
- ▁sagt
- ▁tatsächlich
- ▁kein
- är
- nehmen
- ▁bis
- elt
- ad
- teil
- ▁euch
- ta
- ▁a
- ▁anderen
- ▁raus
- op
- ▁Der
- ige
- arbeit
- ▁Film
- ▁Ba
- ▁heute
- ▁wäre
- ▁nochmal
- ▁ange
- ▁Sie
- ick
- ▁of
- ler
- ▁un
- ische
- weise
- lä
- kl
- ▁Na
- iß
- wa
- ▁wer
- ▁Ding
- ▁okay
- ▁Ra
- halt
- ▁we
- ▁Pa
- ▁Thema
- heit
- ▁ko
- ▁Dann
- ▁diesen
- schaft
- ▁möchte
- ▁hätte
- lu
- ▁Al
- bar
- ▁Tag
- mo
- ▁Wie
- ▁waren
- ▁sp
- ▁wurde
- ▁Auf
- ce
- ▁Frage
- ▁kannst
- wo
- ▁Mi
- ▁deine
- ▁To
- mi
- ▁dazu
- äng
- ▁bist
- ischen
- ▁Mo
- ▁ihn
- 'no'
- zieh
- ▁Ab
- ▁kommen
- ▁Menschen
- anz
- ▁Wenn
- ▁ha
- ▁Vor
- ▁Ro
- stell
- ▁Zu
- ▁je
- rau
- eln
- ab
- hin
- ka
- schau
- ▁Pro
- ger
- P
- ▁Bo
- ▁gerne
- ko
- nis
- ▁drei
- ▁gleich
- ld
- ▁klar
- ack
- ▁Aus
- ün
- ▁nie
- A
- ▁tr
- ▁seine
- ▁Mit
- geben
- ▁soll
- '4'
- ▁diesem
- lau
- ▁müssen
- ▁kleine
- ▁kurz
- mmer
- ment
- stellen
- ▁Wa
- ▁Podcast
- ▁Wi
- ▁the
- ▁Woche
- ▁guck
- ▁quasi
- ▁Ho
- mal
- ▁sei
- ▁Po
- krieg
- aff
- ▁nächste
- itz
- ▁20
- tag
- '9'
- ▁Ende
- richt
- uck
- ör
- ▁2
- dem
- mpf
- vi
- 'off'
- ▁Leben
- ▁wichtig
- ▁gesehen
- ▁gehen
- ress
- ▁sag
- M
- ▁echt
- ▁etwas
- stand
- zähl
- führ
- T
- ▁wenig
- ▁zusammen
- ▁paar
- ▁Di
- ▁einmal
- bo
- ▁sehen
- ▁Sachen
- ▁Kon
- bi
- ▁dabei
- gend
- pass
- ic
- ▁könnte
- ▁Weil
- zeit
- ▁denke
- F
- ▁Folge
- man
- ▁wollte
- kauf
- ▁weg
- ▁3
- ▁selbst
- '1'
- hol
- co
- ▁wollen
- bau
- '2'
- B
- ▁wahrscheinlich
- ank
- ▁Mal
- ▁letzten
- fahren
- ▁vom
- ▁Do
- hi
- ▁eher
- D
- ▁selber
- ord
- ▁super
- ▁musst
- ▁drauf
- ▁jemand
- '8'
- ▁gegen
- ▁überhaupt
- ▁The
- ▁Okay
- ▁beim
- ▁sage
- pa
- ▁dafür
- vor
- ▁Frau
- ▁hatten
- ▁drin
- '6'
- ▁sozusagen
- iz
- ▁fand
- ▁Tra
- folg
- ▁Nach
- ▁tun
- ▁dein
- ität
- C
- ▁Oder
- ▁zurück
- ▁Nein
- po
- ▁cool
- ▁sowas
- ▁sieht
- gehen
- schi
- ▁Gott
- ▁schnell
- form
- ▁ihm
- ▁besser
- ▁gab
- wä
- ▁äh
- ▁Kinder
- änder
- ▁sollte
- ▁Jo
- ▁voll
- ▁War
- ▁kenne
- ▁zwar
- ▁total
- ▁welche
- ▁passiert
- ▁Hand
- fall
- ▁irgendwann
- ▁Problem
- war
- qu
- fühl
- ▁Wer
- ▁wissen
- ▁dort
- ▁jeder
- ca
- ▁deswegen
- sprech
- ▁davon
- ▁damals
- trag
- ▁nämlich
- ▁Punkt
- ▁Welt
- ▁abge
- '7'
- log
- ▁sogar
- ▁kam
- legen
- ▁Moment
- igkeit
- ▁konnte
- ▁komm
- ▁gewesen
- ▁anders
- ▁Bi
- K
- ▁eigene
- ▁liebe
- ▁Teil
- ▁Lo
- ▁toll
- ▁Arbeit
- ▁Seite
- genommen
- ▁to
- ▁alt
- ▁trotzdem
- ▁gehört
- ▁Jetzt
- ▁mache
- ▁Dr
- ▁relativ
- sicht
- ▁steht
- ▁Auto
- ▁darüber
- nehm
- ▁irgendwas
- ▁ohne
- ▁Geld
- ▁Euro
- ieß
- suche
- ▁vier
- einander
- ▁Grund
- ▁Gefühl
- gestellt
- ▁sa
- ativ
- G
- ▁darauf
- I
- ▁All
- ▁Anfang
- ▁darf
- ▁Freund
- ▁direkt
- ▁irgendwo
- ▁letzte
- ▁schlecht
- ▁manchmal
- ▁Bild
- ▁Geschichte
- ▁interessant
- E
- ▁komplett
- ▁Ahnung
- bringen
- nutz
- bild
- ▁frag
- V
- ▁Kind
- ▁meisten
- ▁gehabt
- ▁gedacht
- ▁erstmal
- ▁fast
- ▁stimmt
- '3'
- laufen
- ▁bestimmt
- zahl
- ▁Über
- kommt
- gegangen
- setzen
- ▁funktioniert
- ▁spielen
- ▁Person
- ▁Sinn
- ▁dachte
- ▁fünf
- ▁hoch
- bereit
- ▁brauche
- ▁zwischen
- ▁Spaß
- ▁spannend
- ▁ehrlich
- ▁krass
- ▁schreib
- ▁zumindest
- zeug
- ▁Musik
- W
- fahr
- ▁solche
- ▁Deutschland
- ▁gespielt
- geschrieben
- Ä
- ▁später
- Y
- O
- H
- '!'
- U
- N
- Q
- Ö
- X
- Z
- J
- '%'
- Ü
- é
- «
- »
- '&'
- Ã
- à
- ş
- q
- ¤
- Ÿ
- €
- è
- ı
- ç
- ú
- ë
- ¶
- á
- ć
- —
- õ
- ğ
- í
- °
- ô
- _
- ó
- /
- å
- $
- ́
- û
- ›
- ê
- ‹
- '"'
- ñ
- Ş
- č
- )
- É
- μ
- ø
- š
- о
- ł
- ù
- ã
- ā
- ©
- а
- ':'
- е
- œ
- и
- н
- â
- î
- т
- ń
- р
- к
- 你
- æ
- „
- Č
- с
- ♪
- д
- Š
- в
- ï
- İ
- л
- À
- у
- ь
- я
- м
- ę
- ś
- ž
- п
- '='
- ō
- ř
- Æ
- ш
- з
- ы
- ū
- ș
- Ø
- '~'
- ì
- ò
- ο
- ч
- г
- ý
- ̄
- ц
- Х
- ż
- З
- б
- ¡
- Н
- ă
- ̃
- К
- ж
- ไ
- ồ
- ♫
- ر
- х
- ン
- Ç
- §
- ⁄
- +
- '*'
- Å
- і
- Á
- ī
- џ
- ู
- ;
- '>'
- Î
- ą
- Đ
- Ȗ
- Ε
- έ
- δ
- ι
- λ
- ς
- τ
- υ
- ύ
- О
- Т
- و
- ک
- ں
- ด
- ม
- ่
- ṣ
- “
- ♥
- き
- つ
- ぶ
- ら
- チ
- ッ
- ホ
- ロ
- 中
- 以
- 佢
- 利
- 厲
- 句
- 可
- 吃
- 国
- 士
- 好
- 安
- 害
- 度
- 手
- 晃
- 法
- Ć
- ě
- Б
- ج
- 救
- ά
- –
- ダ
- 制
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/de_token_list/bpe_unigram1024/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_de_bpe1024/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: contextual_block_transformer
encoder_conf:
output_size: 256
attention_heads: 8
linear_units: 2048
num_blocks: 14
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
block_size: 40
hop_size: 16
look_ahead: 16
init_average: true
ctx_pos_enc: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 8
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
hr1588/xlm-roberta-base-finetuned-panx-de
|
hr1588
|
xlm-roberta
| 12 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- F1: 0.8655
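As a usage sketch (added for illustration and not part of the original card; the example sentence is made up), the checkpoint can be loaded with the standard `transformers` token-classification pipeline:
```python
from transformers import pipeline

# NER pipeline for the fine-tuned checkpoint; merge word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="hr1588/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel wohnt in Berlin."))
```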
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2495 | 1.0 | 787 | 0.1764 | 0.8184 |
| 0.1299 | 2.0 | 1574 | 0.1427 | 0.8562 |
| 0.0771 | 3.0 | 2361 | 0.1405 | 0.8655 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
relbert/relbert-roberta-large-nce-a-nell
|
relbert
|
roberta
| 30 | 6 |
transformers
| 0 |
feature-extraction
| true | false | false | null | null |
['relbert/nell_relational_similarity']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 4,786 |
# relbert/relbert-roberta-large-nce-a-nell
A RelBERT model based on [roberta-large](https://huggingface.co/roberta-large), fine-tuned on [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for more details on the fine-tuning).
This model achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-a-nell/raw/main/analogy.forward.json)):
- Accuracy on SAT (full): 0.4411764705882353
- Accuracy on SAT: 0.45103857566765576
- Accuracy on BATS: 0.462479155086159
- Accuracy on U2: 0.37280701754385964
- Accuracy on U4: 0.43287037037037035
- Accuracy on Google: 0.75
- Accuracy on ConceptNet Analogy: 0.16526845637583892
- Accuracy on T-Rex Analogy: 0.73224043715847
- Accuracy on NELL-ONE Analogy: 0.8416666666666667
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-a-nell/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9028175380442971
- Micro F1 score on CogALexV: 0.8288732394366197
- Micro F1 score on EVALution: 0.6289274106175514
- Micro F1 score on K&H+N: 0.9608402309243931
- Micro F1 score on ROOT09: 0.8843622688812285
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-a-nell/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8019047619047619
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-large-nce-a-nell")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
```
### Training hyperparameters
- model: roberta-large
- max_length: 64
- epoch: 10
- batch: 31
- random_seed: 0
- lr: 5e-06
- lr_warmup: 10
- aggregation_mode: average_no_mask
- data: relbert/nell_relational_similarity
- data_name: None
- exclude_relation: None
- split: train
- split_valid: validation
- loss_function: nce
- classification_loss: False
- loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 10}
- augment_negative_by_positive: True
See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-a-nell/raw/main/finetuning_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.emnlp-main.712/).
```
@inproceedings{ushio-etal-2021-distilling,
title = "Distilling Relation Embeddings from Pretrained Language Models",
author = "Ushio, Asahi and
Camacho-Collados, Jose and
Schockaert, Steven",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.712",
doi = "10.18653/v1/2021.emnlp-main.712",
pages = "9044--9062",
abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert",
}
```
|
bds2714/jukebox
|
bds2714
| null | 332 | 0 | null | 0 | null | false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | false | true | 17,714 |
**Status:** Archive (code is provided as-is, no updates expected)
# Jukebox
Code for "Jukebox: A Generative Model for Music"
[Paper](https://arxiv.org/abs/2005.00341)
[Blog](https://openai.com/blog/jukebox)
[Explorer](http://jukebox.openai.com/)
[Colab](https://colab.research.google.com/github/openai/jukebox/blob/master/jukebox/Interacting_with_Jukebox.ipynb)
# Install
Install the conda package manager from https://docs.conda.io/en/latest/miniconda.html
```
# Required: Sampling
conda create --name jukebox python=3.7.5
conda activate jukebox
conda install mpi4py=3.0.3 # if this fails, try: pip install mpi4py==3.0.3
conda install pytorch=1.4 torchvision=0.5 cudatoolkit=10.0 -c pytorch
git clone https://github.com/openai/jukebox.git
cd jukebox
pip install -r requirements.txt
pip install -e .
# Required: Training
conda install av=7.0.01 -c conda-forge
pip install ./tensorboardX
# Optional: Apex for faster training with fused_adam
conda install pytorch=1.1 torchvision=0.3 cudatoolkit=10.0 -c pytorch
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
```
# Sampling
## Sampling from scratch
To sample normally, run the following command. The model can be `5b`, `5b_lyrics`, or `1b_lyrics`.
```
python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --sample_length_in_seconds=20 \
--total_sample_length_in_seconds=180 --sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
```
```
python jukebox/sample.py --model=1b_lyrics --name=sample_1b --levels=3 --sample_length_in_seconds=20 \
--total_sample_length_in_seconds=180 --sr=44100 --n_samples=16 --hop_fraction=0.5,0.5,0.125
```
The above generates the first `sample_length_in_seconds` seconds of audio from a song of total length `total_sample_length_in_seconds`.
To use multiple GPUs, launch the above scripts as `mpiexec -n {ngpus} python jukebox/sample.py ...` so that they use `{ngpus}` GPUs.
The samples decoded from each level are stored in `{name}/level_{level}`.
You can also view the samples as an html with the aligned lyrics under `{name}/level_{level}/index.html`.
Run `python -m http.server` and open the html through the server to see the lyrics animate as the song plays.
A summary of all sampling data including zs, x, labels and sampling_kwargs is stored in `{name}/level_{level}/data.pth.tar`.
The hps are for a V100 GPU with 16 GB GPU memory. The `1b_lyrics`, `5b`, and `5b_lyrics` top-level priors take up
3.8 GB, 10.3 GB, and 11.5 GB, respectively. The peak memory usage to store transformer key, value cache is about 400 MB
for `1b_lyrics` and 1 GB for `5b_lyrics` per sample. If you are having trouble with CUDA OOM issues, try `1b_lyrics` or
decrease `max_batch_size` in sample.py, and `--n_samples` in the script call.
On a V100, it takes about 3 hrs to fully sample 20 seconds of music. Since this is a long time, it is recommended to use `n_samples > 1` so you can generate as many samples as possible in parallel. The 1B lyrics and upsamplers can process 16 samples at a time, while 5B can fit only up to 3. Since the vast majority of time is spent on upsampling, we recommend using a multiple of 3 less than 16 like `--n_samples 15` for `5b_lyrics`. This will make the top-level generate samples in groups of three while upsampling is done in one pass.
To continue sampling from already generated codes for a longer duration, you can run
```
python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --mode=continue \
--codes_file=sample_5b/level_0/data.pth.tar --sample_length_in_seconds=40 --total_sample_length_in_seconds=180 \
--sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
```
Here, we take the 20 seconds samples saved from the first sampling run at `sample_5b/level_0/data.pth.tar` and continue by adding 20 more seconds.
You could also continue directly from the level 2 saved outputs, just pass `--codes_file=sample_5b/level_2/data.pth.tar`.
Note this will upsample the full 40-second song at the end.
If you stopped sampling at only the first level and want to upsample the saved codes, you can run
```
python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --mode=upsample \
--codes_file=sample_5b/level_2/data.pth.tar --sample_length_in_seconds=20 --total_sample_length_in_seconds=180 \
--sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
```
Here, we take the 20 seconds samples saved from the first sampling run at `sample_5b/level_2/data.pth.tar` and upsample the lower two levels.
## Prompt with your own music
If you want to prompt the model with your own creative piece or any other music, first save them as wave files and run
```
python jukebox/sample.py --model=5b_lyrics --name=sample_5b_prompted --levels=3 --mode=primed \
--audio_file=path/to/recording.wav,awesome-mix.wav,fav-song.wav,etc.wav --prompt_length_in_seconds=12 \
--sample_length_in_seconds=20 --total_sample_length_in_seconds=180 --sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
```
This will load the four files, tile them to fill up to `n_samples` batch size, and prime the model with the first `prompt_length_in_seconds` seconds.
# Training
## VQVAE
To train a small vqvae, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae --name=small_vqvae --sample_length=262144 --bs=4 \
--audio_files_dir={audio_files_dir} --labels=False --train --aug_shift --aug_blend
```
Here, `{audio_files_dir}` is the directory in which you can put the audio files for your dataset, and `{ngpus}` is number of GPU's you want to use to train.
The above trains a two-level VQ-VAE with `downs_t = (5,3)`, and `strides_t = (2, 2)` meaning we downsample the audio by `2**5 = 32` to get the first level of codes, and `2**8 = 256` to get the second level codes.
Checkpoints are stored in the `logs` folder. You can monitor the training by running Tensorboard
```
tensorboard --logdir logs
```
## Prior
### Train prior or upsamplers
Once the VQ-VAE is trained, we can restore it from its saved checkpoint and train priors on the learnt codes.
To train the top-level prior, we can run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae,small_prior,all_fp16,cpu_ema --name=small_prior \
--sample_length=2097152 --bs=4 --audio_files_dir={audio_files_dir} --labels=False --train --test --aug_shift --aug_blend \
--restore_vqvae=logs/small_vqvae/checkpoint_latest.pth.tar --prior --levels=2 --level=1 --weight_decay=0.01 --save_iters=1000
```
To train the upsampler, we can run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=small_vqvae,small_upsampler,all_fp16,cpu_ema --name=small_upsampler \
--sample_length=262144 --bs=4 --audio_files_dir={audio_files_dir} --labels=False --train --test --aug_shift --aug_blend \
--restore_vqvae=logs/small_vqvae/checkpoint_latest.pth.tar --prior --levels=2 --level=0 --weight_decay=0.01 --save_iters=1000
```
We pass `sample_length = n_ctx * downsample_of_level` so that after downsampling the tokens match the n_ctx of the prior hps.
Here, `n_ctx = 8192` and `downsamples = (32, 256)`, giving `sample_lengths = (8192 * 32, 8192 * 256) = (262144, 2097152)` respectively for the bottom and top level.
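As a quick sanity check of that arithmetic (just restating the numbers above in code):
```python
n_ctx = 8192
downsamples = (32, 256)  # bottom- and top-level downsampling factors

# sample_length = n_ctx * downsample_of_level
sample_lengths = tuple(n_ctx * d for d in downsamples)
print(sample_lengths)  # (262144, 2097152)
```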
### Learning rate annealing
To get the best sample quality, anneal the learning rate to 0 near the end of training. To do so, continue training from the latest
checkpoint and run with
```
--restore_prior="path/to/checkpoint" --lr_use_linear_decay --lr_start_linear_decay={already_trained_steps} --lr_decay={decay_steps_as_needed}
```
### Reuse pre-trained VQ-VAE and train top-level prior on new dataset from scratch.
#### Train without labels
Our pre-trained VQ-VAE can produce compressed codes for a wide variety of genres of music, and the pre-trained upsamplers
can upsample them back to audio that sounds very similar to the original audio.
To re-use these for a new dataset of your choice, you can retrain just the top-level prior.
To train top-level on a new dataset, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_prior \
--sample_length=1048576 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=False --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
```
Training the `small_prior` with a batch size of 2, 4, and 8 requires 6.7 GB, 9.3 GB, and 15.8 GB of GPU memory, respectively. A few days to a week of training typically yields reasonable samples when the dataset is homogeneous (e.g. all piano pieces, songs of the same style, etc).
Near the end of training, follow [this](#learning-rate-annealing) to anneal the learning rate to 0
#### Sample from new model
You can then run sample.py with the top-level of our models replaced by your new model. To do so,
- Add an entry `my_model=("vqvae", "upsampler_level_0", "upsampler_level_1", "small_prior")` in `MODELS` in `make_models.py`.
- Update the `small_prior` dictionary in `hparams.py` to include `restore_prior='path/to/checkpoint'`. If you
changed any hps directly in the command line script (eg: `heads`), make sure to update them in the dictionary too so
that `make_models` restores our checkpoint correctly.
- Run sample.py as outlined in the sampling section, but now with `--model=my_model`
For example, let's say we trained `small_vqvae`, `small_prior`, and `small_upsampler` under `/path/to/jukebox/logs`. In `make_models.py`, we are going to declare a tuple of the new models as `my_model`.
```
MODELS = {
'5b': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_5b"),
'5b_lyrics': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_5b_lyrics"),
'1b_lyrics': ("vqvae", "upsampler_level_0", "upsampler_level_1", "prior_1b_lyrics"),
'my_model': ("my_small_vqvae", "my_small_upsampler", "my_small_prior"),
}
```
Next, in `hparams.py`, we add them to the registry with the corresponding `restore_`paths and any other command line options used during training. Another important note is that for top-level priors with lyric conditioning, we have to locate a self-attention layer that shows alignment between the lyric and music tokens. Look for layers where `prior.prior.transformer._attn_mods[layer].attn_func` is either 6 or 7. If your model is starting to sing along lyrics, it means some layer, head pair has learned alignment. Congrats!
```
my_small_vqvae = Hyperparams(
restore_vqvae='/path/to/jukebox/logs/small_vqvae/checkpoint_some_step.pth.tar',
)
my_small_vqvae.update(small_vqvae)
HPARAMS_REGISTRY["my_small_vqvae"] = my_small_vqvae
my_small_prior = Hyperparams(
restore_prior='/path/to/jukebox/logs/small_prior/checkpoint_latest.pth.tar',
level=1,
labels=False,
# TODO For the two lines below, if `--labels` was used and the model is
# trained with lyrics, find and enter the layer, head pair that has learned
# alignment.
alignment_layer=47,
alignment_head=0,
)
my_small_prior.update(small_prior)
HPARAMS_REGISTRY["my_small_prior"] = my_small_prior
my_small_upsampler = Hyperparams(
restore_prior='/path/to/jukebox/logs/small_upsampler/checkpoint_latest.pth.tar',
level=0,
labels=False,
)
my_small_upsampler.update(small_upsampler)
HPARAMS_REGISTRY["my_small_upsampler"] = my_small_upsampler
```
#### Train with labels
To train with your own metadata for your audio files, implement `get_metadata` in `data/files_dataset.py` to return the
`artist`, `genre` and `lyrics` for a given audio file. For now, you can pass `''` for lyrics to not use any lyrics.
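A minimal sketch of such an implementation (the exact signature and the lookup-table approach are assumptions for illustration; match whatever `get_metadata` stub already exists in `data/files_dataset.py`):
```python
# Hypothetical metadata table keyed by audio filename.
METADATA = {
    "my_song.wav": {"artist": "unknown", "genre": "unknown", "lyrics": ""},
}

def get_metadata(self, filename, test):
    # Return artist, genre and lyrics for the given audio file.
    # Pass '' for lyrics to train without lyric conditioning.
    entry = METADATA.get(filename, {"artist": "unknown", "genre": "unknown", "lyrics": ""})
    return entry["artist"], entry["genre"], entry["lyrics"]
```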
For training with labels, we'll use `small_labelled_prior` in `hparams.py`, and we set `labels=True,labels_v3=True`.
We use 2 kinds of labels information:
- Artist/Genre:
- For each file, we return an artist_id and a list of genre_ids. The reason we have a list and not a single genre_id
is that in v2, we split genres like `blues_rock` into a bag of words `[blues, rock]`, and we pass at most
`max_bow_genre_size` of those; in `v3` we consider it as a single word and just set `max_bow_genre_size=1`.
- Update the `v3_artist_ids` and `v3_genre_ids` to use ids from your new dataset.
- In `small_labelled_prior`, set the hps `y_bins = (number_of_genres, number_of_artists)` and `max_bow_genre_size=1`.
- Timing:
- For each chunk of audio, we return the `total_length` of the song, the `offset` the current audio chunk is at and
the `sample_length` of the audio chunk. We have three timing embeddings: total_length, our current position, and our
current position as a fraction of the total length, and we divide the range of these values into `t_bins` discrete bins.
- In `small_labelled_prior`, set the hps `min_duration` and `max_duration` to be the shortest/longest duration of audio
files you want for your dataset, and `t_bins` for how many bins you want to discretize timing information into. Note
`min_duration * sr` needs to be at least `sample_length` to have an audio chunk in it.
After these modifications, to train a top-level with labels, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_labelled_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_prior_labels \
--sample_length=1048576 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
```
For sampling, follow the same instructions as [above](#sample-from-new-model) but use `small_labelled_prior` instead of `small_prior`.
#### Train with lyrics
To train in addition with lyrics, update `get_metadata` in `data/files_dataset.py` to return `lyrics` too.
For training with lyrics, we'll use `small_single_enc_dec_prior` in `hparams.py`.
- Lyrics:
- For each file, we linearly align the lyric characters to the audio, find the position in lyric that corresponds to
the midpoint of our audio chunk, and pass a window of `n_tokens` lyric characters centred around that.
- In `small_single_enc_dec_prior`, set the hps `use_tokens=True` and `n_tokens` to be the number of lyric characters
to use for an audio chunk. Set it according to the `sample_length` you're training on so that its large enough that
the lyrics for an audio chunk are almost always found inside a window of that size.
- If you use a non-English vocabulary, update `text_processor.py` with your new vocab and set
`n_vocab = number of characters in vocabulary` accordingly in `small_single_enc_dec_prior`. In v2 we had `n_vocab=80`,
and in v3 we missed `+`, so `n_vocab=79`.
After these modifications, to train a top-level with labels and lyrics, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,small_single_enc_dec_prior,all_fp16,cpu_ema --name=pretrained_vqvae_small_single_enc_dec_prior_labels \
--sample_length=786432 --bs=4 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
```
To simplify hps choices, here we used a `single_enc_dec` model like the `1b_lyrics` model that combines both encoder and
decoder of the transformer into a single model. We do so by merging the lyric vocab and vq-vae vocab into a single
larger vocab, and flattening the lyric tokens and the vq-vae codes into a single sequence of length `n_ctx + n_tokens`.
This uses `attn_order=12` which includes `prime_attention` layers with keys/values from lyrics and queries from audio.
If you instead want to use a model with the usual encoder-decoder style transformer, use `small_sep_enc_dec_prior`.
For sampling, follow the same instructions as [above](#sample-from-new-model) but use `small_single_enc_dec_prior` instead of
`small_prior`. To also get the alignment between lyrics and samples in the saved html, you'll need to set `alignment_layer`
and `alignment_head` in `small_single_enc_dec_prior`. To find which layer/head is best to use, run a forward pass on a training example,
save the attention weight tensors for all prime_attention layers, and pick the (layer, head) which has the best linear alignment
pattern between the lyrics keys and music queries.
### Fine-tune pre-trained top-level prior to new style(s)
Previously, we showed how to train a small top-level prior from scratch. Assuming you have a GPU with at least 15 GB of memory and support for fp16, you could fine-tune from our pre-trained 1B top-level prior. Here are the steps:
- Support `--labels=True` by implementing `get_metadata` in `jukebox/data/files_dataset.py` for your dataset.
- Add new entries in `jukebox/data/ids`. We recommend replacing existing mappings (e.g. renaming `"unknown"`, etc. with styles of your choice). This uses the pre-trained style vectors as initialization and could potentially save some compute.
After these modifications, run
```
mpiexec -n {ngpus} python jukebox/train.py --hps=vqvae,prior_1b_lyrics,all_fp16,cpu_ema --name=finetuned \
--sample_length=1048576 --bs=1 --aug_shift --aug_blend --audio_files_dir={audio_files_dir} \
--labels=True --train --test --prior --levels=3 --level=2 --weight_decay=0.01 --save_iters=1000
```
To get the best sample quality, it is recommended to anneal the learning rate in the end. Training the 5B top-level requires GPipe which is not supported in this release.
# Citation
Please cite using the following bibtex entry:
```
@article{dhariwal2020jukebox,
title={Jukebox: A Generative Model for Music},
author={Dhariwal, Prafulla and Jun, Heewoo and Payne, Christine and Kim, Jong Wook and Radford, Alec and Sutskever, Ilya},
journal={arXiv preprint arXiv:2005.00341},
year={2020}
}
```
# License
[Noncommercial Use License](./LICENSE)
It covers both released code and weights.
|
Beegbrain/Reinforce-model-pixelcopter
|
Beegbrain
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gudjonk93/IceBERT-finetuned-squad-10
|
gudjonk93
|
roberta
| 11 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
agpl-3.0
| null |
['icelandic-qa-n_qi_i']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,622 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-squad-10
This model is a fine-tuned version of [mideind/IceBERT](https://huggingface.co/mideind/IceBERT) on the icelandic-qa-n_qi_i dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1511
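As a usage sketch (added for illustration and not part of the original card; the question/context pair is made up), the checkpoint can be queried with the standard `transformers` question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="gudjonk93/IceBERT-finetuned-squad-10")

result = qa(
    question="Hvað er Reykjavík?",
    context="Reykjavík er höfuðborg Íslands og fjölmennasta borg landsins.",
)
print(result["answer"], result["score"])
```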
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 293 | 1.7614 |
| 1.9509 | 2.0 | 586 | 1.5208 |
| 1.9509 | 3.0 | 879 | 1.5011 |
| 0.9529 | 4.0 | 1172 | 1.5694 |
| 0.9529 | 5.0 | 1465 | 1.7516 |
| 0.6647 | 6.0 | 1758 | 1.8629 |
| 0.4336 | 7.0 | 2051 | 1.8881 |
| 0.4336 | 8.0 | 2344 | 2.0768 |
| 0.335 | 9.0 | 2637 | 2.1238 |
| 0.335 | 10.0 | 2930 | 2.1511 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
raquelsmv/clasificador-rotten_tomatoes
|
raquelsmv
|
electra
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 1,361 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-rotten_tomatoes
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4363
- Accuracy: 0.9138
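As a usage sketch (added for illustration and not part of the original card; the sample sentence is made up), the classifier can be loaded with the standard `transformers` text-classification pipeline:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="raquelsmv/clasificador-rotten_tomatoes")
print(clf("A funny, touching film with terrific performances."))
```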
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4625 | 1.0 | 853 | 0.3543 | 0.9027 |
| 0.2407 | 2.0 | 1706 | 0.3710 | 0.9115 |
| 0.0962 | 3.0 | 2559 | 0.4363 | 0.9138 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
rossHuggingMay/ppo-LunarLander-v2
|
rossHuggingMay
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename -- verify it against this repository's files.
checkpoint = load_from_hub("rossHuggingMay/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Beegbrain/ppo-Snowball
|
Beegbrain
| null | 20 | 6 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 850 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: Beegbrain/ppo-Snowball
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
relbert/relbert-roberta-large-nce-b-nell
|
relbert
|
roberta
| 30 | 6 |
transformers
| 0 |
feature-extraction
| true | false | false | null | null |
['relbert/nell_relational_similarity']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 4,786 |
# relbert/relbert-roberta-large-nce-b-nell
A RelBERT model based on [roberta-large](https://huggingface.co/roberta-large), fine-tuned on [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for more details on the fine-tuning).
This model achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-b-nell/raw/main/analogy.forward.json)):
- Accuracy on SAT (full): 0.4090909090909091
- Accuracy on SAT: 0.41543026706231456
- Accuracy on BATS: 0.5330739299610895
- Accuracy on U2: 0.42543859649122806
- Accuracy on U4: 0.44907407407407407
- Accuracy on Google: 0.768
- Accuracy on ConceptNet Analogy: 0.1610738255033557
- Accuracy on T-Rex Analogy: 0.6502732240437158
- Accuracy on NELL-ONE Analogy: 0.8383333333333334
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-b-nell/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9092963688413441
- Micro F1 score on CogALexV: 0.8267605633802817
- Micro F1 score on EVALution: 0.624593716143012
- Micro F1 score on K&H+N: 0.9522848994922446
- Micro F1 score on ROOT09: 0.88937637104356
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-b-nell/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7194642857142857
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-large-nce-b-nell")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
```
### Training hyperparameters
- model: roberta-large
- max_length: 64
- epoch: 10
- batch: 31
- random_seed: 0
- lr: 5e-06
- lr_warmup: 10
- aggregation_mode: average_no_mask
- data: relbert/nell_relational_similarity
- data_name: None
- exclude_relation: None
- split: train
- split_valid: validation
- loss_function: nce
- classification_loss: False
- loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 10}
- augment_negative_by_positive: True
See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-b-nell/raw/main/finetuning_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.emnlp-main.712/).
```
@inproceedings{ushio-etal-2021-distilling,
title = "Distilling Relation Embeddings from Pretrained Language Models",
author = "Ushio, Asahi and
Camacho-Collados, Jose and
Schockaert, Steven",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.712",
doi = "10.18653/v1/2021.emnlp-main.712",
pages = "9044--9062",
abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert",
}
```
|
PeterDerLustige/poca-SoccerTwos
|
PeterDerLustige
| null | 30 | 650 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 849 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: PeterDerLustige/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cleanrl/Pong-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,136 |
# (CleanRL) **PPO** Agent Playing **Pong-v5**
This is a trained model of a PPO agent playing Pong-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Pong-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Pong-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Pong-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Pong-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Pong-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Pong-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
khatkeashish/ppo-SnowballTarget
|
khatkeashish
| null | 20 | 4 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
| false | true | true | 859 |
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: khatkeashish/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
wooihen/a2c-AntBulletEnv-v0
|
wooihen
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename -- verify it against this repository's files.
checkpoint = load_from_hub("wooihen/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
nakayama/DeDeDeP
|
nakayama
| null | 22 | 20 |
diffusers
| 1 |
text-to-image
| false | false | false |
other
|
['-en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 7,384 |
# Please carefully review the license terms below before using this model.
DeDeDeP is a Stable Diffusion model tuned to produce photoreal anime-style images more readily than the model it is based on, [DeDeDe](https://huggingface.co/nakayama/DeDeDe).
It was built by taking DeDeDe, which is based on [DreamLike Diffusion 1.0](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0), [Trinart Characters v2 Derrida](https://huggingface.co/naclbit/trinart_derrida_characters_v2_stable_diffusion), and [DreamLike Photoreal 1.0](https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0), and further editing the IN06-11 and OUT00-05 blocks with Dreamlike Photoreal 1.0 via block-weighted (layer) merging.
| Model: A | Model: B | Weight | Base alpha | Merge Name |
| --- | --- | --- | --- | --- |
| DeDeDe(6d1729a039) | Dreamlike Photoreal 1.0(f403e4e2a5) | 0,0,0,0,0,0,0.1,0.3,0.5,0.7,0.9,1,0,0,0,0,0,0,0,0.1,0.3,0.5,0.7,0.9,1 | 0 | DeDeDeP(ad14700f28) |
The following Prompt / Negative Prompt are recommended when using this model.
P: best quality, masterpiece
NP: 3d, flat shading, flat color, retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb
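A minimal loading sketch with the Diffusers library (added for illustration; the prompt and settings below are only an example and not a recommendation from the author):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("nakayama/DeDeDeP", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "best quality, masterpiece, detailed anime style of 1girl"
negative = "flat shading, flat color, lowres, bad anatomy, worst quality, low quality"
image = pipe(prompt, negative_prompt=negative).images[0]
image.save("example.png")
```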
# Examples
<img src="https://huggingface.co/nakayama/DeDeDeP/resolve/main/img/image01.png" style="max-width:400px;" width="50%"/>
```
(((best quality, masterpiece))), detailed ((anime)) style of 1girl cowboy shot with detailed wavy pink hair pink and detailed yellow eye yellow in summer London river with picturesque, cinematic lighting, dynamic angle
Negative prompt: [[3d]], (((((flat shading, flat color))))), retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb,bokeh
Steps: 25, Sampler: DDIM, CFG scale: 8, Seed: 3524697970, Size: 512x768, Model hash: ad14700f28, Denoising strength: 0.75, Clip skip: 2, ENSD: 31337, Hires resize: 768x1152, Hires steps: 25, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/DeDeDeP/resolve/main/img/image02.png" style="max-width:400px;" width="50%"/>
```
(((best quality, masterpiece))), detailed ((anime)) style of idol 1girl with detailed twintail green hair green and detailed green
Negative prompt: [[3d]], (((((flat shading, flat color))))), retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb
Steps: 25, Sampler: DDIM, CFG scale: 8, Seed: 722467098, Size: 768x512, Model hash: ad14700f28, Denoising strength: 0.75, Clip skip: 2, ENSD: 31337, Hires resize: 1152x768, Hires steps: 25, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/DeDeDeP/resolve/main/img/image03.png" style="max-width:400px;" width="50%"/>
```
best quality, masterpiece, detailed ((anime)) style of 1girl cowboy shot from front look at viewer and traditional japanese landscape, scenic view and lensflare
Negative prompt: flat shading, flat color, retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb, bokeh, dynamic pose
Steps: 20, Sampler: DDIM, CFG scale: 7, Seed: 546571640, Size: 768x512, Model hash: ad14700f28, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/DeDeDeP/resolve/main/img/image04.png" style="max-width:400px;" width="50%"/>
```
(((best quality, masterpiece))), detailed ((anime)) style of teenage 1boy wizard bust shot casting fire magic spell with fire fist in New York City, picturesque, golden hour, dynamic pose with iron gauntlet
Negative prompt: [[3d]], (((((flat shading, flat color))))), retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb and digit and hand, (((dynamic pose))), bokeh
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 726934798, Size: 512x768, Model hash: ad14700f28, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
<img src="https://huggingface.co/nakayama/DeDeDeP/resolve/main/img/image05.png" style="max-width:400px;" width="50%"/>
```
(((best quality, masterpiece))), detailed ((anime)) style of old man sitting on the chair and looking at viewer with (((intricate hand and digit))) at 1960s in his old room
Negative prompt: [[3d]], (((((flat shading, flat color))))), retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, inaccurate limb
Steps: 25, Sampler: DDIM, CFG scale: 8, Seed: 672383321, Size: 768x512, Model hash: ad14700f28, Denoising strength: 0.75, Clip skip: 2, ENSD: 31337, Hires resize: 1152x768, Hires steps: 25, Hires upscaler: R-ESRGAN 4x+ Anime6B
```
# Notes
When you upload works generated with this model to social media and similar sites, I would be happy if you add a tag such as #DeDeDeArt where tagging is available,
so that I can go and see them.
# License
Because this model is derived from Dreamlike Diffusion 1.0 / Dreamlike Photoreal 1.0, the **modified** CreativeML OpenRAIL-M license of those models applies.
The clauses below were originally provided as a DeepL Japanese translation of the modified terms; in case of differing interpretations, the English license text takes precedence.
- **You cannot host or use this model or its derivatives on websites/apps/etc. from which you earn, or plan to earn, revenue or donations. If you want to do so, email [email protected].**
- **You are free to host the model card and files (without actual inference or fine-tuning) on commercial and non-commercial websites/apps/etc. State the full model name (Dreamlike Diffusion 1.0 / Dreamlike Photoreal 1.0) and include a link to the model card ( https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0 / https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/ ).**
- **You are free to host the model and its derivatives on completely non-commercial websites/apps/etc. (meaning you earn no revenue or donations whatsoever). State the full model name (Dreamlike Diffusion 1.0 / Dreamlike Photoreal 1.0) and include a link to the model card ( https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0 / https://huggingface.co/dreamlike-art/dreamlike-photoreal-1.0/ ).**
- **Teams of 10 or fewer people may freely use the outputs of the model, or of its derivatives, for commercial purposes.**
- You cannot use this model to deliberately create or share illegal or harmful outputs or content.
- The authors claim no rights over the outputs you generate. You are free to use them, and you are accountable for their use, which must not violate the provisions set out in the license.
- You may redistribute the weights. If you do, note that you must share a copy of the **modified** CreativeML OpenRAIL-M license, including the same use restrictions, with all of your users (please read the license fully and carefully). The full license text is available here: https://huggingface.co/nakayama/DeDeDeP/blob/main/License.md
|
minoosh/wav2vec2-base-finetuned-ie
|
minoosh
|
wav2vec2
| 27 | 2 |
transformers
| 0 |
audio-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,219 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ie
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.5355
- eval_accuracy: 0.4318
- eval_runtime: 111.662
- eval_samples_per_second: 17.983
- eval_steps_per_second: 0.564
- epoch: 8.38
- step: 520
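A hedged usage sketch (not part of the original card): the checkpoint can be tried with the `audio-classification` pipeline, assuming 16 kHz mono input as expected by the wav2vec2-base feature extractor; the file name is hypothetical.
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub and score a local recording.
classifier = pipeline("audio-classification", model="minoosh/wav2vec2-base-finetuned-ie")
print(classifier("speech_sample.wav"))  # hypothetical 16 kHz mono recording
```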
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ALANZI/imamu_arabic_sentimentAnalysis
|
ALANZI
|
bert
| 8 | 91 |
transformers
| 0 |
text-classification
| true | false | false | null |
['ar']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 613 |
<p> This model classifies Arabic text into 3 categories: </p>
<ul>
<li> Positive </li>
<li> Neutral </li>
<li> Negative </li>
</ul>
<p> The model was built using an Arabic dataset labeled with three classes (positive, neutral, negative), where each class contains 30,646 texts. </p>
<p> - Model accuracy: 84% </p>
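A hedged usage sketch (not part of the original card): the checkpoint can be called through the `text-classification` pipeline; the label names returned depend on how `id2label` is configured in the model's config.
```python
from transformers import pipeline

# Load the Arabic sentiment classifier and score an example sentence.
classifier = pipeline("text-classification", model="ALANZI/imamu_arabic_sentimentAnalysis")
print(classifier("الخدمة كانت ممتازة"))  # "The service was excellent"
```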
<p> This model was created by students of Imam Mohammad Ibn Saud Islamic University: </p>
<ul dir="rtl">
<li> عبدالرحمن عقاب العنزي </li>
<li> زياد محمد العنزي </li>
<li> يوسف خالد التركي </li>
</ul>
<p> Under the supervision of Dr. زياد الشيخ </p>
|
mantury/ppo-LunarLander-v2
|
mantury
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **MLP** Agent playing **LunarLander-v2**
This is a trained model of a **MLP** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Erwanlbv/q-FrozenLake-v1-4x4-noSlippery
|
Erwanlbv
| null | 5 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 397 |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook
# (it downloads and unpickles the Q-table from the Hub).
model = load_from_hub(repo_id="Erwanlbv/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
relbert/relbert-roberta-large-nce-c-nell
|
relbert
|
roberta
| 30 | 6 |
transformers
| 0 |
feature-extraction
| true | false | false | null | null |
['relbert/nell_relational_similarity']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 4,775 |
# relbert/relbert-roberta-large-nce-c-nell
RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning).
This model achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-c-nell/raw/main/analogy.forward.json)):
- Accuracy on SAT (full): 0.42780748663101603
- Accuracy on SAT: 0.4391691394658754
- Accuracy on BATS: 0.48360200111172874
- Accuracy on U2: 0.39473684210526316
- Accuracy on U4: 0.4513888888888889
- Accuracy on Google: 0.728
- Accuracy on ConceptNet Analogy: 0.1266778523489933
- Accuracy on T-Rex Analogy: 0.6557377049180327
- Accuracy on NELL-ONE Analogy: 0.78
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-c-nell/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9044749133644719
- Micro F1 score on CogALexV: 0.8328638497652581
- Micro F1 score on EVALution: 0.6316359696641387
- Micro F1 score on K&H+N: 0.9655700076511095
- Micro F1 score on ROOT09: 0.8928235662801629
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-c-nell/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7407142857142857
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-large-nce-c-nell")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
```
### Training hyperparameters
- model: roberta-large
- max_length: 64
- epoch: 10
- batch: 31
- random_seed: 0
- lr: 5e-06
- lr_warmup: 10
- aggregation_mode: average_no_mask
- data: relbert/nell_relational_similarity
- data_name: None
- exclude_relation: None
- split: train
- split_valid: validation
- loss_function: nce
- classification_loss: False
- loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 10}
- augment_negative_by_positive: True
See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-c-nell/raw/main/finetuning_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.emnlp-main.712/).
```
@inproceedings{ushio-etal-2021-distilling,
title = "Distilling Relation Embeddings from Pretrained Language Models",
author = "Ushio, Asahi and
Camacho-Collados, Jose and
Schockaert, Steven",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.712",
doi = "10.18653/v1/2021.emnlp-main.712",
pages = "9044--9062",
abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert",
}
```
|
Amiko/ppo-Huggy
|
Amiko
| null | 36 | 10 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
| false | true | true | 816 |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Amiko/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pfunk/Pong-v4-DQPN_p2_e0.10-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,979 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p2_e0.10.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p2_e0.10]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p2_e0.10 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.10-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.10-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.10-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p2_e0.10 --start-policy-f 2000 --end-policy-f 1000 --evaluation-fraction 0.10 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.1,
'exp_name': 'DQPN_p2_e0.10',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 2000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Mizuiro-sakura/luke-large-commonsenseqa-japanese
|
Mizuiro-sakura
|
luke
| 13 | 3 |
transformers
| 0 |
multiple-choice
| true | false | false |
mit
|
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['luke', 'pytorch', 'transformers', 'commonsenseqa', 'commonsense-qa', 'CommonsenseQA', 'commonsense_qa', 'jcommonsenseqa']
| false | true | true | 3,014 |
# This model is luke-japanese-large fine-tuned for JCommonsenseQA (multiple-choice question answering).
This model was obtained by fine-tuning luke-japanese-large on JCommonsenseQA from Yahoo Japan's JGLUE ( https://github.com/yahoojapan/JGLUE ).
It can be used for multiple-choice question-answering tasks.
# This model is fine-tuned model for commonsenseqa which is based on luke-japanese-large
This model is fine-tuned by using yahoo japan JGLUE JCommonsenseQA dataset.
You could use this model for commonsenseqa tasks.
# Accuracy of the model
The model achieves an accuracy of
83.82484361036744,
which is very high compared with other language models
(for reference: BERT 72.0, XLM-RoBERTa base 68.7).
# How to use
Install transformers and sentencepiece, then run the code below to solve multiple-choice commonsense QA tasks.
```python
from transformers import AutoTokenizer, AutoModelForMultipleChoice
import torch
import numpy as np
# Load the model
tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/luke-large-commonsenseqa-japanese')
model = AutoModelForMultipleChoice.from_pretrained('Mizuiro-sakura/luke-large-commonsenseqa-japanese')
# Set the question and the answer choices
question = '電子機器で使用される最も主要な電子回路基板の事をなんと言う?'
choice1 = '掲示板'
choice2 = 'パソコン'
choice3 = 'マザーボード'
choice4 = 'ハードディスク'
choice5 = 'まな板'
# Tokenize (encode) the question paired with each of the five choices
token = tokenizer([question,question,question,question,question],[choice1,choice2,choice3,choice4,choice5],return_tensors='pt',padding=True)
leng=len(token['input_ids'][0])
# Prepare the arrays to feed into the model
X1 = np.empty(shape=(1, 5, leng))
X2 = np.empty(shape=(1, 5, leng))
X1[0, :, :] = token['input_ids']
X2[0, :, :] = token['attention_mask']
# Feed the tokens to the model
results = model(torch.tensor(X1).to(torch.int64),torch.tensor(X2).to(torch.int64))
# Get the index of the highest-scoring choice
max_result=torch.argmax(results.logits)
print(max_result)
```
# What is LUKE? [1]
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.
LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model of words and entities. LUKE treats words and entities as independent tokens and outputs contextualized representations that take their context into account.
# Acknowledgments
I would like to thank LUKE's developer, Mr. Yamada (@ikuyamada), and Studio Ousia (@StudioOusia).
# Citation
[1]@inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
|
gabriellabollici/clasificador-rottentomatoes
|
gabriellabollici
|
electra
| 10 | 2 |
transformers
| 0 |
text-classification
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['classification', 'generated_from_trainer']
| true | true | true | 1,372 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-rottentomatoes
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0103
- Accuracy: 0.4783
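A hedged usage sketch (not part of the original card): inference through the `text-classification` pipeline; the Spanish example sentence is illustrative and the returned labels depend on the checkpoint's `id2label` mapping.
```python
from transformers import pipeline

# Load the fine-tuned Electra classifier and score an example review.
classifier = pipeline("text-classification", model="gabriellabollici/clasificador-rottentomatoes")
print(classifier("Una película entretenida y muy bien actuada."))
```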
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6937 | 1.0 | 853 | 0.8311 | 0.0 |
| 0.6578 | 2.0 | 1706 | 0.7352 | 0.6190 |
| 0.5328 | 3.0 | 2559 | 1.0103 | 0.4783 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
pfunk/Pong-v4-DQPN_p2_e0.25-seed1
|
pfunk
| null | 11 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 1,980 |
# (CleanRL) **DQN** Agent Playing **Pong-v4**
This is a trained model of a DQN agent playing Pong-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p2_e0.25.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[DQPN_p2_e0.25]"
python -m cleanrl_utils.enjoy --exp-name DQPN_p2_e0.25 --env-id Pong-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.25-seed1/raw/main/dqpn_atari.py
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.25-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p2_e0.25-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqpn_atari.py --exp-name DQPN_p2_e0.25 --start-policy-f 2000 --end-policy-f 1000 --evaluation-fraction 0.25 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'end_e': 0.01,
'end_policy_f': 1000,
'env_id': 'Pong-v4',
'evaluation_fraction': 0.25,
'exp_name': 'DQPN_p2_e0.25',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 80000,
'policy_tau': 1.0,
'save_model': True,
'seed': 1,
'start_e': 1,
'start_policy_f': 2000,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
dfm794/poca-SoccerTwos-baseline
|
dfm794
| null | 20 | 671 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 849 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: dfm794/poca-SoccerTwos-baseline
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Amitesh007/finetuned-eng-hi-translation
|
Amitesh007
|
marian
| 12 | 0 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,210 |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Amitesh007/finetuned-eng-hi-translation
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7521
- Validation Loss: 0.6740
- Epoch: 0
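A hedged usage sketch (not part of the original card): the checkpoint was trained with Keras/TensorFlow, so the TF backend is requested explicitly; the input sentence is illustrative only.
```python
from transformers import pipeline

# English-to-Hindi translation with the fine-tuned MarianMT checkpoint (TensorFlow weights).
translator = pipeline("translation", model="Amitesh007/finetuned-eng-hi-translation", framework="tf")
print(translator("How are you today?"))
```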
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7521 | 0.6740 | 0 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kinkpunk/poca-MLAgents-SoccerTwos-v0.9
|
kinkpunk
| null | 20 | 634 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 864 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn .\config\poca\SoccerTwos.yaml --run-id="poca-SoccerTwos-v0.9" --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: kinkpunk/poca-MLAgents-SoccerTwos-v0.9
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Beegbrain/dqn-SpaceInvadersNoFrameskip-v4
|
Beegbrain
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,220 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Beegbrain -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Beegbrain -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Beegbrain
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
eldraco/poca-SoccerTwos-RoyKent
|
eldraco
| null | 20 | 633 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 849 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: eldraco/poca-SoccerTwos-RoyKent
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
relbert/relbert-roberta-large-nce-d-nell
|
relbert
|
roberta
| 30 | 6 |
transformers
| 0 |
feature-extraction
| true | false | false | null | null |
['relbert/nell_relational_similarity']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 4,788 |
# relbert/relbert-roberta-large-nce-d-nell
RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning).
This model achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-nell/raw/main/analogy.forward.json)):
- Accuracy on SAT (full): 0.4358288770053476
- Accuracy on SAT: 0.44510385756676557
- Accuracy on BATS: 0.5441912173429683
- Accuracy on U2: 0.4605263157894737
- Accuracy on U4: 0.46296296296296297
- Accuracy on Google: 0.804
- Accuracy on ConceptNet Analogy: 0.17281879194630873
- Accuracy on T-Rex Analogy: 0.6721311475409836
- Accuracy on NELL-ONE Analogy: 0.8383333333333334
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-nell/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9004068103058611
- Micro F1 score on CogALexV: 0.822300469483568
- Micro F1 score on EVALution: 0.6397616468039004
- Micro F1 score on K&H+N: 0.9600751199833066
- Micro F1 score on ROOT09: 0.8859291758069571
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-nell/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7815079365079365
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-large-nce-d-nell")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
```
### Training hyperparameters
- model: roberta-large
- max_length: 64
- epoch: 10
- batch: 31
- random_seed: 0
- lr: 5e-06
- lr_warmup: 10
- aggregation_mode: average_no_mask
- data: relbert/nell_relational_similarity
- data_name: None
- exclude_relation: None
- split: train
- split_valid: validation
- loss_function: nce
- classification_loss: False
- loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 10}
- augment_negative_by_positive: True
See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-d-nell/raw/main/finetuning_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.emnlp-main.712/).
```
@inproceedings{ushio-etal-2021-distilling,
title = "Distilling Relation Embeddings from Pretrained Language Models",
author = "Ushio, Asahi and
Camacho-Collados, Jose and
Schockaert, Steven",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.712",
doi = "10.18653/v1/2021.emnlp-main.712",
pages = "9044--9062",
abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert",
}
```
|
khatkeashish/ppo-PyramidsRND
|
khatkeashish
| null | 16 | 1 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
| false | true | true | 838 |
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: khatkeashish/ppo-PyramidsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cfisicaro/dqn-SpaceInvadersNoFrameskip-v4
|
cfisicaro
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,220 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cfisicaro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cfisicaro -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga cfisicaro
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
dandrade/jlg-model
|
dandrade
|
gpt2
| 12 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
|
['es']
|
['dandrade/canciones_juan_luis_guerra']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,259 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jlg-model
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4882
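A hedged usage sketch (not part of the original card): plain text generation with the fine-tuned Spanish GPT-2; the prompt and the sampling settings are illustrative only.
```python
from transformers import pipeline

# Generate a short lyric-style continuation from a Spanish prompt.
generator = pipeline("text-generation", model="dandrade/jlg-model")
print(generator("Tú eres la", max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```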
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 42 | 3.5391 |
| No log | 2.0 | 84 | 3.5001 |
| No log | 3.0 | 126 | 3.4882 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
relbert/relbert-roberta-large-nce-e-nell
|
relbert
|
roberta
| 30 | 6 |
transformers
| 0 |
feature-extraction
| true | false | false | null | null |
['relbert/nell_relational_similarity']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 4,778 |
# relbert/relbert-roberta-large-nce-e-nell
RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning).
This model achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-e-nell/raw/main/analogy.forward.json)):
- Accuracy on SAT (full): 0.41711229946524064
- Accuracy on SAT: 0.42136498516320475
- Accuracy on BATS: 0.5258476931628683
- Accuracy on U2: 0.42543859649122806
- Accuracy on U4: 0.44212962962962965
- Accuracy on Google: 0.748
- Accuracy on ConceptNet Analogy: 0.15771812080536912
- Accuracy on T-Rex Analogy: 0.6830601092896175
- Accuracy on NELL-ONE Analogy: 0.865
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-e-nell/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8869971372608106
- Micro F1 score on CogALexV: 0.7976525821596244
- Micro F1 score on EVALution: 0.5926327193932828
- Micro F1 score on K&H+N: 0.9606315643040968
- Micro F1 score on ROOT09: 0.8746474459417111
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-e-nell/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.5538095238095238
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-large-nce-e-nell")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
```
### Training hyperparameters
- model: roberta-large
- max_length: 64
- epoch: 10
- batch: 31
- random_seed: 0
- lr: 5e-06
- lr_warmup: 10
- aggregation_mode: average_no_mask
- data: relbert/nell_relational_similarity
- data_name: None
- exclude_relation: None
- split: train
- split_valid: validation
- loss_function: nce
- classification_loss: False
- loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 10}
- augment_negative_by_positive: True
See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-e-nell/raw/main/finetuning_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.emnlp-main.712/).
```
@inproceedings{ushio-etal-2021-distilling,
title = "Distilling Relation Embeddings from Pretrained Language Models",
author = "Ushio, Asahi and
Camacho-Collados, Jose and
Schockaert, Steven",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.712",
doi = "10.18653/v1/2021.emnlp-main.712",
pages = "9044--9062",
abstract = "Pre-trained language models have been found to capture a surprisingly rich amount of lexical knowledge, ranging from commonsense properties of everyday concepts to detailed factual knowledge about named entities. Among others, this makes it possible to distill high-quality word vectors from pre-trained language models. However, it is currently unclear to what extent it is possible to distill relation embeddings, i.e. vectors that characterize the relationship between two words. Such relation embeddings are appealing because they can, in principle, encode relational knowledge in a more fine-grained way than is possible with knowledge graphs. To obtain relation embeddings from a pre-trained language model, we encode word pairs using a (manually or automatically generated) prompt, and we fine-tune the language model such that relationally similar word pairs yield similar output vectors. We find that the resulting relation embeddings are highly competitive on analogy (unsupervised) and relation classification (supervised) benchmarks, even without any task-specific fine-tuning. Source code to reproduce our experimental results and the model checkpoints are available in the following repository: https://github.com/asahi417/relbert",
}
```
|
wooihen/a2c-PandaReachDense-v2
|
wooihen
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 358 |
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
lora-library/yarosnnv
|
lora-library
| null | 6 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
| false | true | true | 318 |
# LoRA DreamBooth - yarosnnv
These are LoRA adaption weights for [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). The weights were trained on the instance prompt "yarosnnv" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
|
Youngdal/ppo-LunarLander-v2
|
Youngdal
| null | 12 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 350 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
DaniilSirota/Reinforce_PG
|
DaniilSirota
| null | 6 | 0 | null | 1 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 286 |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
alitavanaali/music_layoutlmv3_model
|
alitavanaali
|
layoutlmv3
| 24 | 27 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-nc-sa-4.0
| null |
['sroie']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,194 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# music_layoutlmv3_model
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0085
- Precision: 0.9694
- Recall: 0.9694
- F1: 0.9694
- Accuracy: 0.9987
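A hedged inference sketch (not part of the original card): the processor is loaded from the base checkpoint with built-in OCR (which requires `pytesseract`), since it is not certain that the fine-tuned repo ships its own processor files; the image path is a hypothetical document scan.
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Processor from the base model (with OCR), token classifier from the fine-tuned repo.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("alitavanaali/music_layoutlmv3_model")

image = Image.open("document.png").convert("RGB")  # hypothetical scanned document
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```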
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 12.5 | 100 | 0.0094 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| No log | 25.0 | 200 | 0.0076 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| No log | 37.5 | 300 | 0.0079 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| No log | 50.0 | 400 | 0.0079 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| 0.0412 | 62.5 | 500 | 0.0080 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| 0.0412 | 75.0 | 600 | 0.0083 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| 0.0412 | 87.5 | 700 | 0.0083 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| 0.0412 | 100.0 | 800 | 0.0084 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| 0.0412 | 112.5 | 900 | 0.0084 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
| 0.0005 | 125.0 | 1000 | 0.0085 | 0.9694 | 0.9694 | 0.9694 | 0.9987 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.2.2
- Tokenizers 0.13.2
|
krenerd/msmarco-distilbert-cos-v5_en-ko-ja_10epoch
|
krenerd
|
xlm-roberta
| 17 | 48 |
sentence-transformers
| 1 |
sentence-similarity
| true | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,642 |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11258 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bhuvanesh25/whis-tam-small
|
bhuvanesh25
|
whisper
| 12 | 181 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ta']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,992 |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ta - Bharat Ramanathan (Kudos to him for developing it)
# This is a copy of his model for academic purposes.
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1803
- Wer: 17.1456
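A hedged usage sketch (not part of the original card): transcription through the `automatic-speech-recognition` pipeline; the audio file name is hypothetical and the chunk length is an illustrative choice for long recordings.
```python
from transformers import pipeline

# Transcribe a Tamil recording with the fine-tuned Whisper small checkpoint.
transcriber = pipeline("automatic-speech-recognition", model="bhuvanesh25/whis-tam-small", chunk_length_s=30)
print(transcriber("tamil_sample.wav")["text"])
```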
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3374 | 0.1 | 500 | 0.2579 | 23.3804 |
| 0.29 | 0.2 | 1000 | 0.2260 | 20.9937 |
| 0.2522 | 0.3 | 1500 | 0.2139 | 20.0682 |
| 0.2338 | 0.4 | 2000 | 0.2025 | 19.6785 |
| 0.223 | 0.5 | 2500 | 0.1979 | 18.3147 |
| 0.211 | 0.6 | 3000 | 0.1927 | 17.8276 |
| 0.2032 | 0.7 | 3500 | 0.1865 | 17.3892 |
| 0.1978 | 0.8 | 4000 | 0.1839 | 17.5353 |
| 0.1972 | 0.9 | 4500 | 0.1812 | 17.0969 |
| 0.1894 | 1.0 | 5000 | 0.1803 | 17.1456 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
mktz/Reinforce-PixelCopter
|
mktz
| null | 6 | 0 | null | 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
| true | true | true | 300 |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JYC333/poca-SoccerTwos-v1
|
JYC333
| null | 28 | 1,160 |
ml-agents
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
| false | true | true | 840 |
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: JYC333/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
splusminusx/dqn-SpaceInvadersNoFrameskip-v4
|
splusminusx
| null | 15 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,226 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga splusminusx -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga splusminusx -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga splusminusx
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
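As a programmatic alternative to the RL Zoo CLI above, the checkpoint can also be loaded directly with `huggingface_sb3` and Stable-Baselines3. This is only a sketch: the checkpoint filename follows the usual RL Zoo naming convention and is an assumption, not something stated in this card.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed filename inside the repo (RL Zoo naming convention).
checkpoint = load_from_hub("splusminusx/dqn-SpaceInvadersNoFrameskip-v4",
                           "dqn-SpaceInvadersNoFrameskip-v4.zip")
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: Atari wrappers plus 4-frame stacking.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```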
|
darthrevenge/dqn-SpaceInvadersNoFrameskip-v4
|
darthrevenge
| null | 14 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 2,230 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga darthrevenge -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga darthrevenge -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga darthrevenge
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
cleanrl/Asteroids-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Asteroids-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **Asteroids-v5**
This is a trained model of a PPO agent playing Asteroids-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Asteroids-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Asteroids-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Asteroids-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Asteroids-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Asteroids-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Asteroids-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
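Several of these values appear to be derived from others rather than set independently; under that assumption, the arithmetic works out as follows:
```python
num_envs, num_steps = 64, 128
total_timesteps = 50_000_000

batch_size = num_envs * num_steps            # 64 * 128 = 8192 transitions per update
minibatch_size = batch_size // 4             # 4 minibatches -> 2048
num_updates = total_timesteps // batch_size  # 50_000_000 // 8192 = 6103
```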
|
cleanrl/Frostbite-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Frostbite-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Boxing-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Boxing-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,152 |
# (CleanRL) **PPO** Agent Playing **Boxing-v5**
This is a trained model of a PPO agent playing Boxing-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Boxing-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Boxing-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Boxing-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/FishingDerby-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['FishingDerby-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,200 |
# (CleanRL) **PPO** Agent Playing **FishingDerby-v5**
This is a trained model of a PPO agent playing FishingDerby-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id FishingDerby-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'FishingDerby-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Beegbrain/a2c-AntBulletEnv-v0
|
Beegbrain
| null | 13 | 0 |
stable-baselines3
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
| true | true | true | 352 |
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
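Until the TODO above is filled in by the author, a minimal loading sketch could look like this (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename following the usual SB3 Hub naming convention.
checkpoint = load_from_hub("Beegbrain/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```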
|
cleanrl/BattleZone-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['BattleZone-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,184 |
# (CleanRL) **PPO** Agent Playing **BattleZone-v5**
This is a trained model of a PPO agent playing BattleZone-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id BattleZone-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id BattleZone-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'BattleZone-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/WizardOfWor-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['WizardOfWor-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,192 |
# (CleanRL) **PPO** Agent Playing **WizardOfWor-v5**
This is a trained model of a PPO agent playing WizardOfWor-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id WizardOfWor-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id WizardOfWor-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'WizardOfWor-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Alien-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Alien-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,144 |
# (CleanRL) **PPO** Agent Playing **Alien-v5**
This is a trained model of a PPO agent playing Alien-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Alien-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Alien-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Alien-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Alien-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Gravitar-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Gravitar-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,168 |
# (CleanRL) **PPO** Agent Playing **Gravitar-v5**
This is a trained model of a PPO agent playing Gravitar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Gravitar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Gravitar-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Gravitar-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Riverraid-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Riverraid-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **Riverraid-v5**
This is a trained model of a PPO agent playing Riverraid-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Riverraid-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Riverraid-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Riverraid-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Riverraid-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/PrivateEye-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['PrivateEye-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,184 |
# (CleanRL) **PPO** Agent Playing **PrivateEye-v5**
This is a trained model of a PPO agent playing PrivateEye-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id PrivateEye-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/PrivateEye-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/PrivateEye-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/PrivateEye-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id PrivateEye-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'PrivateEye-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/ChopperCommand-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['ChopperCommand-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,216 |
# (CleanRL) **PPO** Agent Playing **ChopperCommand-v5**
This is a trained model of a PPO agent playing ChopperCommand-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id ChopperCommand-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'ChopperCommand-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/YarsRevenge-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['YarsRevenge-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,192 |
# (CleanRL) **PPO** Agent Playing **YarsRevenge-v5**
This is a trained model of a PPO agent playing YarsRevenge-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id YarsRevenge-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/YarsRevenge-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/YarsRevenge-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/YarsRevenge-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id YarsRevenge-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'YarsRevenge-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Tutankham-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Tutankham-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **Tutankham-v5**
This is a trained model of a PPO agent playing Tutankham-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Tutankham-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Tutankham-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Tutankham-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Tutankham-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Tutankham-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Tutankham-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Kangaroo-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Kangaroo-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,168 |
# (CleanRL) **PPO** Agent Playing **Kangaroo-v5**
This is a trained model of a PPO agent playing Kangaroo-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Kangaroo-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Kangaroo-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Kangaroo-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Centipede-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Centipede-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **Centipede-v5**
This is a trained model of a PPO agent playing Centipede-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Centipede-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Centipede-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Jamesbond-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Jamesbond-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **Jamesbond-v5**
This is a trained model of a PPO agent playing Jamesbond-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Jamesbond-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Jamesbond-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/BankHeist-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['BankHeist-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,176 |
# (CleanRL) **PPO** Agent Playing **BankHeist-v5**
This is a trained model of a PPO agent playing BankHeist-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id BankHeist-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id BankHeist-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'BankHeist-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
cleanrl/Solaris-v5-sebulba_ppo_envpool-seed1
|
cleanrl
| null | 9 | 0 |
cleanrl
| 0 |
reinforcement-learning
| false | false | false | null | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Solaris-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
| true | true | true | 2,160 |
# (CleanRL) **PPO** Agent Playing **Solaris-v5**
This is a trained model of a PPO agent playing Solaris-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Solaris-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Solaris-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Solaris-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Solaris-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Solaris-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Solaris-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|