Dataset schema:

| Column | Type | Observed range |
|:---|:---|:---|
| repo_id | string | lengths 4–122 |
| author | string | lengths 2–38 |
| model_type | string | lengths 2–33 |
| files_per_repo | int64 | 2–39k |
| downloads_30d | int64 | 0–33.7M |
| library | string | lengths 2–37 |
| likes | int64 | 0–4.87k |
| pipeline | string | lengths 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–33 |
| languages | string | lengths 2–1.63k |
| datasets | string | lengths 2–2.58k |
| co2 | string | lengths 6–258 |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–46 |
| prs_closed | int64 | 0–34 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201–598k |
| readme | string | lengths 0–598k |
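Each record below lists these columns in order, one value per line, ending with the raw `readme` text. For orientation, here is a minimal sketch of how such a dump could be queried once exported to a table; the Parquet filename is a placeholder and not something this dump provides.

```python
# Minimal sketch, assuming the records above have been exported to a local
# Parquet file; "model_repos.parquet" is a hypothetical filename.
import pandas as pd

df = pd.read_parquet("model_repos.parquet")

# One row per Hub repo; column names follow the schema table above.
rl = df[df["pipeline"] == "reinforcement-learning"]
print(rl[["repo_id", "library", "files_per_repo", "likes"]].head())

# The framework flags are plain booleans, so they compose directly.
pytorch_only = df[df["pytorch"] & ~df["tensorflow"] & ~df["jax"]]
print(len(pytorch_only), "repos ship PyTorch weights only")
```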
jojoUla/bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR10-labelled-40
jojoUla
bert
17
0
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,297
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-sigir-support-no-label-40-sigir-tune2nd-LR10-labelled-40 This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-no-label-40) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4091 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.0728 | 1.0 | 1 | 3.3883 | | 3.6431 | 2.0 | 2 | 4.0303 | | 3.4087 | 3.0 | 3 | 2.1767 | | 2.522 | 4.0 | 4 | 2.3348 | | 1.8187 | 5.0 | 5 | 0.7921 | | 1.5562 | 6.0 | 6 | 1.6986 | | 1.505 | 7.0 | 7 | 1.7494 | | 1.4673 | 8.0 | 8 | 1.5797 | | 1.22 | 9.0 | 9 | 1.7811 | | 1.5497 | 10.0 | 10 | 2.0455 | | 0.8699 | 11.0 | 11 | 2.7731 | | 1.6008 | 12.0 | 12 | 2.3984 | | 0.9909 | 13.0 | 13 | 1.7870 | | 1.4982 | 14.0 | 14 | 1.5336 | | 0.88 | 15.0 | 15 | 0.5394 | | 0.5231 | 16.0 | 16 | 0.5391 | | 1.1294 | 17.0 | 17 | 1.2333 | | 1.5638 | 18.0 | 18 | 1.4246 | | 1.5274 | 19.0 | 19 | 0.7396 | | 1.1525 | 20.0 | 20 | 0.7160 | | 0.7708 | 21.0 | 21 | 3.9853 | | 0.6681 | 22.0 | 22 | 1.8747 | | 0.6073 | 23.0 | 23 | 1.0765 | | 0.64 | 24.0 | 24 | 0.7888 | | 1.3657 | 25.0 | 25 | 1.0972 | | 1.1772 | 26.0 | 26 | 0.6801 | | 1.6493 | 27.0 | 27 | 0.8378 | | 0.8971 | 28.0 | 28 | 0.5728 | | 1.3524 | 29.0 | 29 | 1.7829 | | 0.7754 | 30.0 | 30 | 1.8142 | | 1.1628 | 31.0 | 31 | 0.7712 | | 0.4534 | 32.0 | 32 | 1.3779 | | 0.6799 | 33.0 | 33 | 1.0512 | | 1.2813 | 34.0 | 34 | 0.5455 | | 0.6709 | 35.0 | 35 | 1.8824 | | 0.4398 | 36.0 | 36 | 2.1419 | | 0.1491 | 37.0 | 37 | 1.2215 | | 0.7378 | 38.0 | 38 | 1.7122 | | 0.657 | 39.0 | 39 | 1.4764 | | 0.9551 | 40.0 | 40 | 0.5116 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Ransaka/cartpole-v1
Ransaka
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
286
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
aisingapore/coherence-momentum
aisingapore
null
4
0
transformers
0
feature-extraction
true
false
false
mit
['en']
['wall-street-journal']
null
0
0
0
0
0
0
0
['coherence', 'feature-extraction']
true
true
true
5,280
# Coherence Modelling You can **test the model** at [coherence modelling](https://huggingface.co/spaces/aisingapore/coherence-modelling).<br /> If you want to find out more information, please contact us at [email protected]. ## Table of Contents - [Model Details](#model-details) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Model Parameters](#parameters) - [Other Information](#other-information) ## Model Details **Model Name:** Coherence-Momentum - **Description:** This is a neural network model that makes use of a momentum encoder and hard negative mining during training. This model is able to take in a piece of text and output a coherence score. The coherence score is only meant for comparison, i.e. it is only meaningful when used to compare between two texts, and the text with the higher coherence score is deemed to be more coherent by the model. - **Paper:** Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022 (pp. 6044-6059). - **Author(s):** Jwalapuram, P., Joty, S., & Lin, X. (2022). - **URL:** https://aclanthology.org/2022.acl-long.418/ # How to Get Started With the Model ## Install Python package SGnlp is an initiative by AI Singapore's NLP Hub. They aim to bridge the gap between research and industry, promote translational research, and encourage adoption of NLP techniques in the industry. <br><br> Various NLP models, other than aspect sentiment analysis are available in the python package. You can try them out at [SGNLP-Demo](https://sgnlp.aisingapore.net/) | [SGNLP-Github](https://github.com/aisingapore/sgnlp). ```python pip install sgnlp ``` ## Examples For more full code (such as Coherence-Momentum), please refer to this [github](https://github.com/aisingapore/sgnlp). <br> Alternatively, you can also try out the [demo](https://huggingface.co/spaces/aisingapore/coherence-modelling) for Coherence-Momentum. Example of Coherence Momentum modelling: ```python from sgnlp.models.coherence_momentum import CoherenceMomentumModel, CoherenceMomentumConfig, \ CoherenceMomentumPreprocessor # Load Model config = CoherenceMomentumConfig.from_pretrained( "https://storage.googleapis.com/sgnlp/models/coherence_momentum/config.json" ) model = CoherenceMomentumModel.from_pretrained( "https://storage.googleapis.com/sgnlp/models/coherence_momentum/pytorch_model.bin", config=config ) preprocessor = CoherenceMomentumPreprocessor(config.model_size, config.max_len) # Example text inputs text1 = "Companies listed below reported quarterly profit substantially different from the average of analysts ' " \ "estimates . The companies are followed by at least three analysts , and had a minimum five-cent change in " \ "actual earnings per share . Estimated and actual results involving losses are omitted . The percent " \ "difference compares actual profit with the 30-day estimate where at least three analysts have issues " \ "forecasts in the past 30 days . Otherwise , actual profit is compared with the 300-day estimate . " \ "Source : Zacks Investment Research" text2 = "The companies are followed by at least three analysts , and had a minimum five-cent change in actual " \ "earnings per share . The percent difference compares actual profit with the 30-day estimate where at least " \ "three analysts have issues forecasts in the past 30 days . Otherwise , actual profit is compared with the " \ "300-day estimate . Source : Zacks Investment Research. Companies listed below reported quarterly profit " \ "substantially different from the average of analysts ' estimates . Estimated and actual results involving " \ "losses are omitted ." text1_tensor = preprocessor([text1]) text2_tensor = preprocessor([text2]) text1_score = model.get_main_score(text1_tensor["tokenized_texts"]).item() text2_score = model.get_main_score(text2_tensor["tokenized_texts"]).item() print(text1_score, text2_score) ``` # Training The training datasets can be retrieved from Permuted dataset derived from Linguistic Data Consortium's (LDC) Wall Street Journal (WSJ) dataset. Please contact the authors to get the dataset if you have a valid LDC license. #### Training Results - **Training Time:** ~24 hours for ~46000 steps (batch size of 1) on a single A100 GPU - **Datasets:** Permuted dataset derived from Linguistic Data Consortium's (LDC) Wall Street Journal (WSJ) dataset. - **Training Config:** [link](https://storage.googleapis.com/sgnlp/models/coherence_momentum/config.json) # Model Parameters - **Model Weights:** [link](https://storage.googleapis.com/sgnlp/models/coherence_momentum/pytorch_model.bin) - **Model Inputs:** A paragraph of text. During training, each positive example can be paired with one or more negative examples. - **Model Outputs:** Coherence score for the input text. - **Model Size:** ~930MB - **Model Inference Info:** Not available. - **Usage Scenarios:** Essay scoring, summarization, language generation. # Other Information - **Original Code:** [link](https://github.com/ntunlp/coherence-paradigm)
cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['BattleZone-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,303
# (CleanRL) **PPO** Agent Playing **BattleZone-v5** This is a trained model of a PPO agent playing BattleZone-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id BattleZone-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id BattleZone-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'BattleZone-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/BankHeist-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['BankHeist-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **BankHeist-v5** This is a trained model of a PPO agent playing BankHeist-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id BankHeist-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/BankHeist-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id BankHeist-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'BankHeist-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['BattleZone-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,303
# (CleanRL) **PPO** Agent Playing **BattleZone-v5** This is a trained model of a PPO agent playing BattleZone-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id BattleZone-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/BattleZone-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id BattleZone-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'BattleZone-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Mizuiro-sakura/luke-japanese-base-finetuned-jsts
Mizuiro-sakura
luke
13
10
transformers
0
text-classification
true
false
false
mit
['ja']
null
null
0
0
0
0
0
0
0
['luke', 'pytorch', 'transformers', 'jsts', 'stsb', 'sentence-similarity', 'SentenceSimilarity']
false
true
true
2,583
# This model is a fine-tuned version of luke-japanese-base for JSTS (sentence similarity) This model was fine-tuned from luke-japanese-base on the JSTS task of yahoo japan/JGLUE ( https://github.com/yahoojapan/JGLUE ). It can be used to score the similarity of two sentences, where 5 is the highest score. # Model accuracy The model's accuracy, measured by the Pearson correlation coefficient, is 0.8971. # How to use Install transformers and sentencepiece, then run the following code to perform the JSTS (sentence-similarity) task. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import numpy as np tokenizer=AutoTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-jsts') model=AutoModelForSequenceClassification.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-jsts') sentence1='今日は銀座に買い物に出かけた' sentence2='今日は銀座に服を買いに出かけた' token=tokenizer(sentence1,sentence2) import torch tensor_input_ids = torch.tensor(token["input_ids"]) tensor_attention_masks = torch.tensor(token["attention_mask"]) outputs=model(tensor_input_ids.unsqueeze(0), tensor_attention_masks.unsqueeze(0)) print(outputs.logits[0][1]*5) ``` # What is LUKE? [1] LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model of words and entities. # Acknowledgments I would like to thank Mr. Yamada @ikuyamada and Studio ousia @StudioOusia, the developers of LUKE. # Citation [1]@inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Bowling-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,279
# (CleanRL) **PPO** Agent Playing **Bowling-v5** This is a trained model of a PPO agent playing Bowling-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Bowling-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Bowling-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Bowling-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Berzerk-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,279
# (CleanRL) **PPO** Agent Playing **Berzerk-v5** This is a trained model of a PPO agent playing Berzerk-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Berzerk-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Berzerk-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Berzerk-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,279
# (CleanRL) **PPO** Agent Playing **Berzerk-v5** This is a trained model of a PPO agent playing Berzerk-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Berzerk-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Berzerk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Berzerk-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Berzerk-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
pfunk/Pong-v4-DQPN_p500_e0.25-seed1
pfunk
null
11
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,000
# (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p500_e0.25.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p500_e0.25]" python -m cleanrl_utils.enjoy --exp-name DQPN_p500_e0.25 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.25-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.25-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p500_e0.25-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p500_e0.25 --start-policy-f 500000 --end-policy-f 1000 --evaluation-fraction 0.25 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 1000, 'env_id': 'Pong-v4', 'evaluation_fraction': 0.25, 'exp_name': 'DQPN_p500_e0.25', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 500000, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['BeamRider-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **BeamRider-v5** This is a trained model of a PPO agent playing BeamRider-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id BeamRider-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'BeamRider-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Bowling-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,279
# (CleanRL) **PPO** Agent Playing **Bowling-v5** This is a trained model of a PPO agent playing Bowling-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Bowling-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Bowling-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Bowling-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Bowling-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['BeamRider-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **BeamRider-v5** This is a trained model of a PPO agent playing BeamRider-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id BeamRider-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/BeamRider-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id BeamRider-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'BeamRider-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Boxing-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,271
# (CleanRL) **PPO** Agent Playing **Boxing-v5** This is a trained model of a PPO agent playing Boxing-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Boxing-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Boxing-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Boxing-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Boxing-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,271
# (CleanRL) **PPO** Agent Playing **Boxing-v5** This is a trained model of a PPO agent playing Boxing-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Boxing-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Boxing-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Boxing-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Boxing-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Breakout-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,287
# (CleanRL) **PPO** Agent Playing **Breakout-v5** This is a trained model of a PPO agent playing Breakout-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Breakout-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Breakout-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Breakout-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Breakout-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,287
# (CleanRL) **PPO** Agent Playing **Breakout-v5** This is a trained model of a PPO agent playing Breakout-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Breakout-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Breakout-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Breakout-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Breakout-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Okyx/NERTESTINGCAROLINE2
Okyx
bert
8
14
transformers
0
token-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,558
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # NERTESTINGCAROLINE2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0055 - Validation Loss: 0.0050 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10395, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.0833 | 0.0156 | 0 | | 0.0100 | 0.0060 | 1 | | 0.0055 | 0.0050 | 2 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
atorre/poca-SoccerTwos-30M
atorre
null
24
127
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
844
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: atorre/poca-SoccerTwos-30M 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
fathyshalab/massive_social-roberta-large-v1-3-7
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,460
# fathyshalab/massive_social-roberta-large-v1-3-7 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-3-7") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
aichina/ttz-470
aichina
null
27
3
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
1,034
### ttz_470 Dreambooth model trained by aichina with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: ttz (use that in your prompt) ![ttz 0](https://huggingface.co/aichina/ttz-470/resolve/main/concept_images/ttz_%281%29.jpg)![ttz 1](https://huggingface.co/aichina/ttz-470/resolve/main/concept_images/ttz_%282%29.jpg)![ttz 2](https://huggingface.co/aichina/ttz-470/resolve/main/concept_images/ttz_%283%29.jpg)![ttz 3](https://huggingface.co/aichina/ttz-470/resolve/main/concept_images/ttz_%284%29.jpg)![ttz 4](https://huggingface.co/aichina/ttz-470/resolve/main/concept_images/ttz_%285%29.jpg)![ttz 5](https://huggingface.co/aichina/ttz-470/resolve/main/concept_images/ttz_%286%29.jpg)
fathyshalab/massive_transport-roberta-large-v1-3-3
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,466
# fathyshalab/massive_transport-roberta-large-v1-3-3 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_transport-roberta-large-v1-3-3") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
jeremy8767/sd-class-butterflies-32
jeremy8767
null
2
0
diffusers
0
unconditional-image-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
false
true
true
367
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('jeremy8767/sd-class-butterflies-32') image = pipeline().images[0] image ```
LuisaRomana/clasificador-muchocine
LuisaRomana
xlm-roberta
11
0
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['classification', 'generated_from_trainer']
true
true
true
1,323
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5297 - Accuracy: 0.3239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.5268 | 0.3239 | | 1.5534 | 2.0 | 776 | 1.5263 | 0.3239 | | 1.5313 | 3.0 | 1164 | 1.5297 | 0.3239 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Centipede-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **Centipede-v5** This is a trained model of a PPO agent playing Centipede-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Centipede-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Centipede-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
fathyshalab/massive_calendar-roberta-large-v1-3-93
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,466
# fathyshalab/massive_calendar-roberta-large-v1-3-93 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_calendar-roberta-large-v1-3-93") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
fathyshalab/massive_play-roberta-large-v1-3-71
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,458
# fathyshalab/massive_play-roberta-large-v1-3-71 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-3-71") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
lamducanhndgv/sst2-custom-setfit-model
lamducanhndgv
mpnet
13
4
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,414
# sst2-custom-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("lamducanhndgv/sst2-custom-setfit-model") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Anorak/layoutlm-funsd
Anorak
layoutlm
13
0
transformers
0
token-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
9,210
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6857 - Answer: {'precision': 0.7176981541802389, 'recall': 0.8170580964153276, 'f1': 0.7641618497109827, 'number': 809} - Header: {'precision': 0.28368794326241137, 'recall': 0.33613445378151263, 'f1': 0.3076923076923077, 'number': 119} - Question: {'precision': 0.7773820124666073, 'recall': 0.819718309859155, 'f1': 0.7979890310786105, 'number': 1065} - Overall Precision: 0.7204 - Overall Recall: 0.7898 - Overall F1: 0.7535 - Overall Accuracy: 0.8139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 1.8064 | 1.0 | 10 | 1.6080 | {'precision': 0.020618556701030927, 'recall': 0.012360939431396786, 'f1': 0.01545595054095827, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.2702127659574468, 'recall': 0.11924882629107982, 'f1': 0.16547231270358306, 'number': 1065} | 0.1435 | 0.0687 | 0.0929 | 0.3378 | | 1.4826 | 2.0 | 20 | 1.2520 | {'precision': 0.20166320166320167, 'recall': 0.23980222496909764, 'f1': 0.21908526256352345, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4309507286606523, 'recall': 0.5830985915492958, 'f1': 0.49561053471667993, 'number': 1065} | 0.3392 | 0.4089 | 0.3708 | 0.5993 | | 1.1438 | 3.0 | 30 | 0.9584 | {'precision': 0.463519313304721, 'recall': 0.5339925834363412, 'f1': 0.49626651349798967, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.6199664429530202, 'recall': 0.6938967136150235, 'f1': 0.6548515728843598, 'number': 1065} | 0.5492 | 0.5876 | 0.5678 | 0.6897 | | 0.8546 | 4.0 | 40 | 0.7900 | {'precision': 0.5885714285714285, 'recall': 0.7639060568603214, 'f1': 0.6648735879505111, 'number': 809} | {'precision': 0.06666666666666667, 'recall': 0.025210084033613446, 'f1': 0.036585365853658534, 'number': 119} | {'precision': 0.6505823627287853, 'recall': 0.7342723004694836, 'f1': 0.6898985443317159, 'number': 1065} | 0.6108 | 0.7040 | 0.6541 | 0.7537 | | 0.6765 | 5.0 | 50 | 0.7144 | {'precision': 0.6514047866805411, 'recall': 0.7737948084054388, 'f1': 0.7073446327683616, 'number': 
809} | {'precision': 0.09230769230769231, 'recall': 0.05042016806722689, 'f1': 0.06521739130434782, 'number': 119} | {'precision': 0.7019810508182601, 'recall': 0.7652582159624414, 'f1': 0.7322551662174304, 'number': 1065} | 0.6616 | 0.7260 | 0.6923 | 0.7773 | | 0.5613 | 6.0 | 60 | 0.6796 | {'precision': 0.6635514018691588, 'recall': 0.7898640296662547, 'f1': 0.7212189616252822, 'number': 809} | {'precision': 0.15306122448979592, 'recall': 0.12605042016806722, 'f1': 0.1382488479262673, 'number': 119} | {'precision': 0.7274320771253286, 'recall': 0.7793427230046949, 'f1': 0.7524932003626473, 'number': 1065} | 0.6739 | 0.7446 | 0.7075 | 0.7927 | | 0.4872 | 7.0 | 70 | 0.6554 | {'precision': 0.6592517694641051, 'recall': 0.8059332509270705, 'f1': 0.7252502780867631, 'number': 809} | {'precision': 0.22549019607843138, 'recall': 0.19327731092436976, 'f1': 0.20814479638009048, 'number': 119} | {'precision': 0.7383177570093458, 'recall': 0.815962441314554, 'f1': 0.775200713648528, 'number': 1065} | 0.6808 | 0.7747 | 0.7247 | 0.7997 | | 0.4334 | 8.0 | 80 | 0.6526 | {'precision': 0.6941176470588235, 'recall': 0.8022249690976514, 'f1': 0.7442660550458714, 'number': 809} | {'precision': 0.24545454545454545, 'recall': 0.226890756302521, 'f1': 0.23580786026200873, 'number': 119} | {'precision': 0.7493627867459643, 'recall': 0.828169014084507, 'f1': 0.7867975022301517, 'number': 1065} | 0.7012 | 0.7817 | 0.7393 | 0.8035 | | 0.3941 | 9.0 | 90 | 0.6694 | {'precision': 0.7048997772828508, 'recall': 0.7824474660074165, 'f1': 0.741652021089631, 'number': 809} | {'precision': 0.22099447513812154, 'recall': 0.33613445378151263, 'f1': 0.26666666666666666, 'number': 119} | {'precision': 0.7218984179850125, 'recall': 0.8140845070422535, 'f1': 0.76522506619594, 'number': 1065} | 0.6754 | 0.7727 | 0.7208 | 0.8007 | | 0.3556 | 10.0 | 100 | 0.6607 | {'precision': 0.694006309148265, 'recall': 0.8158220024721878, 'f1': 0.75, 'number': 809} | {'precision': 0.25, 'recall': 0.2773109243697479, 'f1': 0.26294820717131473, 'number': 119} | {'precision': 0.7846153846153846, 'recall': 0.8140845070422535, 'f1': 0.7990783410138248, 'number': 1065} | 0.7130 | 0.7827 | 0.7462 | 0.8068 | | 0.3245 | 11.0 | 110 | 0.6728 | {'precision': 0.6990595611285266, 'recall': 0.826946847960445, 'f1': 0.7576443941109853, 'number': 809} | {'precision': 0.2892561983471074, 'recall': 0.29411764705882354, 'f1': 0.2916666666666667, 'number': 119} | {'precision': 0.7817703768624014, 'recall': 0.8375586854460094, 'f1': 0.8087035358114233, 'number': 1065} | 0.7192 | 0.8008 | 0.7578 | 0.8089 | | 0.3113 | 12.0 | 120 | 0.6799 | {'precision': 0.71875, 'recall': 0.796044499381953, 'f1': 0.755425219941349, 'number': 809} | {'precision': 0.25903614457831325, 'recall': 0.36134453781512604, 'f1': 0.3017543859649123, 'number': 119} | {'precision': 0.775330396475771, 'recall': 0.8262910798122066, 'f1': 0.8, 'number': 1065} | 0.7132 | 0.7863 | 0.7480 | 0.8106 | | 0.2921 | 13.0 | 130 | 0.6836 | {'precision': 0.7070063694267515, 'recall': 0.823238566131026, 'f1': 0.7607081667618503, 'number': 809} | {'precision': 0.32432432432432434, 'recall': 0.3025210084033613, 'f1': 0.31304347826086953, 'number': 119} | {'precision': 0.7976513098464318, 'recall': 0.8291079812206573, 'f1': 0.8130755064456722, 'number': 1065} | 0.7338 | 0.7953 | 0.7633 | 0.8122 | | 0.2841 | 14.0 | 140 | 0.6848 | {'precision': 0.7150537634408602, 'recall': 0.8220024721878862, 'f1': 0.7648073605520415, 'number': 809} | {'precision': 0.26666666666666666, 'recall': 0.33613445378151263, 'f1': 
0.2973977695167286, 'number': 119} | {'precision': 0.7841726618705036, 'recall': 0.8187793427230047, 'f1': 0.8011024345429489, 'number': 1065} | 0.7194 | 0.7913 | 0.7536 | 0.8127 | | 0.2793 | 15.0 | 150 | 0.6857 | {'precision': 0.7176981541802389, 'recall': 0.8170580964153276, 'f1': 0.7641618497109827, 'number': 809} | {'precision': 0.28368794326241137, 'recall': 0.33613445378151263, 'f1': 0.3076923076923077, 'number': 119} | {'precision': 0.7773820124666073, 'recall': 0.819718309859155, 'f1': 0.7979890310786105, 'number': 1065} | 0.7204 | 0.7898 | 0.7535 | 0.8139 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.8.0+cu101 - Tokenizers 0.13.2
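The card above documents training but not inference. Since LayoutLM needs word bounding boxes as well as text, a minimal sketch (assuming the checkpoint loads with the standard `LayoutLMForTokenClassification` class; the words and boxes below are made-up OCR output on a 0-1000 grid) might look like:

```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("Anorak/layoutlm-funsd")

# Hypothetical OCR output: words plus boxes normalized to a 0-1000 grid.
words = ["Date:", "January", "5,", "2017"]
word_boxes = [[60, 40, 110, 55], [120, 40, 180, 55], [185, 40, 205, 55], [210, 40, 260, 55]]

# LayoutLM expects one box per word-piece, so repeat each word's box for its sub-tokens
# and add the conventional boxes for [CLS] and [SEP].
tokens, boxes = [], []
for word, box in zip(words, word_boxes):
    pieces = tokenizer.tokenize(word)
    tokens.extend(pieces)
    boxes.extend([box] * len(pieces))

input_ids = tokenizer.convert_tokens_to_ids([tokenizer.cls_token] + tokens + [tokenizer.sep_token])
boxes = [[0, 0, 0, 0]] + boxes + [[1000, 1000, 1000, 1000]]

with torch.no_grad():
    outputs = model(
        input_ids=torch.tensor([input_ids]),
        bbox=torch.tensor([boxes]),
        attention_mask=torch.ones(1, len(input_ids), dtype=torch.long),
    )

predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```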
cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Centipede-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **Centipede-v5** This is a trained model of a PPO agent playing Centipede-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Centipede-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Centipede-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Centipede-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Centipede-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
bkhan2000/ppo-LunarLander-v2
bkhan2000
null
12
2
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
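The usage section above is still a TODO; a minimal loading sketch (the checkpoint filename below is an assumption, so check the repo's file list for the actual `.zip` name) could be:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; verify it against the files in the repo.
checkpoint = load_from_hub(repo_id="bkhan2000/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# model.predict(observation, deterministic=True) can then be used inside a LunarLander-v2 env loop.
```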
cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['ChopperCommand-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,335
# (CleanRL) **PPO** Agent Playing **ChopperCommand-v5** This is a trained model of a PPO agent playing ChopperCommand-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id ChopperCommand-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'ChopperCommand-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
javiervela/poca-SoccerTwos
javiervela
null
24
126
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
844
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: javiervela/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
cleanrl/CrazyClimber-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CrazyClimber-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,319
# (CleanRL) **PPO** Agent Playing **CrazyClimber-v5** This is a trained model of a PPO agent playing CrazyClimber-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id CrazyClimber-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id CrazyClimber-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'CrazyClimber-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
lsaulier/poca-SoccerTwo
lsaulier
null
20
124
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
841
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: lsaulier/poca-SoccerTwo 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
cleanrl/CrazyClimber-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CrazyClimber-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,319
# (CleanRL) **PPO** Agent Playing **CrazyClimber-v5** This is a trained model of a PPO agent playing CrazyClimber-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id CrazyClimber-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/CrazyClimber-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id CrazyClimber-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'CrazyClimber-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Defender-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Defender-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,287
# (CleanRL) **PPO** Agent Playing **Defender-v5** This is a trained model of a PPO agent playing Defender-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Defender-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Defender-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Defender-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Defender-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Defender-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,287
# (CleanRL) **PPO** Agent Playing **Defender-v5** This is a trained model of a PPO agent playing Defender-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Defender-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Defender-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Defender-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Defender-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
AllanOuii/resnet50_mask_classification
AllanOuii
resnet
5
2
transformers
0
image-classification
true
false
false
null
null
['AllanOuii/autotrain-data-resnet50_mask_classification']
{'emissions': 1.5544780289204296}
0
0
0
0
0
0
0
['autotrain', 'vision', 'image-classification']
false
true
true
245
# Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 3387392923 - CO2 Emissions (in grams): 1.5545 ## Validation Metrics - Loss: 0.138 - Accuracy: 0.977 - Precision: 0.958 - Recall: 1.000 - AUC: 0.996 - F1: 0.979
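The card above only lists validation metrics; a minimal inference sketch (assuming the AutoTrain repo includes the usual image processor files; the image path is a placeholder) could be:

```python
from transformers import pipeline

# Load the AutoTrain-exported ResNet-50 binary classifier from the Hub.
classifier = pipeline("image-classification", model="AllanOuii/resnet50_mask_classification")

# Placeholder path; any local image or URL accepted by the pipeline works.
print(classifier("example_photo.jpg"))
```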
massimowww/poca-SoccerTwos
massimowww
null
33
0
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
844
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: massimowww/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Enduro-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,271
# (CleanRL) **PPO** Agent Playing **Enduro-v5** This is a trained model of a PPO agent playing Enduro-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Enduro-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Enduro-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Enduro-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Enduro-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['DemonAttack-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,311
# (CleanRL) **PPO** Agent Playing **DemonAttack-v5** This is a trained model of a PPO agent playing DemonAttack-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id DemonAttack-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 1 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'DemonAttack-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 1, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['DemonAttack-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,311
# (CleanRL) **PPO** Agent Playing **DemonAttack-v5** This is a trained model of a PPO agent playing DemonAttack-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id DemonAttack-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'DemonAttack-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/DoubleDunk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['DoubleDunk-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,303
# (CleanRL) **PPO** Agent Playing **DoubleDunk-v5** This is a trained model of a PPO agent playing DoubleDunk-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id DoubleDunk-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/DoubleDunk-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id DoubleDunk-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'DoubleDunk-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['DemonAttack-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,311
# (CleanRL) **PPO** Agent Playing **DemonAttack-v5** This is a trained model of a PPO agent playing DemonAttack-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id DemonAttack-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/DemonAttack-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id DemonAttack-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'DemonAttack-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
hirosay/xlm-roberta-base-finetuned-panx-de
hirosay
xlm-roberta
12
1
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1335 - F1: 0.8652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2566 | 1.0 | 525 | 0.1632 | 0.8292 | | 0.1276 | 2.0 | 1050 | 0.1340 | 0.8475 | | 0.0816 | 3.0 | 1575 | 0.1335 | 0.8652 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu116 - Datasets 2.8.0 - Tokenizers 0.12.1
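As with most auto-generated cards, the usage section above is empty; a minimal NER sketch for this PAN-X German model (the sentence is just a placeholder) could be:

```python
from transformers import pipeline

# Token-classification pipeline with entity grouping enabled.
ner = pipeline(
    "token-classification",
    model="hirosay/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```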
Kevin-Z/q-FrozenLake-v1-4x4-noSlippery
Kevin-Z
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
396
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Kevin-Z/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
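The snippet in the card above calls a `load_from_hub` helper (and `gym`) that it never defines; a minimal version of that helper, written as an assumption that mirrors the course's pickle convention, could be:

```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-table dict from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Kevin-Z/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # the card notes extra attributes may be needed
```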
fathyshalab/massive_social-roberta-large-v1-4-7
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,460
# fathyshalab/massive_social-roberta-large-v1-4-7 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_social-roberta-large-v1-4-7") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
alea31415/onimai-characters
alea31415
null
36
48
diffusers
6
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion', 'anime', 'aiart']
false
true
true
4,703
**This model is trained on 6(+1?) characters from ONIMAI: I'm Now Your Sister! (お兄ちゃんはおしまい!)** ### Example Generations ![00009-20230210181727-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00009-20230210181727-min.png) ![00041-20230210195115-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00041-20230210195115-min.png) ### Usage The model is shared in both diffusers and safetensors formats. As for the trigger words, the six characters can be prompted with `OyamaMahiro`, `OyamaMihari`, `HozukiKaede`, `HozukiMomiji`, `OkaAsahi`, and `MurosakiMiyo`. `TenkawaNayuta` is tagged but she appears in fewer than 10 images so don't expect any good result. There are also three different styles trained into the model: `aniscreen`, `edstyle`, and `megazine` (yes, typo). As usual you can get multiple-character images but starting from 4 it is difficult. By the way, the model is trained at clip skip 1. The following images show the generations of different checkpoints. The default one is that of step 22828, but all the checkpoints starting from step 9969 can be found in the `checkpoints` directory. They are all sufficiently good at the six characters but later ones are better at `megazine` and `edstyle` (at the risk of overfitting, I don't really know). ![xyz_grid-0000-20230210154700.jpg](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/grids/xyz_grid-0000-20230210154700.jpg) ![xyz_grid-0001-20230210155723.jpg](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/grids/xyz_grid-0001-20230210155723.jpg) ![xyz_grid-0006-20230210163625.jpg](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/grids/xyz_grid-0006-20230210163625.jpg) ### More Generations ![00011-20230210182642-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00011-20230210182642-min.png) ![00003-20230210175009-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00003-20230210175009-min.png) ![00005-20230210175301-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00005-20230210175301-min.png) ![00016-20230210183918-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00016-20230210183918-min.png) ![00019-20230210184731-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00019-20230210184731-min.png) ![00038-20230210194326-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00038-20230210194326-min.png) ![00039-20230210194529-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00039-20230210194529-min.png) ![00043-20230210195945-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00043-20230210195945-min.png) ![00047-20230210202801-min.png](https://huggingface.co/alea31415/onimai-characters/resolve/main/example_generations/00047-20230210202801-min.png) ### Dataset Description The dataset is prepared via the workflow detailed here: https://github.com/cyber-meow/anime_screenshot_pipeline It contains 21412 images with the following composition - 2133 onimai images separated in four types - 1496 anime screenshots from the first six episodes (for style `aniscreen`) - 70 screenshots of the ending of the anime (for style `edstyle`, not counted in the 1496 above) - 528 fan arts (or probably some official arts) - 39 scans of the covers of the mangas (for style `megazine`, don't ask me why I chose this name, it is bad but it turns out to work) - 19279 regularization images which are intended to be as varied as possible while being in anime style (i.e. no photorealistic image is used) Note that the model is trained with a specific weighting scheme to balance between different concepts, so that not every image is weighted equally. After applying the per-image repeat we get around 145K images per epoch. ### Training Training is done with [EveryDream2](https://github.com/victorchall/EveryDream2trainer) trainer with [ACertainty](https://huggingface.co/JosephusCheung/ACertainty) as base model. The following configuration is used - resolution 512 - cosine learning rate scheduler, lr 2.5e-6 - batch size 8 - conditional dropout 0.08 - change beta scheduler from `scaler_linear` to `linear` in `config.json` of the scheduler of the model (see the sketch after this card) - clip skip 1 I trained for two epochs whereas the default release model was trained for 22828 steps as mentioned above.
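The beta-schedule change mentioned in the training list above is a one-line edit to the scheduler config of the base model. A sketch of that edit (the local path is hypothetical, and depending on the diffusers layout the file may be named `scheduler_config.json` rather than `config.json`) could be:

```python
import json

# Hypothetical local path to the base model's scheduler config.
cfg_path = "ACertainty/scheduler/scheduler_config.json"

with open(cfg_path) as f:
    cfg = json.load(f)

cfg["beta_schedule"] = "linear"  # the card switches this away from the scaled-linear schedule

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```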
pittawat/Reinforce-pixel-copter
pittawat
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
300
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
fathyshalab/massive_transport-roberta-large-v1-4-3
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,466
# fathyshalab/massive_transport-roberta-large-v1-4-3 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_transport-roberta-large-v1-4-3") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Narsil/nllb
Narsil
m2m_100
9
31
transformers
0
translation
true
false
false
cc-by-nc-4.0
['ace', 'acm', 'acq', 'aeb', 'af', 'ajp', 'ak', 'als', 'am', 'apc', 'ar', 'ars', 'ary', 'arz', 'as', 'ast', 'awa', 'ayr', 'azb', 'azj', 'ba', 'bm', 'ban', 'be', 'bem', 'bn', 'bho', 'bjn', 'bo', 'bs', 'bug', 'bg', 'ca', 'ceb', 'cs', 'cjk', 'ckb', 'crh', 'cy', 'da', 'de', 'dik', 'dyu', 'dz', 'el', 'en', 'eo', 'et', 'eu', 'ee', 'fo', 'fj', 'fi', 'fon', 'fr', 'fur', 'fuv', 'gaz', 'gd', 'ga', 'gl', 'gn', 'gu', 'ht', 'ha', 'he', 'hi', 'hne', 'hr', 'hu', 'hy', 'ig', 'ilo', 'id', 'is', 'it', 'jv', 'ja', 'kab', 'kac', 'kam', 'kn', 'ks', 'ka', 'kk', 'kbp', 'kea', 'khk', 'km', 'ki', 'rw', 'ky', 'kmb', 'kmr', 'knc', 'kg', 'ko', 'lo', 'lij', 'li', 'ln', 'lt', 'lmo', 'ltg', 'lb', 'lua', 'lg', 'luo', 'lus', 'lvs', 'mag', 'mai', 'ml', 'mar', 'min', 'mk', 'mt', 'mni', 'mos', 'mi', 'my', 'nl', 'nn', 'nb', 'npi', 'nso', 'nus', 'ny', 'oc', 'ory', 'pag', 'pa', 'pap', 'pbt', 'pes', 'plt', 'pl', 'pt', 'prs', 'quy', 'ro', 'rn', 'ru', 'sg', 'sa', 'sat', 'scn', 'shn', 'si', 'sk', 'sl', 'sm', 'sn', 'sd', 'so', 'st', 'es', 'sc', 'sr', 'ss', 'su', 'sv', 'swh', 'szl', 'ta', 'taq', 'tt', 'te', 'tg', 'tl', 'th', 'ti', 'tpi', 'tn', 'ts', 'tk', 'tum', 'tr', 'tw', 'tzm', 'ug', 'uk', 'umb', 'ur', 'uzn', 'vec', 'vi', 'war', 'wo', 'xh', 'ydd', 'yo', 'yue', 'zh', 'zsm', 'zu']
['flores-200']
null
0
0
0
0
0
0
0
['nllb', 'translation']
false
true
true
4,401
# NLLB-200 This is the model card of NLLB-200's distilled 600M variant. Here are the [metrics](https://tinyurl.com/nllb200densedst600mmetrics) for that particular checkpoint. - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: the exact training algorithm, data, and the strategies used to handle data imbalances for high- and low-resource languages in training NLLB-200 are described in the paper. - Paper or other resource for more information: NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, Arxiv, 2022 - License: CC-BY-NC - Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues ## Intended Use - Primary intended uses: NLLB-200 is a machine translation model primarily intended for research in machine translation, especially for low-resource languages. It allows for single-sentence translation among 200 languages. Information on how to use the model can be found in the Fairseq code repository along with the training code and references to evaluation and training data. - Primary intended users: researchers and the machine translation research community. - Out-of-scope use cases: NLLB-200 is a research model and is not released for production deployment. NLLB-200 is trained on general-domain text data and is not intended to be used with domain-specific texts, such as medical or legal texts. The model is not intended to be used for document translation. The model was trained with input lengths not exceeding 512 tokens, therefore translating longer sequences might result in quality degradation. NLLB-200 translations cannot be used as certified translations. ## Metrics - Model performance measures: the NLLB-200 model was evaluated using the BLEU, spBLEU, and chrF++ metrics widely adopted by the machine translation community. Additionally, we performed human evaluation with the XSTS protocol and measured the toxicity of the generated translations. ## Evaluation Data - Datasets: the Flores-200 dataset is described in Section 4. - Motivation: we used Flores-200 as it provides full evaluation coverage of the languages in NLLB-200. - Preprocessing: sentence-split raw text data was preprocessed using SentencePiece. The SentencePiece model is released along with NLLB-200. ## Training Data - We used parallel multilingual data from a variety of sources to train the model. We provide a detailed report on the data selection and construction process in Section 5 of the paper. We also used monolingual data constructed from Common Crawl. We provide more details in Section 5.2. ## Ethical Considerations - In this work, we took a reflexive approach in technological development to ensure that we prioritize human users and minimize risks that could be transferred to them. While we reflect on our ethical considerations throughout the article, here are some additional points to highlight. For one, many languages chosen for this study are low-resource languages, with a heavy emphasis on African languages. While quality translation could improve education and information access in many of these communities, such access could also make groups with lower levels of digital literacy more vulnerable to misinformation or online scams. The latter scenarios could arise if bad actors misappropriate our work for nefarious activities, which we conceive as an example of unintended use. 
Regarding data acquisition, the training data used for model development were mined from various publicly available sources on the web. Although we invested heavily in data cleaning, personally identifiable information may not be entirely eliminated. Finally, although we did our best to optimize for translation quality, mistranslations produced by the model could remain. Although the odds are low, this could have an adverse impact on those who rely on these translations to make important decisions (particularly when related to health and safety). ## Caveats and Recommendations - Our model has been tested on the Wikimedia domain with limited investigation of other domains supported in NLLB-MD. In addition, the supported languages may have variations that our model is not capturing. Users should make appropriate assessments. ## Carbon Footprint Details - The carbon dioxide (CO2e) estimate is reported in Section 8.8.
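As a usage illustration, here is a minimal single-sentence translation sketch using the 🤗 Transformers seq2seq API, assuming this checkpoint exposes the standard NLLB (M2M100-style) interface; the FLORES-200 language codes (`eng_Latn`, `fra_Latn`) and the example sentence are illustrative choices, not part of the card.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumption: the repository contains a standard NLLB checkpoint and tokenizer.
tokenizer = AutoTokenizer.from_pretrained("Narsil/nllb", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("Narsil/nllb")

# Keep inputs to single sentences well under the 512-token training limit.
inputs = tokenizer(
    "NLLB-200 allows for single sentence translation among 200 languages.",
    return_tensors="pt",
)

# Force the first generated token to be the FLORES-200 code of the target
# language (French here) so the decoder produces that language.
translated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
    max_length=128,
)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
```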
fathyshalab/massive_calendar-roberta-large-v1-4-93
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,466
# fathyshalab/massive_calendar-roberta-large-v1-4-93 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_calendar-roberta-large-v1-4-93") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
sinny/poca-SoccerTwos
sinny
null
64
127
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
839
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: sinny/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
tjsplace2001/AI-Tim-Talks-Finance
tjsplace2001
null
2
0
speechbrain
0
text2text-generation
false
false
false
creativeml-openrail-m
['en']
['fka/awesome-chatgpt-prompts']
null
0
0
0
0
0
0
0
['finance', 'legal']
false
true
true
4,906
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
fathyshalab/massive_play-roberta-large-v1-4-71
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,458
# fathyshalab/massive_play-roberta-large-v1-4-71 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_play-roberta-large-v1-4-71") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
oscarb92/ppo-SnowballTarget
oscarb92
null
20
1
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
false
true
true
855
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Write your model_id: oscarb92/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
andge/Reinforce-CartPole-v1
andge
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
286
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
pittawat/Reinforce-pixel-copter-2
pittawat
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
300
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
fathyshalab/massive_datetime-roberta-large-v1-4-94
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,466
# fathyshalab/massive_datetime-roberta-large-v1-4-94 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_datetime-roberta-large-v1-4-94") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
yizhangliu/poca-SoccerTwos-v7
yizhangliu
null
22
121
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
847
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: yizhangliu/poca-SoccerTwos-v7 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
juro95/xlm-roberta-finetuned-ner-cased_1_ratio
juro95
xlm-roberta
8
2
transformers
0
token-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,489
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juro95/xlm-roberta-finetuned-ner-cased_1_ratio This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0633 - Validation Loss: 0.0940 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 14272, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3166 | 0.1533 | 0 | | 0.1376 | 0.1114 | 1 | | 0.0909 | 0.0988 | 2 | | 0.0633 | 0.0940 | 3 | ### Framework versions - Transformers 4.25.1 - TensorFlow 2.6.5 - Datasets 2.3.2 - Tokenizers 0.13.2
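The card only reports the training curve, so here is a minimal inference sketch using the token-classification pipeline, assuming the repository ships the TensorFlow weights that the Keras training suggests; the entity label set is not documented, so the labels returned are whatever the checkpoint's config defines, and the example sentence is illustrative.

```python
from transformers import pipeline

# Assumption: TF weights are available; drop framework="tf" to let transformers
# pick whichever weights the repository actually provides.
ner = pipeline(
    "token-classification",
    model="juro95/xlm-roberta-finetuned-ner-cased_1_ratio",
    framework="tf",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Angela Merkel met representatives of Siemens in Munich."))
```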
cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['FishingDerby-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,319
# (CleanRL) **PPO** Agent Playing **FishingDerby-v5** This is a trained model of a PPO agent playing FishingDerby-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id FishingDerby-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'FishingDerby-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
fathyshalab/massive_recommendation-roberta-large-v1-4-18
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,478
# fathyshalab/massive_recommendation-roberta-large-v1-4-18 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_recommendation-roberta-large-v1-4-18") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['FishingDerby-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,319
# (CleanRL) **PPO** Agent Playing **FishingDerby-v5** This is a trained model of a PPO agent playing FishingDerby-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id FishingDerby-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/FishingDerby-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id FishingDerby-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'FishingDerby-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
vvn0/poca-SoccerTwos
vvn0
null
15
119
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
838
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: vvn0/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Freeway-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,279
# (CleanRL) **PPO** Agent Playing **Freeway-v5** This is a trained model of a PPO agent playing Freeway-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Freeway-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Freeway-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Freeway-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,279
# (CleanRL) **PPO** Agent Playing **Freeway-v5** This is a trained model of a PPO agent playing Freeway-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Freeway-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Freeway-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
fathyshalab/massive_email-roberta-large-v1-4-38
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,460
# fathyshalab/massive_email-roberta-large-v1-4-38 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_email-roberta-large-v1-4-38") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
96harsh56/bert-large-cased-berta-finetuned-subjqa
96harsh56
bert
12
4
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
913
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-berta-finetuned-subjqa This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-06 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
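Since the card lists only training hyperparameters, here is a minimal extractive question-answering sketch with the standard pipeline, assuming the fine-tuned checkpoint keeps the usual BERT QA head; the question and context are illustrative, SubjQA-style review text rather than data from the card.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="96harsh56/bert-large-cased-berta-finetuned-subjqa",
)

result = qa(
    question="How is the battery life?",
    context=(
        "The battery life of this laptop is excellent; it easily lasts a "
        "full workday of browsing and video calls."
    ),
)
# The pipeline returns the extracted answer span plus a confidence score.
print(result["answer"], result["score"])
```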
cleanrl/Frostbite-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Frostbite-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **Frostbite-v5** This is a trained model of a PPO agent playing Frostbite-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Frostbite-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Frostbite-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Frostbite-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Frostbite-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **Frostbite-v5** This is a trained model of a PPO agent playing Frostbite-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Frostbite-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Frostbite-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Frostbite-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
huynhdoo/camembert-base-finetuned-jva-missions-report
huynhdoo
camembert
20
15
transformers
0
text-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,656
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # huynhdoo/camembert-base-finetuned-jva-missions-report This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0542 - Train Accuracy: 0.9844 - Validation Loss: 0.5073 - Validation Accuracy: 0.8436 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1005, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.4753 | 0.7890 | 0.3616 | 0.8547 | 0 | | 0.3120 | 0.8799 | 0.3702 | 0.8492 | 1 | | 0.1824 | 0.9340 | 0.3928 | 0.8547 | 2 | | 0.0972 | 0.9714 | 0.4849 | 0.8436 | 3 | | 0.0542 | 0.9844 | 0.5073 | 0.8436 | 4 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
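To complement the training log, here is a minimal inference sketch with the text-classification pipeline, assuming the repository provides the TensorFlow weights produced by the Keras training; the label set is not documented in the card, so the returned labels are whatever the checkpoint's config defines, and the French example sentence is illustrative.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="huynhdoo/camembert-base-finetuned-jva-missions-report",
    framework="tf",  # assumption: TF weights are present, as the Keras card suggests
)

# The model is CamemBERT-based, so it expects French input.
print(classifier("La mission s'est très bien déroulée et les bénévoles étaient motivés."))
```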
cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Gopher-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,271
# (CleanRL) **PPO** Agent Playing **Gopher-v5** This is a trained model of a PPO agent playing Gopher-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Gopher-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Gopher-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Gopher-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Gopher-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,271
# (CleanRL) **PPO** Agent Playing **Gopher-v5** This is a trained model of a PPO agent playing Gopher-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Gopher-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Gopher-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Gopher-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Gravitar-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,287
# (CleanRL) **PPO** Agent Playing **Gravitar-v5** This is a trained model of a PPO agent playing Gravitar-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Gravitar-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Gravitar-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Gravitar-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Gravitar-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,287
# (CleanRL) **PPO** Agent Playing **Gravitar-v5** This is a trained model of a PPO agent playing Gravitar-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Gravitar-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Gravitar-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Gravitar-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Gravitar-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Hero-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,255
# (CleanRL) **PPO** Agent Playing **Hero-v5** This is a trained model of a PPO agent playing Hero-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Hero-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Hero-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Hero-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
oscarb92/ppo-Pyramids
oscarb92
null
16
1
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
false
true
true
831
# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial that teaches you how to train your first agent using ML-Agents and publish it to the Hub.

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: oscarb92/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Hero-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,255
# (CleanRL) **PPO** Agent Playing **Hero-v5** This is a trained model of a PPO agent playing Hero-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Hero-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Hero-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Hero-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Hero-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Ransaka/pixelCopter
Ransaka
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
300
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
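For context, here is a minimal sketch of the policy-gradient (REINFORCE) update this kind of agent is trained with. The `Policy` class, its layer sizes, and the loss helper are illustrative assumptions, not the exact code behind this checkpoint:

```python
# Hedged sketch of a REINFORCE policy and update loss (illustrative sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, state_size, action_size, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden)
        self.fc2 = nn.Linear(hidden, action_size)

    def forward(self, x):
        # Action probabilities for a single state or a batch of states.
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Compute the discounted return from each timestep, then the
    # policy-gradient loss: -sum_t log pi(a_t|s_t) * G_t.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    return -(torch.stack(log_probs) * returns).sum()
```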
aichina/ttz-471
aichina
null
28
0
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
1,129
### ttz_471 Dreambooth model trained by aichina with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model

You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: ttz (use that in your prompt)

![ttz 0](https://huggingface.co/aichina/ttz-471/resolve/main/concept_images/ttz_%281%29.jpg)
![ttz 1](https://huggingface.co/aichina/ttz-471/resolve/main/concept_images/ttz_%282%29.jpg)
![ttz 2](https://huggingface.co/aichina/ttz-471/resolve/main/concept_images/ttz_%283%29.jpg)
![ttz 3](https://huggingface.co/aichina/ttz-471/resolve/main/concept_images/ttz_%284%29.jpg)
![ttz 4](https://huggingface.co/aichina/ttz-471/resolve/main/concept_images/ttz_%285%29.jpg)
![ttz 5](https://huggingface.co/aichina/ttz-471/resolve/main/concept_images/ttz_%286%29.jpg)
![ttz 6](https://huggingface.co/aichina/ttz-471/resolve/main/concept_images/ttz_%287%29.jpg)
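A minimal inference sketch, assuming the repo contains a full Stable Diffusion pipeline loadable with the standard `diffusers` API; the prompt wording and generation settings are illustrative:

```python
# Hedged sketch: load the Dreambooth-tuned checkpoint and generate an image
# using the learned concept token "ttz". The model id comes from this card;
# the prompt itself is an illustrative assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "aichina/ttz-471", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of ttz in a forest, highly detailed").images[0]
image.save("ttz_sample.png")
```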
huggingtweets/economiaitalia-eurospinitalia-mef_gov
huggingtweets
gpt2
11
0
transformers
0
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['huggingtweets']
false
true
true
3,742
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1409798485524758528/eKwtPOSO_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/986636582873595904/QL74Q3nz_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/456030933955518464/MC-CiIo1_400x400.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Eurospin Italia & Economia Italia 🇮🇹 🇪🇺 & MEF</div> <div style="text-align: center; font-size: 14px;">@economiaitalia-eurospinitalia-mef_gov</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Eurospin Italia & Economia Italia 🇮🇹 🇪🇺 & MEF. | Data | Eurospin Italia | Economia Italia 🇮🇹 🇪🇺 | MEF | | --- | --- | --- | --- | | Tweets downloaded | 1575 | 3233 | 3197 | | Retweets | 137 | 294 | 1551 | | Short tweets | 64 | 219 | 13 | | Tweets kept | 1374 | 2720 | 1633 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yjwexh2t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @economiaitalia-eurospinitalia-mef_gov's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/11swqg7g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/11swqg7g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/economiaitalia-eurospinitalia-mef_gov') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
96harsh56/bert-large-cased-berta-finetuned-subjqa_1
96harsh56
bert
12
2
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
939
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-large-cased-berta-finetuned-subjqa_1

This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
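A minimal usage sketch, assuming the checkpoint loads with the standard `transformers` question-answering pipeline; the question and context below are illustrative, not from the card:

```python
# Hedged sketch: extractive QA with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="96harsh56/bert-large-cased-berta-finetuned-subjqa_1")

result = qa(
    question="How is the battery life?",
    context="The battery life of this laptop is excellent, easily lasting a full workday.",
)
print(result["answer"], result["score"])
```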
Amiko/dqn-SpaceInvaders
Amiko
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
2,208
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Amiko -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Amiko -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Amiko ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
nsaghatelyan/purple-skin-care
nsaghatelyan
null
19
0
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
430
### purple_skin_care Dreambooth model trained by nsaghatelyan with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
Goutham-Vignesh/flan-t5-gov-report-sum
Goutham-Vignesh
t5
20
13
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['govreport-summarization']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,808
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# flan-t5-gov-report-sum

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the govreport-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2385
- Rouge1: 5.8729
- Rouge2: 3.0763
- Rougel: 5.1016
- Rougelsum: 5.646
- Gen Len: 19.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5801 | 1.0 | 2190 | 2.3211 | 5.6226 | 2.9142 | 4.9535 | 5.417 | 19.0 |
| 2.5125 | 2.0 | 4380 | 2.2748 | 5.7982 | 3.0365 | 5.0726 | 5.5837 | 19.0 |
| 2.453 | 3.0 | 6570 | 2.2545 | 5.8744 | 3.0997 | 5.1196 | 5.6524 | 19.0 |
| 2.436 | 4.0 | 8760 | 2.2430 | 5.8669 | 3.0525 | 5.0849 | 5.631 | 19.0 |
| 2.4144 | 5.0 | 10950 | 2.2385 | 5.8729 | 3.0763 | 5.1016 | 5.646 | 19.0 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.11.0+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
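A minimal usage sketch, assuming the checkpoint works with the standard `transformers` summarization pipeline; the input passage and generation lengths are illustrative:

```python
# Hedged sketch: summarize a passage with the fine-tuned FLAN-T5 checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="Goutham-Vignesh/flan-t5-gov-report-sum")

report = (
    "The committee reviewed the agency's spending over the last fiscal year "
    "and found that administrative costs grew faster than program outlays."
)
print(summarizer(report, max_length=64, min_length=16)[0]["summary_text"])
```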
KarolK/xlm-roberta-base-finetuned-panx-de
KarolK
xlm-roberta
12
0
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,314
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1351
- F1: 0.8556

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1702 | 0.8117 |
| 0.213 | 2.0 | 526 | 0.1401 | 0.8349 |
| 0.213 | 3.0 | 789 | 0.1351 | 0.8556 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
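A minimal usage sketch, assuming the checkpoint loads with the standard `transformers` token-classification pipeline; the German example sentence is illustrative:

```python
# Hedged sketch: NER on German text with the PAN-X-fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="KarolK/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Goethe-Institut in Berlin."))
```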
juro95/xlm-roberta-finetuned-ner-special_characters_cased_0.4
juro95
xlm-roberta
8
4
transformers
0
token-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,504
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# juro95/xlm-roberta-finetuned-ner-special_characters_cased_0.4

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0226
- Validation Loss: 0.0434
- Epoch: 3

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 75928, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1412 | 0.0720 | 0 |
| 0.0604 | 0.0577 | 1 |
| 0.0363 | 0.0442 | 2 |
| 0.0226 | 0.0434 | 3 |

### Framework versions

- Transformers 4.25.1
- TensorFlow 2.6.5
- Datasets 2.3.2
- Tokenizers 0.13.2
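A minimal usage sketch, assuming the checkpoint exposes standard TensorFlow weights loadable through `transformers`; the example sentence is illustrative and the label names come from whatever `id2label` mapping is stored in the config:

```python
# Hedged sketch: load the Keras-trained checkpoint and tag tokens.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_id = "juro95/xlm-roberta-finetuned-ner-special_characters_cased_0.4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Send the invoice to Acme GmbH in Munich.", return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.argmax(logits, axis=-1)[0].numpy()
print([model.config.id2label[int(i)] for i in pred_ids])
```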
cleanrl/IceHockey-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['IceHockey-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **IceHockey-v5** This is a trained model of a PPO agent playing IceHockey-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id IceHockey-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id IceHockey-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'IceHockey-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/IceHockey-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['IceHockey-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **IceHockey-v5** This is a trained model of a PPO agent playing IceHockey-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id IceHockey-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/IceHockey-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id IceHockey-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'IceHockey-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
nlpie/clinical-mobilebert
nlpie
mobilebert
8
2
transformers
0
fill-mask
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,086
# Model Description

ClinicalMobileBERT is the result of training the [BioMobileBERT](https://huggingface.co/nlpie/bio-mobilebert) model in a continual learning scenario for 3 epochs using a total batch size of 192 on the MIMIC-III notes dataset.

# Initialisation

We initialise our model with the pre-trained checkpoints of the [BioMobileBERT](https://huggingface.co/nlpie/bio-mobilebert) model available on Huggingface.

# Architecture

MobileBERT uses a 128-dimensional embedding layer followed by 1D convolutions to up-project its output to the desired hidden dimension expected by the transformer blocks. For each of these blocks, MobileBERT uses linear down-projection at the beginning of the transformer block and up-projection at its end, followed by a residual connection originating from the input of the block before down-projection. Because of these linear projections, MobileBERT can reduce the hidden size, and hence the computational cost, of the multi-head attention and feed-forward blocks. This model additionally incorporates up to four feed-forward blocks in order to enhance its representation learning capabilities. Thanks to the strategically placed linear projections, a 24-layer MobileBERT (which is used in this work) has around 25M parameters.

# Citation

If you use this model, please consider citing the following paper:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.04725,
  doi = {10.48550/ARXIV.2302.04725},
  url = {https://arxiv.org/abs/2302.04725},
  author = {Rohanian, Omid and Nouriborji, Mohammadmahdi and Jauncey, Hannah and Kouchaki, Samaneh and Group, ISARIC Clinical Characterisation and Clifton, Lei and Merson, Laura and Clifton, David A.},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7, 68T50},
  title = {Lightweight Transformers for Clinical Natural Language Processing},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
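A minimal usage sketch, assuming the checkpoint loads with the standard `transformers` fill-mask pipeline; the clinical example sentence is illustrative:

```python
# Hedged sketch: masked-token prediction with ClinicalMobileBERT.
from transformers import pipeline

fill = pipeline("fill-mask", model="nlpie/clinical-mobilebert")
masked = f"The patient was admitted with acute {fill.tokenizer.mask_token} failure."
for pred in fill(masked):
    print(pred["token_str"], round(pred["score"], 3))
```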
nlpie/tiny-clinicalbert
nlpie
bert
9
2
transformers
0
fill-mask
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,539
# Model Description

TinyClinicalBERT is a distilled version of the [BioClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) which is distilled for 3 epochs using a total batch size of 192 on the MIMIC-III notes dataset.

# Distillation Procedure

This model uses a unique distillation method called ‘transformer-layer distillation’ which is applied on each layer of the student to align the attention maps and the hidden states of the student with those of the teacher.

# Architecture and Initialisation

This model uses 4 hidden layers with a hidden dimension size and an embedding size of 312, resulting in a total of 15M parameters. Due to the model's small hidden dimension size, it uses random initialisation.

# Citation

If you use this model, please consider citing the following paper:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.04725,
  doi = {10.48550/ARXIV.2302.04725},
  url = {https://arxiv.org/abs/2302.04725},
  author = {Rohanian, Omid and Nouriborji, Mohammadmahdi and Jauncey, Hannah and Kouchaki, Samaneh and Group, ISARIC Clinical Characterisation and Clifton, Lei and Merson, Laura and Clifton, David A.},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7, 68T50},
  title = {Lightweight Transformers for Clinical Natural Language Processing},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
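A minimal usage sketch, assuming the checkpoint loads with the standard `transformers` masked-LM classes; the example sentence is illustrative:

```python
# Hedged sketch: score candidate fillers for a masked clinical token.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "nlpie/tiny-clinicalbert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = f"The patient was started on {tokenizer.mask_token} for hypertension."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```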
cleanrl/Jamesbond-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Jamesbond-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **Jamesbond-v5** This is a trained model of a PPO agent playing Jamesbond-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Jamesbond-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Jamesbond-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
nlpie/clinical-miniALBERT-312
nlpie
bert
8
2
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,552
# Model

miniALBERT is a recursive transformer model which uses cross-layer parameter sharing, embedding factorisation, and bottleneck adapters to achieve high parameter efficiency. Since miniALBERT is a compact model, it is trained using a layer-to-layer distillation technique, using the BioClinicalBERT model as the teacher. This model is trained for 3 epochs on the MIMIC-III notes dataset. In terms of architecture, this model uses an embedding dimension of 312, a hidden size of 768, an MLP expansion rate of 4, and a reduction factor of 16 for bottleneck adapters. In general, this model uses 6 recursions and has a unique parameter count of 18 million parameters.

# Usage

Since miniALBERT uses a unique architecture, it cannot be loaded with `transformers.AutoModel` for now. To load the model, first clone the miniALBERT GitHub project using the below code:

```bash
git clone https://github.com/nlpie-research/MiniALBERT.git
```

Then use `sys.path.append` to add the miniALBERT files to your project and import the miniALBERT modeling file using the below code:

```Python
import sys
sys.path.append("PATH_TO_CLONED_PROJECT/MiniALBERT/")

from minialbert_modeling import MiniAlbertForSequenceClassification, MiniAlbertForTokenClassification
```

Finally, load the model like a regular model in the transformers library using the below code:

```Python
# For NER use the below code
model = MiniAlbertForTokenClassification.from_pretrained("nlpie/clinical-miniALBERT-312")

# For Sequence Classification use the below code
model = MiniAlbertForSequenceClassification.from_pretrained("nlpie/clinical-miniALBERT-312")
```

In addition, for efficient fine-tuning using the pre-trained bottleneck adapters, use the below code:

```Python
model.trainAdaptersOnly()
```

# Citation

If you use the model, please cite our paper:

```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.04725,
  doi = {10.48550/ARXIV.2302.04725},
  url = {https://arxiv.org/abs/2302.04725},
  author = {Rohanian, Omid and Nouriborji, Mohammadmahdi and Jauncey, Hannah and Kouchaki, Samaneh and Group, ISARIC Clinical Characterisation and Clifton, Lei and Merson, Laura and Clifton, David A.},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7, 68T50},
  title = {Lightweight Transformers for Clinical Natural Language Processing},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
asuzuki/Pixelcopter-PLE-v0
asuzuki
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
300
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
frangiral/dqn-SpaceInvadersNoFrameskip-v4-best
frangiral
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
2,221
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga frangiral -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga frangiral -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga frangiral ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.025), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0003), ('learning_starts', 100000), ('n_timesteps', 500000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
cleanrl/Jamesbond-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Jamesbond-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,295
# (CleanRL) **PPO** Agent Playing **Jamesbond-v5** This is a trained model of a PPO agent playing Jamesbond-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Jamesbond-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Jamesbond-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Jamesbond-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Jamesbond-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
cleanrl/Kangaroo-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2
cleanrl
null
9
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Kangaroo-v5', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,287
# (CleanRL) **PPO** Agent Playing **Kangaroo-v5** This is a trained model of a PPO agent playing Kangaroo-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Kangaroo-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Kangaroo-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Kangaroo-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```