Column schema:
- modelId: string (lengths 4 to 81)
- tags: list
- pipeline_tag: string (17 distinct values)
- config: dict
- downloads: int64 (0 to 59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (lengths 51 to 438k)
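For orientation, here is a minimal sketch of how rows with this schema could be read using the Hugging Face `datasets` library. The repository id in the snippet is a placeholder (an assumption), not the actual source of this preview.

```python
# Minimal sketch: reading rows that follow the schema above with the `datasets` library.
# The repository id is a placeholder (assumption); substitute the real dataset repo.
from datasets import load_dataset

ds = load_dataset("your-org/hub-model-cards", split="train")  # hypothetical repo id

# Each row carries: modelId, tags, pipeline_tag, config, downloads, first_commit, card.
for row in ds.select(range(3)):
    print(row["modelId"], row["pipeline_tag"], row["downloads"])
```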
ArashEsk95/bert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-10T09:16:23Z
--- tags: - Freeway-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Freeway-v5 type: Freeway-v5 metrics: - type: mean_reward value: 33.90 +/- 0.30 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Freeway-v5** This is a trained model of a PPO agent playing Freeway-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Freeway-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Freeway-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
ArashEsk95/bert-base-uncased-finetuned-stsb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-10T09:17:37Z
--- tags: - Freeway-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Freeway-v5 type: Freeway-v5 metrics: - type: mean_reward value: 32.40 +/- 0.66 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Freeway-v5** This is a trained model of a PPO agent playing Freeway-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Freeway-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Freeway-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Freeway-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
AriakimTaiyo/kumiko
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Gopher-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Gopher-v5 type: Gopher-v5 metrics: - type: mean_reward value: 1180.00 +/- 786.08 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Gopher-v5** This is a trained model of a PPO agent playing Gopher-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Gopher-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Gopher-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Gopher-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Gopher-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Ateeb/EmotionDetector
[ "pytorch", "funnel", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "FunnelForSequenceClassification" ], "model_type": "funnel", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
2023-02-10T10:49:20Z
--- tags: - Krull-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Krull-v5 type: Krull-v5 metrics: - type: mean_reward value: 9311.00 +/- 1402.73 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Krull-v5** This is a trained model of a PPO agent playing Krull-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Krull-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Krull-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Krull-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Krull-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Atlasky/Turkish-Negator
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-10T10:55:09Z
--- tags: - KungFuMaster-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: KungFuMaster-v5 type: KungFuMaster-v5 metrics: - type: mean_reward value: 38820.00 +/- 8075.12 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **KungFuMaster-v5** This is a trained model of a PPO agent playing KungFuMaster-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id KungFuMaster-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'KungFuMaster-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Augustvember/WokkaBot9
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-02-10T11:19:24Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-dataset-mentions co2_eq_emissions: emissions: 5.847890400310104 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 3390592974 - CO2 Emissions (in grams): 5.8479 ## Validation Metrics - Loss: 0.018 - Accuracy: 0.995 - Precision: 0.998 - Recall: 0.992 - AUC: 1.000 - F1: 0.995 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-dataset-mentions-3390592974 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-dataset-mentions-3390592974", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-dataset-mentions-3390592974", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ayham/albert_gpt2_Full_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: TestZee/t5-base-finetuned-question-generation-data-t5-base results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TestZee/t5-base-finetuned-question-generation-data-t5-base This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.0855 - Validation Loss: 4.4354 - Train Rouge1: 27.4892 - Train Rouge2: 8.6370 - Train Rougel: 24.3146 - Train Rougelsum: 24.3146 - Train Gen Len: 19.0 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 4.0855 | 4.4354 | 27.4892 | 8.6370 | 24.3146 | 24.3146 | 19.0 | 0 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
Ayham/roberta_gpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/massive_iot-roberta-large-v1-5-5 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_iot-roberta-large-v1-5-5") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Ayham/xlmroberta_large_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/massive_audio-roberta-large-v1-5-0 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_audio-roberta-large-v1-5-0") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Ayoola/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/massive_takeaway-roberta-large-v1-5-88 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_takeaway-roberta-large-v1-5-88") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Azaghast/GPT2-SCP-Miscellaneous
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.18 +/- 20.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Azizun/Geotrend-10-epochs
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/massive_alarm-roberta-large-v1-5-50 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive_alarm-roberta-large-v1-5-50") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BSen/wav2vec2-large-xls-r-300m-turkish-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter-v1-3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 21.40 +/- 18.57 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Barleysack/AERoberta2
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2023-02-10T14:35:43Z
--- license: mit tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification language: - multilingual --- --- You can read more about the importance and practical use of this model in this article: [Mental Health Monitor](https://medium.com/@uaritm/mental-health-monitor-b90f0b2ee7f6) # test_depres This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("test_depres") dict ={0:"positive", 1:"negative"} # Run inference preds = model(["What happened to me? I don't know what to do, where to go! Can anyone help me?"]) print(dict.get(preds.numpy()[0])) ``` ``` Warning: This model cannot be used for medical diagnosis and is not a substitute for a physician! ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` ## Citing & Authors ``` @misc{Uaritm, title={SetFit: Classification of medical texts}, author={Vitaliy Ostashko}, year={2023}, url={https://esemi.org} } <!--- Describe where people can find more information -->
Batsy24/DialoGPT-small-Twilight_EdBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - SpaceInvaders-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvaders-v5 type: SpaceInvaders-v5 metrics: - type: mean_reward value: 43485.00 +/- 22222.98 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **SpaceInvaders-v5** This is a trained model of a PPO agent playing SpaceInvaders-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id SpaceInvaders-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/SpaceInvaders-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id SpaceInvaders-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'SpaceInvaders-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Battlehooks/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO MlpPolicy results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 259.69 +/- 21.02 name: mean_reward verified: false --- # **PPO MlpPolicy** Agent playing **LunarLander-v2** This is a trained model of a **PPO MlpPolicy** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
BatuhanYilmaz/bert-finetuned-mrpc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - StarGunner-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: StarGunner-v5 type: StarGunner-v5 metrics: - type: mean_reward value: 272070.00 +/- 21288.36 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **StarGunner-v5** This is a trained model of a PPO agent playing StarGunner-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id StarGunner-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id StarGunner-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'StarGunner-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
BatuhanYilmaz/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - StarGunner-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: StarGunner-v5 type: StarGunner-v5 metrics: - type: mean_reward value: 260530.00 +/- 14312.31 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **StarGunner-v5** This is a trained model of a PPO agent playing StarGunner-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id StarGunner-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/StarGunner-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id StarGunner-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'StarGunner-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: split metrics: - name: Accuracy type: accuracy value: 0.9345 - name: F1 type: f1 value: 0.9345835158374045 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1468 - Accuracy: 0.9345 - F1: 0.9346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1695 | 1.0 | 250 | 0.1757 | 0.93 | 0.9298 | | 0.107 | 2.0 | 500 | 0.1468 | 0.9345 | 0.9346 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.11.0 - Datasets 2.9.0 - Tokenizers 0.10.3
BatuhanYilmaz/mlm-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-nc-4.0 language: - en library_name: sentence-transformers pipeline_tag: sentence-similarity ---
Baybars/debateGPT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Tennis-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Tennis-v5 type: Tennis-v5 metrics: - type: mean_reward value: -1.00 +/- 0.45 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Tennis-v5** This is a trained model of a PPO agent playing Tennis-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Tennis-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Tennis-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Tennis-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Baybars/wav2vec2-xls-r-300m-cv8-turkish
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "tr", "dataset:common_voice", "transformers", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - Tennis-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Tennis-v5 type: Tennis-v5 metrics: - type: mean_reward value: -0.90 +/- 0.30 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Tennis-v5** This is a trained model of a PPO agent playing Tennis-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Tennis-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Tennis-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Tennis-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Tennis-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
BeIR/query-gen-msmarco-t5-large-v1
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
1225
null
--- tags: - TimePilot-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: TimePilot-v5 type: TimePilot-v5 metrics: - type: mean_reward value: 52560.00 +/- 14611.86 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **TimePilot-v5** This is a trained model of a PPO agent playing TimePilot-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id TimePilot-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/TimePilot-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/TimePilot-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/TimePilot-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id TimePilot-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'TimePilot-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Beelow/model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 language: - tr metrics: - accuracy library_name: transformers pipeline_tag: text-classification widget: - text: >- HATAY DEFNE İLÇESİNE yardımlar gitmiyor Özellikle çadıra battaniyeye yiyeceğe ihtiyaç var. Antakyanın dışında olduğu için tüm yardimlar İSKENDERUNA ANTAKYAYA gidiyor. Bu bölgeye gitmiyor..DEFNE İLÇESİNE GİDECEK ERZAK Çadır yardımlarını example_title: Örnek --- **Train-Test Set:** "intent-multilabel-v1-2.zip" **Model:** "dbmdz/bert-base-turkish-cased" ## Tokenizer Params ``` max_length=128 padding="max_length" truncation=True ``` ## Training Params ``` evaluation_strategy = "epoch" save_strategy = "epoch" per_device_train_batch_size = 16 per_device_eval_batch_size = 16 num_train_epochs = 4 load_best_model_at_end = True ``` ## Train-Val Splitting Configuration ``` train_test_split(df_train, test_size=0.1, random_state=1111) ``` ## Training Log ``` Epoch Training Loss Validation Loss 1 No log 0.150276 2 0.195100 0.132906 3 0.107700 0.128633 4 0.107700 0.127795 ``` ## Threshold Optimization - **Best Threshold:** 0.1 - **F1 @ Threshold:** 0.734 ## Eval Results ``` precision recall f1-score support Alakasiz 0.90 0.87 0.89 734 Barinma 0.85 0.80 0.83 207 Elektronik 0.73 0.78 0.75 130 Giysi 0.83 0.66 0.73 94 Kurtarma 0.86 0.79 0.82 362 Lojistik 0.73 0.51 0.60 112 Saglik 0.74 0.74 0.74 108 Su 0.64 0.60 0.62 78 Yagma 0.68 0.55 0.61 31 Yemek 0.80 0.83 0.81 117 micro avg 0.84 0.79 0.81 1973 macro avg 0.78 0.71 0.74 1973 weighted avg 0.84 0.79 0.81 1973 samples avg 0.84 0.82 0.82 1973 ```
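A minimal inference sketch consistent with the setup above, assuming a multi-label head with per-class sigmoid scores cut at the reported best threshold of 0.1; the repo id `your-username/intent-multilabel-tr` and the example sentence are placeholders, not taken from the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id; substitute the actual fine-tuned checkpoint.
repo_id = "your-username/intent-multilabel-tr"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "Hatay Defne ilçesine çadır ve battaniye lazım"  # invented example input
inputs = tokenizer(text, max_length=128, padding="max_length", truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # multi-label: one independent sigmoid score per class
threshold = 0.1                   # best threshold reported above
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= threshold]
print(predicted)
```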
Beelow/wav2vec2-ukrainian-model-large
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wnut_17 metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: - task: name: Token Classification type: token-classification dataset: name: wnut_17 type: wnut_17 config: wnut_17 split: test args: wnut_17 metrics: - name: Precision type: precision value: 0.5337954939341422 - name: Recall type: recall value: 0.28544949026876737 - name: F1 type: f1 value: 0.37198067632850246 - name: Accuracy type: accuracy value: 0.9402761745970672 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2793 - Precision: 0.5338 - Recall: 0.2854 - F1: 0.3720 - Accuracy: 0.9403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2933 | 0.3959 | 0.1798 | 0.2473 | 0.9347 | | No log | 2.0 | 426 | 0.2793 | 0.5338 | 0.2854 | 0.3720 | 0.9403 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
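A short usage sketch for the fine-tuned NER model; the repo id below is a placeholder (the card does not say where the checkpoint is published) and the input sentence is invented for illustration:

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/my_awesome_wnut_model",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Heard that Rihanna is performing in Boston next week."))
```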
Benicio/t5-small-finetuned-en-to-ro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 219.19 +/- 89.86 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
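One possible way to fill in the usage stub above, as a sketch only: both the repo id and the checkpoint filename are assumptions (the conventional `huggingface_sb3` naming is used here), so check the repository's file list before running:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Repo id and filename are assumptions; adjust them to the actual repository contents.
checkpoint = load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
print(model.policy)  # inspect the loaded policy network
```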
Berzemu/Coco
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Tutankham-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Tutankham-v5 type: Tutankham-v5 metrics: - type: mean_reward value: 253.90 +/- 9.66 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Tutankham-v5** This is a trained model of a PPO agent playing Tutankham-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Tutankham-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Tutankham-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Tutankham-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Tutankham-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Tutankham-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Tutankham-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Bharathdamu/wav2vec2-large-xls-r-300m-hindi
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - Venture-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Venture-v5 type: Venture-v5 metrics: - type: mean_reward value: 200.00 +/- 189.74 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **Venture-v5** This is a trained model of a PPO agent playing Venture-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id Venture-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Venture-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id Venture-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'Venture-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Bharathdamu/wav2vec2-large-xls-r-300m-hindi3-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer model-index: - name: git-bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-bert This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
Bharathdamu/wav2vec2-model-hindi-stt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: bonadio/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Bia18/Beatriz
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - WizardOfWor-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: WizardOfWor-v5 type: WizardOfWor-v5 metrics: - type: mean_reward value: 19270.00 +/- 8651.94 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **WizardOfWor-v5** This is a trained model of a PPO agent playing WizardOfWor-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id WizardOfWor-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/WizardOfWor-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed3/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id WizardOfWor-v5 --seed 3 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'WizardOfWor-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 3, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
BigSalmon/FroBurta
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - stereoset metrics: - accuracy model-index: - name: roberta-large_stereoset_finetuned results: - task: name: Text Classification type: text-classification dataset: name: stereoset type: stereoset config: intersentence split: validation args: intersentence metrics: - name: Accuracy type: accuracy value: 0.8335949764521193 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_stereoset_finetuned This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the stereoset dataset. It achieves the following results on the evaluation set: - Loss: 0.7989 - Accuracy: 0.8336 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.21 | 5 | 0.6920 | 0.5196 | | No log | 0.42 | 10 | 0.6909 | 0.5290 | | No log | 0.62 | 15 | 0.6899 | 0.5220 | | No log | 0.83 | 20 | 0.6883 | 0.5408 | | No log | 1.04 | 25 | 0.6573 | 0.6609 | | No log | 1.25 | 30 | 0.5892 | 0.7088 | | No log | 1.46 | 35 | 0.6633 | 0.5408 | | No log | 1.67 | 40 | 0.6322 | 0.6852 | | No log | 1.88 | 45 | 0.6393 | 0.7159 | | No log | 2.08 | 50 | 0.5494 | 0.7410 | | No log | 2.29 | 55 | 0.5498 | 0.7386 | | No log | 2.5 | 60 | 0.5069 | 0.7692 | | No log | 2.71 | 65 | 0.4930 | 0.7630 | | No log | 2.92 | 70 | 0.4939 | 0.7614 | | No log | 3.12 | 75 | 0.5379 | 0.7724 | | No log | 3.33 | 80 | 0.5981 | 0.7732 | | No log | 3.54 | 85 | 0.5842 | 0.7716 | | No log | 3.75 | 90 | 0.4405 | 0.8030 | | No log | 3.96 | 95 | 0.4970 | 0.7951 | | No log | 4.17 | 100 | 0.5172 | 0.8093 | | No log | 4.38 | 105 | 0.5052 | 0.8108 | | No log | 4.58 | 110 | 0.4685 | 0.8085 | | No log | 4.79 | 115 | 0.4663 | 0.8218 | | No log | 5.0 | 120 | 0.5086 | 0.8218 | | No log | 5.21 | 125 | 0.5096 | 0.8179 | | No log | 5.42 | 130 | 0.5705 | 0.8203 | | No log | 5.62 | 135 | 0.5294 | 0.8312 | | No log | 5.83 | 140 | 0.4377 | 0.8375 | | No log | 6.04 | 145 | 0.5699 | 0.8100 | | No log | 6.25 | 150 | 0.6062 | 0.8265 | | No log | 6.46 | 155 | 0.7237 | 0.8218 | | No log | 6.67 | 160 | 0.6816 | 0.8210 | | No log | 6.88 | 165 | 0.6413 | 0.8124 | | No log | 7.08 | 170 | 0.5931 | 0.8359 | | No log | 7.29 | 175 | 0.6149 | 0.8399 | | No log | 7.5 | 180 | 0.7190 | 0.8195 | | No log | 7.71 | 185 | 0.7339 | 0.8352 | | No log | 7.92 | 190 | 0.7244 | 0.8352 | | No log | 8.12 | 195 | 0.7722 | 0.8203 | | No log | 8.33 | 200 | 0.6890 | 0.8344 | | No log | 8.54 | 205 | 0.6938 | 0.8336 | | No log | 8.75 | 210 | 0.7234 | 0.8320 | | No log | 8.96 | 215 | 0.7517 | 0.8391 | | No log | 9.17 | 220 | 0.7713 | 0.8383 | | No log | 9.38 | 225 | 0.7745 | 0.8375 | | No log | 9.58 | 230 | 0.8006 | 0.8375 | | No log | 9.79 | 235 | 0.8003 | 0.8367 | | No log | 10.0 | 240 | 0.7989 | 0.8336 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
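A hedged inference sketch for the intersentence classifier described above. The repo id is a placeholder, and feeding the context and the candidate follow-up sentence as a text pair is an assumption about the preprocessing, which the card does not spell out:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id; the card does not give the published checkpoint name.
repo_id = "your-username/roberta-large_stereoset_finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Assumed input format: (context, candidate follow-up sentence) as a sentence pair.
context = "Many commuters take the train to work."
sentence = "The carriages are usually crowded in the morning."
inputs = tokenizer(context, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```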
BigSalmon/GPT2HardArticleEasyArticle
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - stereoset metrics: - accuracy model-index: - name: bert-large-uncased_stereoset_finetuned results: - task: name: Text Classification type: text-classification dataset: name: stereoset type: stereoset config: intersentence split: validation args: intersentence metrics: - name: Accuracy type: accuracy value: 0.771585557299843 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased_stereoset_finetuned This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the stereoset dataset. It achieves the following results on the evaluation set: - Loss: 1.0729 - Accuracy: 0.7716 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.21 | 5 | 0.6925 | 0.5071 | | No log | 0.42 | 10 | 0.6978 | 0.5008 | | No log | 0.62 | 15 | 0.6891 | 0.5275 | | No log | 0.83 | 20 | 0.6850 | 0.5487 | | No log | 1.04 | 25 | 0.7521 | 0.5126 | | No log | 1.25 | 30 | 0.6577 | 0.6177 | | No log | 1.46 | 35 | 0.6759 | 0.5440 | | No log | 1.67 | 40 | 0.6395 | 0.6405 | | No log | 1.88 | 45 | 0.6064 | 0.6719 | | No log | 2.08 | 50 | 0.5822 | 0.6986 | | No log | 2.29 | 55 | 0.5566 | 0.7096 | | No log | 2.5 | 60 | 0.5411 | 0.7331 | | No log | 2.71 | 65 | 0.5448 | 0.7551 | | No log | 2.92 | 70 | 0.5384 | 0.7339 | | No log | 3.12 | 75 | 0.5487 | 0.7535 | | No log | 3.33 | 80 | 0.5572 | 0.7567 | | No log | 3.54 | 85 | 0.5763 | 0.7614 | | No log | 3.75 | 90 | 0.5756 | 0.7645 | | No log | 3.96 | 95 | 0.5524 | 0.7645 | | No log | 4.17 | 100 | 0.6320 | 0.7614 | | No log | 4.38 | 105 | 0.6512 | 0.7575 | | No log | 4.58 | 110 | 0.6582 | 0.7606 | | No log | 4.79 | 115 | 0.6731 | 0.7669 | | No log | 5.0 | 120 | 0.6944 | 0.7575 | | No log | 5.21 | 125 | 0.7142 | 0.7575 | | No log | 5.42 | 130 | 0.7004 | 0.7645 | | No log | 5.62 | 135 | 0.6794 | 0.7630 | | No log | 5.83 | 140 | 0.7108 | 0.7606 | | No log | 6.04 | 145 | 0.7730 | 0.7590 | | No log | 6.25 | 150 | 0.8083 | 0.7614 | | No log | 6.46 | 155 | 0.8361 | 0.7653 | | No log | 6.67 | 160 | 0.8498 | 0.7692 | | No log | 6.88 | 165 | 0.8769 | 0.7700 | | No log | 7.08 | 170 | 0.8324 | 0.7582 | | No log | 7.29 | 175 | 0.7945 | 0.7645 | | No log | 7.5 | 180 | 0.8480 | 0.7684 | | No log | 7.71 | 185 | 0.8905 | 0.7724 | | No log | 7.92 | 190 | 0.9560 | 0.7700 | | No log | 8.12 | 195 | 0.9976 | 0.7669 | | No log | 8.33 | 200 | 1.0315 | 0.7677 | | No log | 8.54 | 205 | 1.0413 | 0.7692 | | No log | 8.75 | 210 | 1.0216 | 0.7708 | | No log | 8.96 | 215 | 1.0251 | 0.7716 | | No log | 9.17 | 220 | 1.0483 | 0.7716 | | No log | 9.38 | 225 | 1.0616 | 0.7716 | | No log | 9.58 | 230 | 1.0703 | 0.7708 | | No log | 9.79 | 235 | 1.0731 | 0.7732 | | No log | 10.0 | 240 | 1.0729 | 0.7716 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
BigSalmon/GPTHeHe
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - recall - precision model-index: - name: vaccinated results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vaccinated This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6907 - Accuracy: 0.9036 - F1: 0.9048 - Recall: 0.8636 - Precision: 0.95 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
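A minimal sketch of how a fine-tuned sequence classifier like this is typically queried; the repo id and the example sentence are placeholders, since the card does not name the dataset, the labels, or the published checkpoint:

```python
from transformers import pipeline

# Placeholder repo id; substitute the published checkpoint.
classifier = pipeline("text-classification", model="your-username/vaccinated")
print(classifier("I got my second dose at the clinic yesterday."))  # invented example input
```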
BigSalmon/GPTIntro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer model-index: - name: another_simplificator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # another_simplificator This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Tokenizers 0.13.2
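A hedged generation sketch for the fine-tuned rugpt3small simplifier; the repo id is a placeholder and the prompt format is an assumption, since the card does not describe how source sentences were presented during training:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repo id; the card does not state the published checkpoint name.
repo_id = "your-username/another_simplificator"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Invented Russian input sentence; the real prompt format used in training is unknown.
prompt = "Это предложение слишком сложное, и его нужно упростить."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```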
BigSalmon/GPTNeo350MInformalToFormalLincoln
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-02-10T16:47:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.50 +/- 13.34 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
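The usage stub can be completed along these lines, as a sketch only: the repo id and filename are assumptions, and the rollout uses classic `gym` as in stable-baselines3 1.x (swap in `gymnasium` for SB3 2.x):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Repo id and filename are assumptions; check the repository's file list.
checkpoint = load_from_hub("your-username/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```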
BigSalmon/GPTNeo350MInformalToFormalLincoln2
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - ChopperCommand-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ChopperCommand-v5 type: ChopperCommand-v5 metrics: - type: mean_reward value: 38660.00 +/- 32345.67 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **ChopperCommand-v5** This is a trained model of a PPO agent playing ChopperCommand-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool_impala_atari_wrapper.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool_impala_atari_wrapper --env-id ChopperCommand-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/sebulba_ppo_envpool_impala_atari_wrapper.py curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-sebulba_ppo_envpool_impala_atari_wrapper-seed2/raw/main/poetry.lock poetry install --all-extras python sebulba_ppo_envpool_impala_atari_wrapper.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 5 6 --track --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 7680, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'ent_coef': 0.01, 'env_id': 'ChopperCommand-v5', 'exp_name': 'sebulba_ppo_envpool_impala_atari_wrapper', 'gae_lambda': 0.95, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learner_device_ids': [1, 2, 3, 4, 5, 6], 'learning_rate': 0.00025, 'max_grad_norm': 0.5, 'minibatch_size': 1920, 'norm_adv': True, 'num_actor_threads': 1, 'num_envs': 60, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 6510, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
BigSalmon/GPTNeo350MInformalToFormalLincoln4
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: openrail datasets: - food101 language: - en metrics: - accuracy library_name: keras pipeline_tag: image-classification ---
BigSalmon/GoodMaskResults
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_social-roberta-large-v1-5-7 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_social-roberta-large-v1-5-7") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BigSalmon/InformalToFormalLincoln14
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### elireyes Dreambooth model trained by lolo503 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
BigSalmon/InformalToFormalLincoln16
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_transport-roberta-large-v1-5-4 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_transport-roberta-large-v1-5-4") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BigSalmon/InformalToFormalLincoln20
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_calendar-roberta-large-v1-5-93 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_calendar-roberta-large-v1-5-93") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BigSalmon/InformalToFormalLincoln22
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2023-02-10T17:23:36Z
--- pipeline_tag: translation language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: apache-2.0 --- This is a [COMET](https://github.com/Unbabel/COMET) evaluation model: it receives a triplet with (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation compared to both source and reference. # Paper [COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task](https://aclanthology.org/2022.wmt-1.52) (Rei et al., WMT 2022) # License Apache-2.0 # Usage (unbabel-comet) Using this model requires unbabel-comet to be installed: ```bash pip install --upgrade pip # ensures that pip is current pip install unbabel-comet ``` Then you can use it through the comet CLI: ```bash comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model Unbabel/wmt22-comet-da ``` Or using Python: ```python from comet import download_model, load_from_checkpoint model_path = download_model("Unbabel/wmt22-comet-da") model = load_from_checkpoint(model_path) data = [ { "src": "Dem Feuer konnte Einhalt geboten werden", "mt": "The fire could be stopped", "ref": "They were able to control the fire." }, { "src": "Schulen und Kindergärten wurden eröffnet.", "mt": "Schools and kindergartens were open", "ref": "Schools and kindergartens opened" } ] model_output = model.predict(data, batch_size=8, gpus=1) print(model_output) ``` # Intended uses Our model is intended to be used for **MT evaluation**. Given a triplet with (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation. # Languages Covered: This model builds on top of XLM-R, which covers the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
BigSalmon/InformalToFormalLincoln23
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_play-roberta-large-v1-5-71 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_play-roberta-large-v1-5-71") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BigSalmon/MrLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_datetime-roberta-large-v1-5-94 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_datetime-roberta-large-v1-5-94") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BigSalmon/MrLincoln12
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for a model trained following Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class), not using accelerate yet. This model is a diffusion model for unconditional image generation of cute but small 🦋. The model was trained with 1000 images using the [DDPM](https://arxiv.org/abs/2006.11239) architecture. Generated images are 64x64 pixels. The model was trained for 50 epochs with a batch size of 64, using around 11 GB of GPU memory. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained("Likalto4/Unconditional_Butterflies_x64") image = pipeline().images[0] image ```
BigSalmon/T5F
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- tags: - generated_from_trainer model-index: - name: se-bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # se-bert This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.8.0 - Tokenizers 0.13.2
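For reference, the hyperparameters listed above map roughly onto the following `TrainingArguments` sketch; the output directory is a placeholder and the model and dataset are not specified in this card, so this only illustrates the configuration, not the original training script.

```python
from transformers import TrainingArguments

# Sketch of the configuration listed above; "se-bert-output" is a
# placeholder, since the actual model and dataset are unknown.
training_args = TrainingArguments(
    output_dir="se-bert-output",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```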
BigSalmon/TS3
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_lists-roberta-large-v1-5-93 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_lists-roberta-large-v1-5-93") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BigSalmon/prepositions
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 243.22 +/- 15.02 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
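Since the usage section above is still a TODO, here is a minimal loading sketch; the repo id and filename are placeholders (they are not given in this card), and the rollout assumes the classic Gym API used by Stable-Baselines3 1.x.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# repo_id and filename are placeholders; point them at the actual checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```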
BigTooth/DialoGPT-small-tohru
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- pipeline_tag: translation language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: apache-2.0 tags: - arXiv:2010.15535 - PyTorch --- This is a [COMET](https://github.com/Unbabel/COMET) evaluation model: It receives a triplet with (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation compared to both source and reference. **NOTE:** This model was recently replaced by an improved version, [wmt22-comet-da](https://huggingface.co/Unbabel/wmt22-comet-da). # Paper [Unbabel’s Participation in the WMT20 Metrics Shared Task](https://aclanthology.org/2020.wmt-1.101) (Rei et al., WMT 2020) # License Apache-2.0 # Usage (unbabel-comet) Using this model requires unbabel-comet to be installed: ```bash pip install --upgrade pip # ensures that pip is current pip install unbabel-comet ``` Then you can use it through the comet CLI: ```bash comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model Unbabel/wmt20-comet-da ``` Or using Python: ```python from comet import download_model, load_from_checkpoint model_path = download_model("Unbabel/wmt20-comet-da") model = load_from_checkpoint(model_path) data = [ { "src": "Dem Feuer konnte Einhalt geboten werden", "mt": "The fire could be stopped", "ref": "They were able to control the fire." }, { "src": "Schulen und Kindergärten wurden eröffnet.", "mt": "Schools and kindergartens were open", "ref": "Schools and kindergartens opened" } ] model_output = model.predict(data, batch_size=8, gpus=1) print(model_output) ``` # Intended uses Our model is intended to be used for **MT evaluation**. Given a triplet with (source sentence, translation, reference translation), it outputs a single score. This score is unbounded but typically falls between -1 and 1, where 1 reflects a perfect translation. # Languages Covered: This model builds on top of XLM-R, which covers the following languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish. Thus, results for language pairs containing uncovered languages are unreliable!
BigeS/DialoGPT-small-Rick
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_qa-roberta-large-v1-5-73 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_qa-roberta-large-v1-5-73") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BinksSachary/ShaxxBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/domain_transfer_general-massive_takeaway-roberta-large-v1-5-90 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_takeaway-roberta-large-v1-5-90") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
BobBraico/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: atorre/poca-SoccerTwos-40M 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Brona/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8645329998294582 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1355 - F1: 0.8645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2582 | 1.0 | 525 | 0.1612 | 0.8199 | | 0.128 | 2.0 | 1050 | 0.1334 | 0.8484 | | 0.081 | 3.0 | 1575 | 0.1355 | 0.8645 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
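As an illustration (not part of the original card), a PAN-X.de checkpoint like this one can be queried through the token-classification pipeline; the model id below is a placeholder for wherever this fine-tuned model is hosted.

```python
from transformers import pipeline

# Placeholder model id; replace with the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="<user>/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```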
BrunoNogueira/DialoGPT-kungfupanda
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3998 - Accuracy: 0.4516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.3705 | 0.3781 | | 1.3868 | 2.0 | 776 | 1.2408 | 0.4323 | | 1.0102 | 3.0 | 1164 | 1.3998 | 0.4516 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
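As an illustration (not part of the original card), this kind of fine-tuned review classifier can be used through the text-classification pipeline; the model id is a placeholder for wherever this checkpoint is hosted.

```python
from transformers import pipeline

# Placeholder model id; replace with the actual location of this checkpoint.
clf = pipeline("text-classification", model="<user>/clasificador-muchocine")
print(clf("Una película entretenida, aunque el guion flojea en la segunda mitad."))
```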
Brunomezenga/NN
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: yizhangliu/poca-SoccerTwos-v8 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Bryan190/Aguy190
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: bonadio/poca-SoccerTwos-v2 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BumBelDumBel/ZORK_AI_SCIFI
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- tags: - classification - generated_from_trainer metrics: - accuracy model-index: - name: clasificador-muchocine results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasificador-muchocine This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4058 - Accuracy: 0.4516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.3498 | 0.3948 | | 1.3866 | 2.0 | 776 | 1.3205 | 0.4310 | | 1.0028 | 3.0 | 1164 | 1.4058 | 0.4516 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
CALM/backup
[ "lean_albert", "transformers" ]
null
{ "architectures": [ "LeanAlbertForPretraining", "LeanAlbertForTokenClassification", "LeanAlbertForSequenceClassification" ], "model_type": "lean_albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - tedlium2 license: cc-by-4.0 --- ## ESPnet2 ASR model ### `pyf98/tedlium2_transducer_conformer_e12_linear2048` This model was trained by Yifan Peng using tedlium2 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout e06c0a97425c4d5deb4d3d14922da1f91504052e pip install -e . cd egs2/tedlium2/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model pyf98/tedlium2_transducer_conformer_e12_linear2048 ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Wed Feb 8 22:07:40 CST 2023` - python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]` - espnet version: `espnet 202301` - pytorch version: `pytorch 1.13.1` - Git hash: `478ba004e114e7862b05fb01112de7f7e1da3996` - Commit date: `Tue Feb 7 00:50:49 2023 +0000` ## asr_train_asr_transducer_conformer_e12_linear2048_raw_en_bpe500_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|14671|93.3|4.5|2.3|1.1|7.8|71.2| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|27500|93.2|4.2|2.6|1.0|7.8|65.6| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|78259|97.0|0.9|2.1|1.0|3.9|71.2| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|145066|96.9|0.9|2.2|0.9|4.0|65.6| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_transducer_asr_model_valid.loss.ave/dev|466|28296|94.6|3.0|2.4|0.9|6.3|71.2| |decode_asr_transducer_asr_model_valid.loss.ave/test|1155|52113|94.8|2.7|2.5|0.9|6.0|65.6| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transducer_conformer_e12_linear2048.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transducer_conformer_e12_linear2048_raw_en_bpe500_sp ngpu: 1 seed: 2022 num_workers: 6 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 37613 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 5 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 10000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe500_sp/train/speech_shape - exp/asr_stats_raw_en_bpe500_sp/train/text_shape.bpe valid_shape_file: - 
exp/asr_stats_raw_en_bpe500_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe500_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.002 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 15000 token_list: - <blank> - <unk> - s - ▁the - t - ▁a - ▁and - ▁to - d - e - ▁of - '''' - n - ing - ▁in - ▁i - ▁that - i - a - l - p - m - y - o - ▁it - ▁we - c - u - ▁you - ed - ▁ - r - ▁is - re - ▁this - ar - g - ▁so - al - b - ▁s - or - ▁f - ▁c - in - k - f - ▁for - ic - er - le - ▁be - ▁do - ▁re - ve - ▁e - ▁w - ▁was - es - ▁they - ly - h - ▁on - v - ▁are - ri - ▁have - an - ▁what - ▁with - ▁t - w - ur - it - ent - ▁can - ▁he - ▁but - ra - ce - ▁me - ▁b - ▁ma - ▁p - ll - ▁st - ▁one - 'on' - ▁about - th - ▁de - en - ▁all - ▁not - il - ▁g - ch - at - ▁there - ▁mo - ter - ation - tion - ▁at - ▁my - ro - ▁as - te - ▁le - ▁con - ▁like - ▁people - ▁or - ▁an - el - ▁if - ▁from - ver - ▁su - ▁co - ate - ▁these - ol - ci - ▁now - ▁see - ▁out - ▁our - ion - ▁know - ect - ▁just - as - ▁ex - ▁ch - ▁d - ▁when - ▁very - ▁think - ▁who - ▁because - ▁go - ▁up - ▁us - ▁pa - ▁no - ies - ▁di - ▁ho - om - ive - ▁get - id - ▁o - ▁hi - un - ▁how - ▁by - ir - et - ck - ity - ▁po - ul - ▁which - ▁mi - ▁some - z - ▁sp - ▁un - ▁going - ▁pro - ist - ▁se - ▁look - ▁time - ment - de - ▁more - ▁had - ng - ▁would - ge - la - ▁here - ▁really - x - ▁your - ▁them - us - me - ▁en - ▁two - ▁k - ▁li - ▁world - ne - ow - ▁way - ▁want - ▁work - ▁don - ▁lo - ▁fa - ▁were - ▁their - age - vi - ▁ha - ac - der - est - ▁bo - am - ▁other - able - ▁actually - ▁sh - ▁make - ▁ba - ▁la - ine - ▁into - ▁where - ▁could - ▁comp - ting - ▁has - ▁will - ▁ne - j - ical - ally - ▁vi - ▁things - ▁te - igh - ▁say - ▁years - ers - ▁ra - ther - ▁than - ru - ▁ro - op - ▁did - ▁any - ▁new - ound - ig - ▁well - mo - ▁she - ▁na - ▁been - he - ▁thousand - ▁car - ▁take - ▁right - ▁then - ▁need - ▁start - ▁hundred - ▁something - ▁over - ▁com - ia - ▁kind - um - if - ▁those - ▁first - ▁pre - ta - ▁said - ize - end - ▁even - ▁thing - one - ▁back - ite - ▁every - ▁little - ry - ▁life - ▁much - ke - ▁also - ▁most - ant - per - ▁three - ▁come - ▁lot - ance - ▁got - ▁talk - ▁per - ▁inter - ▁sa - ▁use - ▁mu - ▁part - ish - ence - ▁happen - ▁bi - ▁mean - ough - ▁qu - ▁bu - ▁day - ▁ga - ▁only - ▁many - ▁different - ▁dr - ▁th - ▁show - ful - ▁down - ated - ▁good - ▁tra - ▁around - ▁idea - ▁human - ous - ▁put - ▁through - ▁five - ▁why - ▁change - ▁real - ff - ible - ▁fact - ▁same - ▁jo - ▁live - ▁year - ▁problem - ▁ph - ▁four - ▁give - ▁big - ▁tell - ▁great - ▁try - ▁va - ▁ru - ▁system - ▁six - ▁plan - ▁place - ▁build - ▁called - ▁again - ▁point - ▁twenty - ▁percent - ▁nine - ▁find - ▁app - ▁after - ▁long - ▁eight - ▁imp - ▁gene - ▁design - ▁today - ▁should - ▁made - ious - ▁came - ▁learn - ▁last - ▁own - way - ▁turn - ▁seven - ▁high - ▁question - ▁person - ▁brain - ▁important - ▁another - ▁thought - ▁trans - ▁create - ness - ▁hu - ▁power - ▁act - land - ▁play - 
▁sort - ▁old - ▁before - ▁course - ▁understand - ▁feel - ▁might - ▁each - ▁million - ▁better - ▁together - ▁ago - ▁example - ▁help - ▁story - ▁next - ▁hand - ▁school - ▁water - ▁develop - ▁technology - que - ▁second - ▁grow - ▁still - ▁cell - ▁believe - ▁number - ▁small - ▁between - qui - ▁data - ▁become - ▁america - ▁maybe - ▁space - ▁project - ▁organ - ▁vo - ▁children - ▁book - graph - ▁open - ▁fifty - ▁picture - ▁health - ▁thirty - ▁africa - ▁reason - ▁large - ▁hard - ▁computer - ▁always - ▁sense - ▁money - ▁women - ▁everything - ▁information - ▁country - ▁teach - ▁energy - ▁experience - ▁food - ▁process - qua - ▁interesting - ▁future - ▁science - q - '0' - '5' - '6' - '9' - '3' - '8' - '4' - N - A - '7' - S - G - F - R - L - U - E - T - H - _ - B - D - J - M - ă - ō - ť - '2' - '-' - '1' - C - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: joint_space_size: 320 use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram500/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 aux_ctc_tasks: [] frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 5 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe500_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 report_cer: false report_wer: false preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transducer decoder_conf: rnn_type: lstm num_layers: 1 hidden_size: 256 dropout: 0.1 dropout_embed: 0.2 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202301' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
null
--- tags: - espnet - audio - automatic-speech-recognition language: de datasets: - speechcatcher license: mit --- ## Speechcatcher ESPnet streaming ASR model XL for German ASR ### `speechcatcher/speechcatcher_german_espnet_streaming_transformer_26k_train_size_xl_raw_de_bpe1024` This model was trained by bmilde using speechcatcher recipe in [espnet](https://github.com/speechcatcher-asr/espnet/tree/egs2-speechcatcher-de). ### Demo: How to use the model Global installation: ```bash sudo apt-get install portaudio19-dev python3.10-dev ffmpeg # on mac: # brew install portaudio ffmpeg pip3 install git+https://github.com/speechcatcher-asr/speechcatcher speechcatcher -m de_streaming_transformer_xl mediafile.mp4 # or with a microphone: speechcatcher -m de_streaming_transformer_xl -l ``` Virtual environment: ```bash virtualenv -p python3.10 speechcatcher_env source speechcatcher_env/bin/activate pip3 install git+https://github.com/speechcatcher-asr/speechcatcher speechcatcher -m de_streaming_transformer_xl mediafile.mp4 # or with a microphone: speechcatcher -m de_streaming_transformer_xl -l ``` # RESULTS Tuda-de-raw: 2.76% CER (without LM) Tuda-de-raw: 9.65% WER (without LM) Note: Tuda-de-raw results are based on raw tuda-de test utterances without the normalization step. It may not be directly comparable to regular tuda-de results. # Speechcatcher training Speechcatcher models are trained by using Whisper large as a teacher model: ![Speechcatcher Teacher/student training](https://github.com/speechcatcher-asr/speechcatcher/raw/main/speechcatcher_training.svg) See [speechcatcher-data](https://github.com/speechcatcher-asr/speechcatcher-data) for code and more info on replicating the training process. # Sponsors Speechcatcher was gracefully funded by <a href="https://media-tech-lab.com">Media Tech Lab by Media Lab Bayern</a> (<a href="https://github.com/media-tech-lab">@media-tech-lab</a>) <a href="https://media-tech-lab.com"> <img src="https://raw.githubusercontent.com/media-tech-lab/.github/main/assets/mtl-powered-by.png" width="240" title="Media Tech Lab powered by logo"> </a> # Citing ```BibTex @misc{milde2023speechcatcher, author = {Milde, Benjamin}, title = {Speechcatcher}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/speechcatcher-asr/speechcatcher}}, } ``` ## ASR config <details><summary>expand</summary> ``` config: conf/train_asr_streaming_transformer_size_l.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_streaming_transformer_size_l_raw_de_bpe1024 ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 0 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 55625 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 20 patience: 3 val_scheduler_criterion: - valid - acc early_stopping_criterion: - valid - acc - max best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null 
wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 64 valid_batch_size: null batch_bins: 1000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_de_bpe1024/train/speech_shape - exp/asr_stats_raw_de_bpe1024/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_de_bpe1024/valid/speech_shape - exp/asr_stats_raw_de_bpe1024/valid/text_shape.bpe batch_type: folded valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train/wav.scp - speech - sound - - dump/raw/train/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null exclude_weight_decay: false exclude_weight_decay_conf: {} optim: adam optim_conf: lr: 0.001 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - ',' - . - t - ▁ - e - en - s - n - ▁ich - ▁das - ▁und - ▁die - er - ▁ist - ▁auch - ▁so - st - ▁der - ▁nicht - ▁es - ▁ein - r - ▁in - f - ▁dann - ▁ja - d - ▁da - g - h - m - o - u - b - ▁wir - ▁zu - ▁du - ▁ge - ▁Und - i - a - ▁mit - ▁den - in - ▁man - l - ▁auf - ▁dass - sch - ▁jetzt - '?' - ge - ▁was - ▁er - ▁Ja - ▁hat - '-' - p - ▁war - ▁eine - ▁F - ▁aber - ▁mal - ▁oder - y - ▁noch - te - ung - ▁haben - ▁Ich - ▁be - ▁Das - ▁wie - ä - ▁an - ▁habe - k - ▁von - ▁sich - ▁K - al - ▁wenn - la - ▁schon - ig - ra - lich - re - de - ch - ▁für - it - ▁Also - w - ▁A - es - ▁sind - ▁ver - le - or - ▁sie - ▁B - ü - ▁also - ▁ganz - ▁T - ▁im - ▁dem - ter - an - ck - ▁St - ▁aus - ▁G - ▁kann - ▁bei - ▁halt - ▁H - el - ▁immer - z - ▁einfach - ▁P - ö - ▁S - ▁weil - ▁mir - se - ▁f - ut - ten - ▁wo - ▁Sch - us - ▁vor - ur - ▁sehr - ri - kt - ing - ▁E - il - ▁gut - ▁mich - ▁Aber - 'on' - und - cht - ▁als - den - ar - ie - um - ▁uns - ste - ▁Da - hr - ▁über - be - ▁einen - ▁Be - ▁ihr - is - ▁wieder - ▁glaube - ▁Ge - at - ▁irgendwie - li - ▁nur - we - ro - ▁bisschen - he - ▁mehr - ▁M - tz - ▁muss - gen - ▁sagen - ben - ▁wirklich - ▁alle - nd - ▁wird - ▁gibt - ▁um - ▁m - ▁natürlich - ▁viel - me - nt - et - ▁diese - ▁U - '0' - ▁sein - ▁nach - ▁hier - ▁meine - ern - lo - ion - ▁eigentlich - ▁O - ▁machen - ▁bin - ▁So - ll - ▁hast - ▁weiß - ▁Re - c - ▁I - ▁sch - ▁C - ▁vielleicht - iert - ach - ▁b - ne - x - ze - rei - ru - ma - ▁zum - ▁finde - ß - ▁N - ▁Die - rt - ich - ▁Ma - uch - ▁eben - rü - ▁Ver - ein - ▁In - R - ieren - ▁Ha - ssen - ft - chen - am - di - der - hl - ▁Es - ▁gesagt - zu - ▁ne - ▁An - ▁k - ▁1 - ▁am - hn - ▁gerade - pp - her - ▁alles - nen - ▁geht - ▁genau - ha - ▁Jahr - ▁re - ▁werden - ▁w - ▁Z - isch - ▁p - ▁Er - ke - ▁Wir - au - mm - ik - ▁mein - ▁dir - ▁einem - un - ▁würde - ▁We - ▁zwei - v - ▁doch - ▁keine - ▁erst - na - and - ▁gar - ▁hin - ▁durch - ▁V - kommen - ell - ul - end - ▁können - j - fe - ▁richtig - ff - ▁Me - ▁andere - lie - '...' 
- wi - ol - art - ▁Leute - ▁Zeit - ▁Ein - ran - ner - ▁ab - nk - ation - ▁viele - ▁g - S - rie - ▁ob - im - ver - ür - rk - ▁einer - men - ▁ent - iv - lei - ▁gemacht - sp - ▁hatte - ▁weiter - sten - che - ang - all - ir - hör - ▁Was - aus - ier - ▁Ne - ▁Li - ▁hab - ass - L - igen - zi - ungen - ▁Spiel - ▁will - ▁unter - ag - ▁macht - ber - ▁Sp - zen - ▁denn - ken - ▁des - ▁Ka - lle - id - sen - ▁dich - ▁st - ▁Du - ▁kommt - spiel - ▁Fall - ▁Man - ▁Se - ▁W - ▁dieser - ▁Ko - ga - ▁De - ▁groß - ▁Le - ▁schön - ▁La - ▁jeden - ▁D - ▁Genau - gt - ▁dieses - ungs - ▁J - pro - ▁Co - ▁Beispiel - ▁heißt - ▁s - ist - rä - ho - ▁damit - ▁Wo - ▁unsere - ▁le - ert - '5' - ni - tt - gel - ▁her - ve - ▁sondern - mp - reich - ▁Sa - '''' - ▁lang - ▁rein - ▁neu - ▁sagt - ▁tatsächlich - ▁kein - är - nehmen - ▁bis - elt - ad - teil - ▁euch - ta - ▁a - ▁anderen - ▁raus - op - ▁Der - ige - arbeit - ▁Film - ▁Ba - ▁heute - ▁wäre - ▁nochmal - ▁ange - ▁Sie - ick - ▁of - ler - ▁un - ische - weise - lä - kl - ▁Na - iß - wa - ▁wer - ▁Ding - ▁okay - ▁Ra - halt - ▁we - ▁Pa - ▁Thema - heit - ▁ko - ▁Dann - ▁diesen - schaft - ▁möchte - ▁hätte - lu - ▁Al - bar - ▁Tag - mo - ▁Wie - ▁waren - ▁sp - ▁wurde - ▁Auf - ce - ▁Frage - ▁kannst - wo - ▁Mi - ▁deine - ▁To - mi - ▁dazu - äng - ▁bist - ischen - ▁Mo - ▁ihn - 'no' - zieh - ▁Ab - ▁kommen - ▁Menschen - anz - ▁Wenn - ▁ha - ▁Vor - ▁Ro - stell - ▁Zu - ▁je - rau - eln - ab - hin - ka - schau - ▁Pro - ger - P - ▁Bo - ▁gerne - ko - nis - ▁drei - ▁gleich - ld - ▁klar - ack - ▁Aus - ün - ▁nie - A - ▁tr - ▁seine - ▁Mit - geben - ▁soll - '4' - ▁diesem - lau - ▁müssen - ▁kleine - ▁kurz - mmer - ment - stellen - ▁Wa - ▁Podcast - ▁Wi - ▁the - ▁Woche - ▁guck - ▁quasi - ▁Ho - mal - ▁sei - ▁Po - krieg - aff - ▁nächste - itz - ▁20 - tag - '9' - ▁Ende - richt - uck - ör - ▁2 - dem - mpf - vi - 'off' - ▁Leben - ▁wichtig - ▁gesehen - ▁gehen - ress - ▁sag - M - ▁echt - ▁etwas - stand - zähl - führ - T - ▁wenig - ▁zusammen - ▁paar - ▁Di - ▁einmal - bo - ▁sehen - ▁Sachen - ▁Kon - bi - ▁dabei - gend - pass - ic - ▁könnte - ▁Weil - zeit - ▁denke - F - ▁Folge - man - ▁wollte - kauf - ▁weg - ▁3 - ▁selbst - '1' - hol - co - ▁wollen - bau - '2' - B - ▁wahrscheinlich - ank - ▁Mal - ▁letzten - fahren - ▁vom - ▁Do - hi - ▁eher - D - ▁selber - ord - ▁super - ▁musst - ▁drauf - ▁jemand - '8' - ▁gegen - ▁überhaupt - ▁The - ▁Okay - ▁beim - ▁sage - pa - ▁dafür - vor - ▁Frau - ▁hatten - ▁drin - '6' - ▁sozusagen - iz - ▁fand - ▁Tra - folg - ▁Nach - ▁tun - ▁dein - ität - C - ▁Oder - ▁zurück - ▁Nein - po - ▁cool - ▁sowas - ▁sieht - gehen - schi - ▁Gott - ▁schnell - form - ▁ihm - ▁besser - ▁gab - wä - ▁äh - ▁Kinder - änder - ▁sollte - ▁Jo - ▁voll - ▁War - ▁kenne - ▁zwar - ▁total - ▁welche - ▁passiert - ▁Hand - fall - ▁irgendwann - ▁Problem - war - qu - fühl - ▁Wer - ▁wissen - ▁dort - ▁jeder - ca - ▁deswegen - sprech - ▁davon - ▁damals - trag - ▁nämlich - ▁Punkt - ▁Welt - ▁abge - '7' - log - ▁sogar - ▁kam - legen - ▁Moment - igkeit - ▁konnte - ▁komm - ▁gewesen - ▁anders - ▁Bi - K - ▁eigene - ▁liebe - ▁Teil - ▁Lo - ▁toll - ▁Arbeit - ▁Seite - genommen - ▁to - ▁alt - ▁trotzdem - ▁gehört - ▁Jetzt - ▁mache - ▁Dr - ▁relativ - sicht - ▁steht - ▁Auto - ▁darüber - nehm - ▁irgendwas - ▁ohne - ▁Geld - ▁Euro - ieß - suche - ▁vier - einander - ▁Grund - ▁Gefühl - gestellt - ▁sa - ativ - G - ▁darauf - I - ▁All - ▁Anfang - ▁darf - ▁Freund - ▁direkt - ▁irgendwo - ▁letzte - ▁schlecht - ▁manchmal - ▁Bild - ▁Geschichte - ▁interessant - E - ▁komplett - ▁Ahnung - bringen - nutz - bild - ▁frag - V - ▁Kind - ▁meisten - ▁gehabt - ▁gedacht - 
▁erstmal - ▁fast - ▁stimmt - '3' - laufen - ▁bestimmt - zahl - ▁Über - kommt - gegangen - setzen - ▁funktioniert - ▁spielen - ▁Person - ▁Sinn - ▁dachte - ▁fünf - ▁hoch - bereit - ▁brauche - ▁zwischen - ▁Spaß - ▁spannend - ▁ehrlich - ▁krass - ▁schreib - ▁zumindest - zeug - ▁Musik - W - fahr - ▁solche - ▁Deutschland - ▁gespielt - geschrieben - Ä - ▁später - Y - O - H - '!' - U - N - Q - Ö - X - Z - J - '%' - Ü - é - « - » - '&' - à - à - ş - q - ¤ - Ÿ - € - è - ı - ç - ú - ë - ¶ - á - ć - — - õ - ğ - í - ° - ô - _ - ó - / - å - $ - ́ - û - › - ê - ‹ - '"' - ñ - Ş - č - ) - É - μ - ø - š - о - ł - ù - ã - ā - © - а - ':' - е - œ - и - н - â - î - т - ń - р - к - 你 - æ - „ - Č - с - ♪ - д - Š - в - ï - İ - л - À - у - ь - я - м - ę - ś - ž - п - '=' - ō - ř - Æ - ш - з - ы - ū - ș - Ø - '~' - ì - ò - ο - ч - г - ý - ̄ - ц - Х - ż - З - б - ¡ - Н - ă - ̃ - К - ж - ไ - ồ - ♫ - ر - х - ン - Ç - § - ⁄ - + - '*' - Å - і - Á - ī - џ - ู - ; - '>' - Î - ą - Đ - Ȗ - Ε - έ - δ - ι - λ - ς - τ - υ - ύ - О - Т - و - ک - ں - ด - ม - ่ - ṣ - “ - ♥ - き - つ - ぶ - ら - チ - ッ - ホ - ロ - 中 - 以 - 佢 - 利 - 厲 - 句 - 可 - 吃 - 国 - 士 - 好 - 安 - 害 - 度 - 手 - 晃 - 法 - Ć - ě - Б - ج - 救 - ά - – - ダ - 制 - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: bpe bpemodel: data/de_token_list/bpe_unigram1024/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: n_fft: 512 win_length: 400 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_de_bpe1024/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: contextual_block_transformer encoder_conf: output_size: 256 attention_heads: 8 linear_units: 2048 num_blocks: 22 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.0 input_layer: conv2d normalize_before: true block_size: 40 hop_size: 16 look_ahead: 16 init_average: true ctx_pos_enc: true postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202211' distributed: true ``` </details>
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "has_space" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19,850
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://singularite.itch.io/pyramids 2. Write your model_id: asuzuki/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
63
null
--- license: mit tags: - generated_from_trainer datasets: - stereoset metrics: - accuracy model-index: - name: gpt2_stereoset_finetuned results: - task: name: Text Classification type: text-classification dataset: name: stereoset type: stereoset config: intersentence split: validation args: intersentence metrics: - name: Accuracy type: accuracy value: 0.7087912087912088 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_stereoset_finetuned This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the stereoset dataset. It achieves the following results on the evaluation set: - Loss: 0.6545 - Accuracy: 0.7088 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.21 | 5 | 1.1855 | 0.5259 | | No log | 0.42 | 10 | 0.7056 | 0.5338 | | No log | 0.62 | 15 | 0.7009 | 0.5400 | | No log | 0.83 | 20 | 0.7230 | 0.5173 | | No log | 1.04 | 25 | 0.6666 | 0.5989 | | No log | 1.25 | 30 | 0.6812 | 0.5699 | | No log | 1.46 | 35 | 0.6479 | 0.6272 | | No log | 1.67 | 40 | 0.6323 | 0.6484 | | No log | 1.88 | 45 | 0.6306 | 0.6515 | | No log | 2.08 | 50 | 0.6474 | 0.6633 | | No log | 2.29 | 55 | 0.6158 | 0.6641 | | No log | 2.5 | 60 | 0.6059 | 0.6703 | | No log | 2.71 | 65 | 0.6151 | 0.6695 | | No log | 2.92 | 70 | 0.5860 | 0.6782 | | No log | 3.12 | 75 | 0.5808 | 0.6907 | | No log | 3.33 | 80 | 0.5953 | 0.6915 | | No log | 3.54 | 85 | 0.5860 | 0.6994 | | No log | 3.75 | 90 | 0.5918 | 0.6947 | | No log | 3.96 | 95 | 0.5915 | 0.6797 | | No log | 4.17 | 100 | 0.5779 | 0.7041 | | No log | 4.38 | 105 | 0.5902 | 0.7151 | | No log | 4.58 | 110 | 0.5740 | 0.7080 | | No log | 4.79 | 115 | 0.5640 | 0.7088 | | No log | 5.0 | 120 | 0.5786 | 0.6947 | | No log | 5.21 | 125 | 0.5892 | 0.6978 | | No log | 5.42 | 130 | 0.5722 | 0.7096 | | No log | 5.62 | 135 | 0.5743 | 0.7064 | | No log | 5.83 | 140 | 0.5873 | 0.7057 | | No log | 6.04 | 145 | 0.5915 | 0.7033 | | No log | 6.25 | 150 | 0.5978 | 0.7009 | | No log | 6.46 | 155 | 0.6034 | 0.6931 | | No log | 6.67 | 160 | 0.5908 | 0.7111 | | No log | 6.88 | 165 | 0.5954 | 0.6947 | | No log | 7.08 | 170 | 0.5882 | 0.7033 | | No log | 7.29 | 175 | 0.5895 | 0.7151 | | No log | 7.5 | 180 | 0.6077 | 0.7104 | | No log | 7.71 | 185 | 0.6121 | 0.7151 | | No log | 7.92 | 190 | 0.6086 | 0.7151 | | No log | 8.12 | 195 | 0.6182 | 0.7127 | | No log | 8.33 | 200 | 0.6412 | 0.7072 | | No log | 8.54 | 205 | 0.6425 | 0.7049 | | No log | 8.75 | 210 | 0.6369 | 0.7135 | | No log | 8.96 | 215 | 0.6405 | 0.7111 | | No log | 9.17 | 220 | 0.6431 | 0.7135 | | No log | 9.38 | 225 | 0.6474 | 0.7127 | | No log | 9.58 | 230 | 0.6595 | 0.7041 | | No log | 9.79 | 235 | 0.6580 | 0.7041 | | No log | 10.0 | 240 | 0.6545 | 0.7088 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
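The card above stops at the training log; as a usage sketch that is not part of the original card, the fine-tuned classifier can be loaded with the standard `transformers` pipeline. The repo id below is a placeholder, and how the stereoset context/continuation pair was packed into a single input during fine-tuning is an assumption.

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub repository of this fine-tune.
classifier = pipeline("text-classification", model="<user>/gpt2_stereoset_finetuned")

# The intersentence task scores a context followed by a candidate continuation;
# here the two sentences are simply concatenated into one input string.
print(classifier("I met my new neighbour yesterday. She was very friendly and helpful."))
```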
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,862
null
--- license: creativeml-openrail-m language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - safetensors - diffusers - artwork - HDR photography - safetensors - photos inference: true --- # Foto Assisted Diffusion (FAD)_V0 This model is meant to mimic a modern HDR photography style It was trained on 600 HDR images on SD1.5 and works best at **768x768** resolutions Merged with one of my own models for illustrations and drawings, to increase flexibility # Features: * **No additional licensing** * **Multi-resolution support** * **HDR photographic outputs** * **No Hi-Res fix required** * [**Spreadsheet with supported resolutions, keywords for prompting and other useful hints/tips**](https://docs.google.com/spreadsheets/d/1RGRLZhgiFtLMm5Pg8qK0YMc6wr6uvj9-XdiFM877Pp0/edit#gid=364842308) # Example Cards: Below you will find some example cards that this model is capable of outputting. You can acquire the images used here: [HF](https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/tree/main/Model%20Examples) or [Google Drive](https://docs.google.com/spreadsheets/d/1RGRLZhgiFtLMm5Pg8qK0YMc6wr6uvj9-XdiFM877Pp0/edit#gid=364842308). Google Drive gives you them all at once without needing to clone the repo, which is easier. If you decide to clone it, set ``` GIT_LFS_SKIP_SMUDGE=1 ``` to skip downloading large files Place them into an EXIF viewer such as the built in "PNG Info" tab in the popular Auto1111 repository to quickly copy the parameters and replicate them! ## 768x768 Food <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/768x768%20Food.jpg" style="max-width: 800px;" width="100%"/> ## 768x768 Landscapes <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/768x768%20Landscapes.jpg" style="max-width: 800px;" width="100%"/> ## 768x768 People <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/768x768%20People.jpg" style="max-width: 800px;" width="100%"/> ## 768x768 Random <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/768x768%20Random.jpg" style="max-width: 800px;" width="100%"/> ## 512x512 Artwork <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/512x512%20Artwork.jpg" style="max-width: 800px;" width="100%"/> ## 512x512 Photos <img src="https://huggingface.co/Dunkindont/Foto-Assisted-Diffusion-FAD_V0/resolve/main/512x512%20Photo.jpg" style="max-width: 800px;" width="100%"/> ## Cloud Support Sinkin kindly hosted our model. [Click here to run it on the cloud](https://sinkin.ai/m/V6vYoaL)! ## License *My motivation for making this model was to have a free, non-restricted model for the community to use and for startups.* *I was noticing the models people gravitated towards, were merged models which had prior license requirements from the people who trained them.* *This was just a fun project I put together for you guys.* *My fun ended when I posted the results :D* *Enjoy! Sharing is caring :)*
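As a usage sketch (not part of the original card): assuming the repository ships diffusers-format weights, the model can be loaded like any Stable Diffusion 1.x checkpoint; the prompt is only an example, and the 768x768 size follows the card's recommendation.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id taken from the image links in the card; assumes diffusers-format weights are present.
pipe = StableDiffusionPipeline.from_pretrained(
    "Dunkindont/Foto-Assisted-Diffusion-FAD_V0", torch_dtype=torch.float16
).to("cuda")

# The card recommends generating at 768x768.
image = pipe("HDR photo of a mountain lake at sunrise", height=768, width=768).images[0]
image.save("fad_example.png")
```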
CAMeL-Lab/bert-base-arabic-camelbert-msa-eighth
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
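Since the card omits an inference example, here is a minimal extractive question-answering sketch; the repo id is a placeholder because the card does not state where the checkpoint is published.

```python
from transformers import pipeline

# Placeholder repo id; point this at the actual Hub repository of the checkpoint.
qa = pipeline("question-answering", model="<user>/bert-finetuned-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```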
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: gatardochi/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
229
null
It's The_lovely_quality_kahlua_flavour (aka tlqkf)
CAMeL-Lab/bert-base-arabic-camelbert-msa-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1629 - F1: 0.8584 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2904 | 1.0 | 715 | 0.1823 | 0.8286 | | 0.1446 | 2.0 | 1430 | 0.1626 | 0.8488 | | 0.0941 | 3.0 | 2145 | 0.1629 | 0.8584 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
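A short inference sketch for this token-classification fine-tune (not part of the original card); the repo id is a placeholder and the label scheme is assumed to be standard NER-style tags.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual Hub repository of this fine-tune.
ner = pipeline(
    "token-classification",
    model="<user>/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Angela Merkel besuchte Paris im Mai."))
```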
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- language: en tags: - bridgetower - gaudi license: mit datasets: - conceptual_captions - conceptual_12m - sbu_captions - visual_genome - mscoco_captions --- # BridgeTower large-itm-mlm-itc model The BridgeTower model was proposed in "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The model was pretrained on the English language using masked language modeling (MLM) and image text matching (ITM) objectives. It was introduced in [this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in [this repository](https://github.com/microsoft/BridgeTower). BridgeTower was accepted at [AAAI'23](https://aaai.org/Conferences/AAAI-23/). ## Model description The abstract from the paper is the following: Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
## Intended uses & limitations ### How to use Here is how to use this model to perform contrastive learning between image and text pairs: ```python from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning import requests from PIL import Image import torch image_urls = [ "https://farm4.staticflickr.com/3395/3428278415_81c3e27f15_z.jpg",    "http://images.cocodataset.org/val2017/000000039769.jpg"] texts = [ "two dogs in a car", "two cats sleeping on a couch"] images = [Image.open(requests.get(url, stream=True).raw) for url in image_urls] processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm") model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") inputs  = processor(images, texts, padding=True, return_tensors="pt") outputs = model(**inputs) inputs  = processor(images, texts[::-1], padding=True, return_tensors="pt") outputs_swapped = model(**inputs) print('Loss', outputs.loss.item()) # Loss 0.00191505195107311 print('Loss with swapped images', outputs_swapped.loss.item()) # Loss with swapped images 2.1259872913360596 ``` Here is how to use this model to perform image and text matching ```python from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval import requests from PIL import Image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi") model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi") # forward pass scores = dict() for text in texts: # prepare inputs encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs.logits[0,1].item() ``` Here is how to use this model to perform masked language modeling: ```python from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000360943.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") text = "a <mask> looking out of the window" processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi") model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi") # prepare inputs encoding = processor(image, text, return_tensors="pt") # forward pass outputs = model(**encoding) results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) print(results) #.a cat looking out of the window. ``` ## Training data The BridgeTower model was pretrained on four public image-caption datasets: - [Conceptual Captions (CC3M)](https://ai.google.com/research/ConceptualCaptions/) - [Conceptual 12M (CC12M)](https://github.com/google-research-datasets/conceptual-12m) - [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/) - [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf) - [Visual Genome](https://visualgenome.org/) The total number of unique images in the combined data is around 14M. ## Training procedure ### Pretraining The model was pre-trained for 10 epochs on an Intel AI supercomputing cluster using 512 Gaudis and 128 Xeons with a batch size of 2048. The optimizer used was AdamW with a learning rate of 1e-7. No data augmentation was used except for center-crop. 
The image resolution in pre-training is set to 294 x 294. ## Evaluation results Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other downstream tasks. ### BibTeX entry and citation info ```bibtex @article{xu2022bridge, title={BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning}, author={Xu, Xiao and Wu, Chenfei and Rosenman, Shachar and Lal, Vasudev and Che, Wanxiang and Duan, Nan}, journal={arXiv preprint arXiv:2206.08657}, year={2022} } ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-sixteenth
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - f1 - recall - precision model-index: - name: vit-base-patch16-224-in21k-weather-images-classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: data split: train args: data metrics: - name: Accuracy type: accuracy value: 0.9339762611275965 language: - en pipeline_tag: image-classification --- # vit-base-patch16-224-in21k-weather-images-classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2255 - Accuracy: 0.9340 - Weighted f1: 0.9341 - Micro f1: 0.9340 - Macro f1: 0.9372 - Weighted recall: 0.9340 - Micro recall: 0.9340 - Macro recall: 0.9354 - Weighted precision: 0.9347 - Micro precision: 0.9340 - Macro precision: 0.9398 ## Model description This is a classification model of weather images. For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Weather%20Images/Weather_Images_ViT.ipynb ## Intended uses & limitations This model is intended to demonstrate my ability to solve a complex problem using technology. ## Training and evaluation data Dataset Source: https://www.kaggle.com/datasets/jehanbhathena/weather-dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:| | 2.4333 | 1.0 | 337 | 0.3374 | 0.9036 | 0.9028 | 0.9036 | 0.9080 | 0.9036 | 0.9036 | 0.9002 | 0.9088 | 0.9036 | 0.9234 | | 0.4422 | 2.0 | 674 | 0.2504 | 0.9228 | 0.9226 | 0.9228 | 0.9285 | 0.9228 | 0.9228 | 0.9273 | 0.9248 | 0.9228 | 0.9318 | | 0.1051 | 3.0 | 1011 | 0.2255 | 0.9340 | 0.9341 | 0.9340 | 0.9372 | 0.9340 | 0.9340 | 0.9354 | 0.9347 | 0.9340 | 0.9398 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.8.0 - Tokenizers 0.12.1
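For completeness, a short inference sketch that is not part of the original card; the repo id is an assumption based on the author's GitHub handle and the model name, so adjust it if the actual repository differs.

```python
from PIL import Image
from transformers import pipeline

# Assumed repo id (author handle + model name); verify against the actual repository.
classifier = pipeline(
    "image-classification",
    model="DunnBC22/vit-base-patch16-224-in21k-weather-images-classification",
)

image = Image.open("some_weather_photo.jpg")
print(classifier(image))
```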
CAMeL-Lab/bert-base-arabic-camelbert-msa
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,967
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: gatardochi/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
CAUKiel/JavaBERT
[ "pytorch", "safetensors", "bert", "fill-mask", "code", "arxiv:2110.10404", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8766666666666667 - name: F1 type: f1 value: 0.8786885245901639 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3251 - Accuracy: 0.8767 - F1: 0.8787 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.13.2
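A minimal inference sketch for this sentiment classifier (not part of the original card); the repo id is a placeholder since the card does not name the Hub repository.

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual repository of this fine-tune.
sentiment = pipeline("text-classification", model="<user>/finetuning-sentiment-model-3000-samples")

print(sentiment("This movie was a complete waste of time."))
print(sentiment("One of the best films I have seen this year!"))
```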
CLTL/gm-ner-xlmrbase
[ "pytorch", "tf", "xlm-roberta", "token-classification", "nl", "transformers", "dighum", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2023-02-11T01:27:39Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 263.71 +/- 15.93 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the repo id and checkpoint filename below are placeholders; check the repository files for the actual names): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
CLTL/icf-domains
[ "pytorch", "roberta", "nl", "transformers", "license:mit", "text-classification" ]
text-classification
{ "architectures": [ "RobertaForMultiLabelSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks woman tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - juye/_output These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on the prompt "a photo of sks woman" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
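A loading sketch that is not part of the original card, assuming the weights are stored in the diffusers attention-processor format that the DreamBooth LoRA training script produces:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model named in the card; the LoRA weights are applied on top of it.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Repo id from the card title; assumes diffusers-format LoRA attention weights.
pipe.unet.load_attn_procs("juye/_output")

image = pipe("a photo of sks woman", num_inference_steps=30).images[0]
image.save("lora_example.png")
```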
CLTL/icf-levels-adm
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
2023-02-11T01:35:52Z
--- tags: - zero-shot-image-classification - clip license: mit library_name: open_clip pipeline_tag: zero-shot-image-classification --- # Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) # Model Details ## Model Description A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip). The models utilize: * the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower * a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models * a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768). This 320x320 resolution model is a soup (weight average) of 3 fine-tunes of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It is an average of 3 fine-tunes from the final checkpoint of the original 256x256 training run w/ an additional ~2-3B samples for each fine-tune and a lower learning rate. Each fine-tune was a different learning rate (1e-4, 6e-5, 5e-5), and diff # of samples (3.2B, 2B, 2.5B). At 320x320, the ConvNext-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. L/14-336 model is 2.5x more GMAC, 2.8x more activations, and 1.22x more parameters. | Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) | | ----- | ------- | ---------- | ------------ | --------- | | [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 | | [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 | | [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 | RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and asthetic score filtering. Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. 
Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. ## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below. # Training Details ## Training Data This model was trained with LAION-2B -- A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. 
## Training Procedure All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples for a total of ~2-3B samples seen over each fine-tune. For 320x320 models, a slurm script with srun below was used on 64 8-GPU (A100 40GB) nodes (Stability). ``` /opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \ --save-frequency 1 \ --name "convnext_large_320" \ --pretrained "/runs/convnext_large_256/epoch_128.pt" \ --resume 'latest' \ --train-data="pipe:aws s3 cp s3://mybucket/path/laion{00000..xxxxx}.tar -" \ --train-num-samples 203666042 \ --dataset-type webdataset \ --precision amp_bfloat16 \ --beta2 0.98 \ --warmup 2000 \ --batch-size=256 \ --epochs=12 \ --dataset-resampled \ --aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \ --clip-grad-norm 5.0 \ --lr 5e-5 \ --workers=6 \ --model "convnext_large_d_320" \ --seed 0 \ --ddp-static-graph \ --local-loss \ --gather-with-grad \ --grad-checkpointing ``` # Evaluation Evaluation was done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification and COCO and Flickr for retrieval. ## Results The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k. Zero-shot curve of the original from-scratch 256x256 training: ![](convnext_large_zero_shot.png) An initial round of benchmarks has been performed on a wider range of datasets, viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model. 
# Citation **BibTeX:** LAION-5B ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` OpenCLIP software ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ``` @InProceedings{pmlr-v162-wortsman22a, title = {Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time}, author = {Wortsman, Mitchell and Ilharco, Gabriel and Gadre, Samir Ya and Roelofs, Rebecca and Gontijo-Lopes, Raphael and Morcos, Ari S and Namkoong, Hongseok and Farhadi, Ali and Carmon, Yair and Kornblith, Simon and Schmidt, Ludwig}, booktitle = {Proceedings of the 39th International Conference on Machine Learning}, pages = {23965--23998}, year = {2022}, editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan}, volume = {162}, series = {Proceedings of Machine Learning Research}, month = {17--23 Jul}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v162/wortsman22a/wortsman22a.pdf}, url = {https://proceedings.mlr.press/v162/wortsman22a.html} } ``` OpenAI CLIP paper ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @Article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
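The card above does not include an inference snippet; a minimal zero-shot classification sketch with OpenCLIP follows, assuming the checkpoint loads through the `hf-hub:` syntax under the name used in the card's comparison table.

```python
import torch
from PIL import Image
import open_clip

# Model name taken from the card; loaded directly from the Hugging Face Hub.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup"
)
tokenizer = open_clip.get_tokenizer(
    "hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup"
)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probabilities over the three candidate captions
```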
CLTL/icf-levels-att
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### mriverdb Dreambooth model trained by Patrickrpds with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
CLTL/icf-levels-ber
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
2023-02-11T01:37:59Z
--- tags: - zero-shot-image-classification - clip license: mit library_name: open_clip pipeline_tag: zero-shot-image-classification --- # Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) # Model Details ## Model Description A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip). The models utilize: * the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower * a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models * a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768). This 320x320 resolution model is a fine-tune of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It was fine-tune from the final checkpoint of the original 256x256 training run w/ an additional ~2.5B samples and a lower learning rate. At 320x320, the ConvNext-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. L/14-336 model is 2.5x more GMAC, 2.8x more activations, and 1.22x more parameters. | Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) | | ----- | ------- | ---------- | ------------ | --------- | | [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 | | [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 | | [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 | RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and asthetic score filtering. Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. 
## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below. # Training Details ## Training Data This model was trained with LAION-2B -- A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples for a total of ~2-3B samples seen over fine-tune. For 320x320 models, a slurm script w/ srun below was used on 64 8-GPU (A100 40GB) nodes (Stability). 
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
    --save-frequency 1 \
    --name "convnext_large_320" \
    --pretrained "/runs/convnext_large_256/epoch_128.pt" \
    --resume 'latest' \
    --train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \
    --train-num-samples 203666042 \
    --dataset-type webdataset \
    --precision amp_bfloat16 \
    --beta2 0.98 \
    --warmup 2000 \
    --batch-size=256 \
    --epochs=12 \
    --dataset-resampled \
    --aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \
    --clip-grad-norm 5.0 \
    --lr 5e-5 \
    --workers=6 \
    --model "convnext_large_d_320" \
    --seed 0 \
    --ddp-static-graph \
    --local-loss \
    --gather-with-grad \
    --grad-checkpointing
```

# Evaluation

Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).

## Testing Data, Factors & Metrics

### Testing Data

The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification, and COCO and Flickr for retrieval.

## Results

The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k.

Zero-shot curve of the original from-scratch 256x256 training:

![](convnext_large_zero_shot.png)

An initial round of benchmarks has been performed on a wider range of datasets, viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb

# Acknowledgements

Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model.

# Citation

**BibTeX:**

LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
  title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
  author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```

OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
  author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title = {OpenCLIP},
  month = jul,
  year = 2021,
  note = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version = {0.1},
  doi = {10.5281/zenodo.5143773},
  url = {https://doi.org/10.5281/zenodo.5143773}
}
```

OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A.
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @Article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
CLTL/icf-levels-enr
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.844885598923284 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2676 - F1: 0.8449 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5915 | 1.0 | 191 | 0.3285 | 0.7814 | | 0.2651 | 2.0 | 382 | 0.2707 | 0.8314 | | 0.174 | 3.0 | 573 | 0.2676 | 0.8449 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
CLTL/icf-levels-fac
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- tags: - vision - image-classification widget: - src: https://pict.sindonews.net/dyn/620/content/2015/02/24/149/968209/bekasi-kota-terkotor-keempat-di-jawa-barat-hAh-thumb.jpg example_title: Kotor - src: https://asset-a.grid.id//crop/0x0:0x0/360x240/photo/2021/11/15/jalan-raya-minjpg-20211115071051.jpg example_title: Bersih co2_eq_emissions: emissions: 1.5317579633796956 --- ## Validation Metrics - Loss: 0.004 - Accuracy: 1.000 - Precision: 1.000 - Recall: 1.000 - AUC: 1.000 - F1: 1.000
CLTL/icf-levels-mbw
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
---
language:
- ja
---
Mrs. Hudson

- keyword: hudson
- Sample Prompt: best quality,furry female, hudson, 1girl, solo, apron, smile, door, dress, white apron, long sleeves, indoors, pink dress, looking at viewer, dog girl, closed mouth, short hair, tail, bangs, furry, maid <lora:hudson-epoch08:1>
- Negative Prompt: (worst quality,low quality:1.4),bad anatomy,3d,(animal tail,dog tail:1.4)
- ![Sample_image](https://huggingface.co/nanashisan/Sherlock-Hound/resolve/main/hudson/hudson-epoch08.png)
- ![Sample_image](https://huggingface.co/nanashisan/Sherlock-Hound/resolve/main/hudson/hudson_sampkle_xy.png)

Model

- hudson-epoch08.safetensors
  - Recommended: trained to an appropriate degree; the sample images were generated with this version
- hudson-epoch06.safetensors
  - Undertrained: may render a human face instead of a dog face
- hudson-epoch10.safetensors
  - Overtrained: image quality is slightly degraded
CNT-UPenn/Bio_ClinicalBERT_for_seizureFreedom_classification
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8205546492659054 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2467 - F1: 0.8206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7897 | 1.0 | 70 | 0.3096 | 0.7519 | | 0.2819 | 2.0 | 140 | 0.2603 | 0.8093 | | 0.1818 | 3.0 | 210 | 0.2467 | 0.8206 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
CNT-UPenn/RoBERTa_for_seizureFrequency_QA
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-02-11T02:29:24Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base instance_prompt: pablo tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - pablo-pic These are LoRA adaption weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "pablo" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. Test prompt: picture of pablo ![image_0](test_images/image_0.png) ![image_1](test_images/image_1.png) ![image_2](test_images/image_2.png) ![image_3](test_images/image_3.png)
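A hedged loading sketch (not part of the original card): `"your-username/pablo-pic"` is a placeholder, since the card only gives the run name "pablo-pic" and not the full repo id, and it assumes a recent `diffusers` release (with `load_lora_weights`) plus a CUDA GPU.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model named in the card; fp16 on GPU is an assumption, not a requirement.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Placeholder repo id for the LoRA weights trained with DreamBooth.
pipe.load_lora_weights("your-username/pablo-pic")

# Instance prompt from the card.
image = pipe("picture of pablo", num_inference_steps=30).images[0]
image.save("pablo.png")
```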
Calamarii/calamari
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
---

## GPT2 fine-tuned with COVID-19 question and answer pairs using Reinforcement Learning with Human Feedback (RLHF) and Proximal Policy Optimization (PPO).

Uses PPO and the TRL library to align the response toward the expected response, based on BERTScore.

You can ask the model any question related to COVID-19 in this format:

**question: should i wear a mask at home?\nanswer:**

You can also add a CTRL token as a special prefix token in front of your prompt to align it to your preference. For example, you add either a [good] or a [bad] token as a prefix.

**[good]question: should i wear a mask at school?\nanswer:**

Good and bad here refer to the quality of the response: a response that is more aligned with the expected (ground truth) response is good, and one that is less aligned is bad.
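A minimal generation sketch using the prompt format above (not part of the original card): `"your-username/gpt2-covid19-rlhf"` is a placeholder, as the card does not state the repository name, and the sampling settings are arbitrary.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "your-username/gpt2-covid19-rlhf"  # placeholder, replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format from the card: optional [good]/[bad] control prefix, then "question: ...\nanswer:"
prompt = "[good]question: should i wear a mask at school?\nanswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```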
Cameron/BERT-jigsaw-identityhate
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base instance_prompt: break bad tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - walter-white These are LoRA adaption weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "break bad" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. Test prompt: break bad ![image_0](test_images/image_0.png) ![image_1](test_images/image_1.png) ![image_2](test_images/image_2.png) ![image_3](test_images/image_3.png)
Cameron/BERT-mdgender-convai-ternary
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38
null
--- language: - pt license: apache-2.0 tags: - toxicity - portuguese - hate speech - offensive language - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: dougtrajano/toxic-comment-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dougtrajano/toxic-comment-classification This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the OLID-BR dataset. It achieves the following results on the evaluation set: - Loss: 0.4102 - Accuracy: 0.8547 - F1: 0.8549 - Precision: 0.8669 - Recall: 0.8547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.255788747459486e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1993 - optimizer: Adam with betas=(0.8445637934160373,0.8338816842140165) and epsilon=2.527092625455385e-08 - lr_scheduler_type: linear - num_epochs: 30 - label_smoothing_factor: 0.07158711257743958 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.4465 | 1.0 | 1408 | 0.4102 | 0.8547 | 0.8549 | 0.8669 | 0.8547 | | 0.3839 | 2.0 | 2816 | 0.4814 | 0.8509 | 0.8497 | 0.8532 | 0.8509 | | 0.3945 | 3.0 | 4224 | 0.6362 | 0.8002 | 0.7918 | 0.8258 | 0.8002 | | 0.3643 | 4.0 | 5632 | 0.4961 | 0.8248 | 0.8211 | 0.8349 | 0.8248 | | 0.3345 | 5.0 | 7040 | 0.5267 | 0.8528 | 0.8532 | 0.8570 | 0.8528 | | 0.3053 | 6.0 | 8448 | 0.5902 | 0.8002 | 0.7911 | 0.8292 | 0.8002 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.10.2+cu113 - Datasets 2.9.0 - Tokenizers 0.13.2
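A minimal inference sketch (not part of the original card): it assumes the fine-tuned checkpoint is published under the repo id shown in the card title and that `transformers` is installed; the input sentence is a placeholder.

```python
from transformers import pipeline

# Repo id taken from the card title; the example text is a placeholder in Portuguese.
classifier = pipeline(
    "text-classification",
    model="dougtrajano/toxic-comment-classification",
)
print(classifier("Exemplo de comentário em português."))
```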
Cameron/BERT-mdgender-wizard
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- language: - pt license: apache-2.0 tags: - toxicity - portuguese - hate speech - offensive language - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: dougtrajano/toxicity-type-detection results: [] datasets: - dougtrajano/olid-br library_name: transformers --- # dougtrajano/toxicity-type-detection Toxicity Type Detection is a model that predicts the type(s) of toxicity(s) in a given text. Toxicity Labels: `health`, `ideology`, `insult`, `lgbtqphobia`, `other_lifestyle`, `physical_aspects`, `profanity_obscene`, `racism`, `sexism`, `xenophobia` This BERT model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the [OLID-BR dataset](https://huggingface.co/datasets/dougtrajano/olid-br). ## Overview **Input:** Text in Brazilian Portuguese **Output:** Multilabel classification (toxicity types) ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("dougtrajano/toxicity-type-detection") model = AutoModelForSequenceClassification.from_pretrained("dougtrajano/toxicity-type-detection") ``` ## Limitations and bias The following factors may degrade the model’s performance. **Text Language**: The model was trained on Brazilian Portuguese texts, so it may not work well with Portuguese dialects. **Text Origin**: The model was trained on texts from social media and a few texts from other sources, so it may not work well on other types of texts. ## Trade-offs Sometimes models exhibit performance issues under particular circumstances. In this section, we'll discuss situations in which you might discover that the model performs less than optimally, and should plan accordingly. **Text Length**: The model was fine-tuned on texts with a word count between 1 and 178 words (average of 18 words). It may give poor results on texts with a word count outside this range. ## Performance The model was evaluated on the test set of the [OLID-BR](https://dougtrajano.github.io/olid-br/) dataset. **Accuracy:** 0.4214 **Precision:** 0.8180 **Recall:** 0.7230 **F1-Score:** 0.7645 | Label | Precision | Recall | F1-Score | Support | | :---: | :-------: | :----: | :------: | :-----: | | `health` | 0.3182 | 0.1795 | 0.2295 | 39 | | `ideology` | 0.6820 | 0.6842 | 0.6831 | 304 | | `insult` | 0.9689 | 0.8068 | 0.8805 | 1351 | | `lgbtqphobia` | 0.8182 | 0.5870 | 0.6835 | 92 | | `other_lifestyle` | 0.4242 | 0.4118 | 0.4179 | 34 | | `physical_aspects` | 0.4324 | 0.5783 | 0.4948 | 83 | | `profanity_obscene` | 0.7482 | 0.7509 | 0.7496 | 562 | | `racism` | 0.4737 | 0.3913 | 0.4286 | 23 | | `sexism` | 0.5132 | 0.3391 | 0.4084 | 115 | | `xenophobia` | 0.3333 | 0.4375 | 0.3784 | 32 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.044186985160909e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1993 - optimizer: Adam with betas=(0.9339215524915885,0.9916979096990963) and epsilon=3.4435900142455904e-07 - lr_scheduler_type: linear - num_epochs: 30 ### Framework versions - Transformers 4.26.0 - Pytorch 1.10.2+cu113 - Datasets 2.9.0 - Tokenizers 0.13.2 ## Provide Feedback If you have any feedback on this model, please [open an issue](https://github.com/DougTrajano/ToChiquinho/issues/new) on GitHub.
Capreolus/birch-bert-large-car_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-02-11T04:22:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-finetuned-t5-Thor4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-t5-Thor4 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5607 - Rouge1: 30.1917 - Rouge2: 17.6334 - Rougel: 26.8513 - Rougelsum: 28.7606 - Gen Len: 18.9881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.9251 | 1.0 | 675 | 1.6082 | 29.3372 | 16.9607 | 26.1096 | 27.9357 | 18.9874 | | 1.763 | 2.0 | 1350 | 1.5696 | 30.1869 | 17.5627 | 26.8425 | 28.7413 | 18.9881 | | 1.7139 | 3.0 | 2025 | 1.5607 | 30.1917 | 17.6334 | 26.8513 | 28.7606 | 18.9881 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Capreolus/electra-base-msmarco
[ "pytorch", "tf", "electra", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
{ "architectures": [ "ElectraForSequenceClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
110
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: local_test_model_with_local_dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # local_test_model_with_local_dataset This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5566 - Wer: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 10.0 | 10 | 3.4660 | 85.7143 | | No log | 20.0 | 20 | 0.7373 | 10.7143 | | 3.3998 | 30.0 | 30 | 0.5920 | 0.0 | | 3.3998 | 40.0 | 40 | 0.5566 | 0.0 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Captain-1337/CrudeBERT
[ "pytorch", "bert", "text-classification", "arxiv:1908.10063", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
2023-02-11T04:45:49Z
Trigger word: ganimata

![grid-0591.png](https://s3.amazonaws.com/moonup/production/uploads/1676090917228-63e120f555a963bd08666ce3.png)
Carlork314/Xd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PP0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -1041.43 +/- 512.06 name: mean_reward verified: false --- # **PP0** Agent playing **LunarLander-v2** This is a trained model of a **PP0** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
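A hedged sketch of what the TODO usage section might look like: the repo id and filename below are placeholders (the card does not state them), and it assumes `stable-baselines3`, `huggingface_sb3`, and `gymnasium` with Box2D are installed.

```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub; both arguments are placeholders.
checkpoint = load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",   # placeholder
    filename="ppo-LunarLander-v2.zip",            # placeholder
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)
```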
CarlosPR/mt5-spanish-memmories-analysis
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: restaurant_test_at_local results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # restaurant_test_at_local This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5402 - Wer: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - training_steps: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 10.0 | 10 | 3.7743 | 100.0 | | No log | 20.0 | 20 | 0.6750 | 52.6316 | | 3.6042 | 30.0 | 30 | 0.5724 | 0.0 | | 3.6042 | 40.0 | 40 | 0.5402 | 0.0 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.11.0+cu115 - Datasets 2.9.0 - Tokenizers 0.13.2
dccuchile/albert-tiny-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
# Twitter-scratch-roBERTa-base This is a RoBERTa-base model trained from scratch on ~58M tweets, as described and evaluated in the [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). To evaluate this and other LMs on Twitter-specific data, please refer to the [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval). ## Preprocess Text Replace usernames and links for placeholders: "@user" and "http". ```python def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) ``` ## Example Masked Language Model ```python from transformers import pipeline, AutoTokenizer import numpy as np MODEL = "cardiffnlp/twitter-scratch-roberta-base" fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL) tokenizer = AutoTokenizer.from_pretrained(MODEL) def print_candidates(): for i in range(5): token = tokenizer.decode(candidates[i]['token']) score = np.round(candidates[i]['score'], 4) print(f"{i+1}) {token} {score}") texts = [ "I am so <mask> 😊", "I am so <mask> 😢" ] for text in texts: t = preprocess(text) print(f"{'-'*30}\n{t}") candidates = fill_mask(t) print_candidates() ``` Output: ``` ------------------------------ I am so <mask> 😊 1) happy 0.530 2) grateful 0.083 3) excited 0.078 4) thankful 0.053 5) blessed 0.041 ------------------------------ I am so <mask> 😢 1) sad 0.439 2) sorry 0.088 3) tired 0.045 4) hurt 0.026 5) upset 0.026 ``` ### BibTeX entry and citation info Please cite the [reference paper](https://aclanthology.org/2020.findings-emnlp.148/) if you use this model. ```bibtex @inproceedings{barbieri-etal-2020-tweeteval, title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification", author = "Barbieri, Francesco and Camacho-Collados, Jose and Espinosa Anke, Luis and Neves, Leonardo", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.findings-emnlp.148", doi = "10.18653/v1/2020.findings-emnlp.148", pages = "1644--1650" } ```
dccuchile/albert-tiny-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
393
2023-02-11T06:57:14Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.48 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# load_from_hub is the helper defined in the Deep RL Course notebook
# (it downloads the pickled Q-table from the Hub).
model = load_from_hub(repo_id="kaliputra/q-Taxi-v3-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2023-02-11T07:07:16Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 39.30 +/- 29.67
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.

To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Chaewon/mnmt_decoder_en_gpt2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-v2
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# load_from_hub is the helper defined in the Deep RL Course notebook
# (it downloads the pickled Q-table from the Hub).
model = load_from_hub(repo_id="kaliputra/q-Taxi-v3-v2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Chakita/Friends
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-02-11T08:09:10Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: finetuning11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning11 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00024 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 0.0 | 0.31 | 500 | nan | 1.0 | | 0.0 | 0.61 | 1000 | nan | 1.0 | | 0.0 | 0.92 | 1500 | nan | 1.0 | | 0.0 | 1.23 | 2000 | nan | 1.0 | | 0.0 | 1.54 | 2500 | nan | 1.0 | | 0.0 | 1.84 | 3000 | nan | 1.0 | | 0.0 | 2.15 | 3500 | nan | 1.0 | | 0.0 | 2.46 | 4000 | nan | 1.0 | | 0.0 | 2.77 | 4500 | nan | 1.0 | | 0.0 | 3.07 | 5000 | nan | 1.0 | | 0.0 | 3.38 | 5500 | nan | 1.0 | | 0.0 | 3.69 | 6000 | nan | 1.0 | | 0.0 | 4.0 | 6500 | nan | 1.0 | | 0.0 | 4.3 | 7000 | nan | 1.0 | | 0.0 | 4.61 | 7500 | nan | 1.0 | | 0.0 | 4.92 | 8000 | nan | 1.0 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Chakita/KROBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-ASR results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-ASR This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7176 - Wer: 112.3086 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0201 | 6.8 | 1000 | 0.4782 | 24.0338 | | 0.0006 | 13.61 | 2000 | 0.6535 | 76.6110 | | 0.0002 | 20.41 | 3000 | 0.7004 | 102.1109 | | 0.0002 | 27.21 | 4000 | 0.7176 | 112.3086 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Clint/clinton
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - Segments - mapping - keras - object-segmentation library_name: keras pipeline_tag: image-segmentation --- Kaggle Notebook: https://www.kaggle.com/code/kmader/segmenting-buildings-in-satellite-images Dataset: https://www.kaggle.com/datasets/kmader/synthetic-word-ocr Hugging Face Space: https://huggingface.co/spaces/deprem-ml/deprem_keras-satellite_semantic_mapping-challange
CoderEFE/DialoGPT-marxbot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "has_space" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2023-02-11T11:07:43Z
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
Balor-V2は他のモデルとマージすることで描画を向上させることを目標に作成されたマージモデルです。
AIアートにおいて、服装や背景の書き込みが増えるとキャラクターの目の描写があいまいになることが多くあります。
このモデルでは、特にこの目の描写の質を保つことを目標としています。

Balor-V2 is a merged model created with the goal of improving rendering when merged with other models.
In AI art, the depiction of a character's eyes often becomes fuzzy as more detail is added to clothing and backgrounds.
This model specifically aims to preserve the quality of the eye depiction.

<img src="https://i.imgur.com/QC5223V.jpg" width="450" height="">
<img src="https://i.imgur.com/e3Bj3fq.jpg" width="450" height="">
<img src="https://i.imgur.com/pM6KZYI.png" width="450" height="">

Balor-V2は以下の配分でほかの自由に選んだモデルModelAと階層マージを行うことで、そのモデルにBalor-V2の要素を与えることが可能です。

Balor-V2 can be hierarchically merged with any other freely chosen model ModelA using the following distribution to give that model the characteristics of Balor-V2.

| Model: A | Model: B | Weight | Base alpha | Merge Name |
| --- | --- | --- | --- | --- |
| ModelA | BalorV2 | 1,1,1,0,0,0,0,1,1,1,0,0,0,1,1,0,0,0,0,0,0,0,1,1,1 | 1 | BalorV2featModelA |

配布しているものはそれぞれAbyssOrangeMix2_sfw、Counterfeit-V2.5、Evt_MをModelAに置いています。
すべての要素のマージ比率が1のため、他のモデルModelBに対してBalorV2featModelAを使用して、BalorV2featModelBを作成することが可能です。
流行のマージモデルなども使用することができます。

The distributed merges use AbyssOrangeMix2_sfw, Counterfeit-V2.5, and Evt_M as ModelA, respectively.
Since the merge ratio of all elements is 1, it is possible to create a BalorV2featModelB by using BalorV2featModelA with another model ModelB in the same way.
Trending merge models can also be used.

マージ比率の記述にあたりWarriorMama777/OrangeMixsのモデルカードを参考としました。感謝申し上げます。

I used the WarriorMama777/OrangeMixs model card as a reference for describing the merge ratios. Many thanks.
CoderEFE/DialoGPT-medium-marx
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-02-11T11:24:49Z
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - diffusers --- <p align="center"><img src="https://huggingface.co/0RisingStar0/LiveArcaMix/resolve/main/00122-1387049655-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo%2C%20cowboy%20shot))%2C%20street%2C%20apartment%2C%20building%2C%20pavement%2C%20trees%2C%20flow.png"> <img src="https://huggingface.co/0RisingStar0/LiveArcaMix/resolve/main/00123-3601874250-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo%2C%20cowboy%20shot))%2C%20street%2C%20apartment%2C%20building%2C%20pavement%2C%20trees%2C%20flow.png"> <img src="https://huggingface.co/0RisingStar0/LiveArcaMix/resolve/main/00128-3652958527-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo%2C%20cowboy%20shot))%2C%20city%2C%20(skyscrapers)%2C%20sky%2C%20wide%20street%2C%20pavement%2C%20t.png"> <img src="https://huggingface.co/0RisingStar0/LiveArcaMix/resolve/main/00135-2463548988-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo%2C%20cowboy%20shot))%2C%20city%2C%20(skyscrapers)%2C%20sky%2C%20wide%20street%2C%20pavement%2C%20t.png"></p> <center><b>LiveArcaMix for channel competition.</b></center> 1. AikimiXPv1.0 + CounterfeitV2.0 - Preset GRAD_A base_alpha : 0 => AModel 2. AModel + CounterfeitV2.5 - IN09, IN10, OUT01, OUT02 : 1, else : 0 base_alpha : 0 => BModel 3. BModel + BasilMixFixed - Preset MID12_50, base_alpha : 1 => CModel 4. CModel + Anything V4.5 - Preset FLAT_75 base_alpha : 0 => DModel 5. CounterfeitV2.5 + AikimiXPv1.0 - Preset FLAT_25 => EModel 6. DModel + EModel - Preset RING10_5 => Result(LiveArcaMix)