Dataset schema (column: type, observed range):
- repo_id: string (length 4–122)
- author: string (length 2–38)
- model_type: string (length 2–33)
- files_per_repo: int64 (2–39k)
- downloads_30d: int64 (0–33.7M)
- library: string (length 2–37)
- likes: int64 (0–4.87k)
- pipeline: string (length 5–30)
- pytorch: bool (2 classes)
- tensorflow: bool (2 classes)
- jax: bool (2 classes)
- license: string (length 2–33)
- languages: string (length 2–1.63k)
- datasets: string (length 2–2.58k)
- co2: string (length 6–258)
- prs_count: int64 (0–125)
- prs_open: int64 (0–120)
- prs_merged: int64 (0–46)
- prs_closed: int64 (0–34)
- discussions_count: int64 (0–218)
- discussions_open: int64 (0–148)
- discussions_closed: int64 (0–70)
- tags: string (length 2–513)
- has_model_index: bool (2 classes)
- has_metadata: bool (2 classes)
- has_text: bool (1 class)
- text_length: int64 (201–598k)
- readme: string (length 0–598k)
SeNSiTivE/RL-Course-Unit_2-q-Taxi-v3
SeNSiTivE
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
382
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="SeNSiTivE/RL-Course-Unit_2-q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc.) env = gym.make(model["env_id"]) ```
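The snippet above calls `load_from_hub` without defining it; in the Deep RL course it is supplied by the notebook rather than a library. A minimal sketch of such a helper, assuming the checkpoint is a pickled Python object:

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a file from the Hub and unpickle it (here, the Q-table checkpoint)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```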
EstherT/sentence-acceptability
EstherT
bert
10
11
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['classification', 'generated_from_trainer']
true
true
true
1,559
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentence-acceptability This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8257 - Accuracy: 0.8217 ## Model description This model classifies English sentences according to two different labels: 1 if the sentence is grammatically acceptable and 0 if the sentence is grammatically unacceptable. ## Training and evaluation data The model was trained on the "cola" subset of the glue dataset, using the 8551 instances of its "train" split. For the evaluation, the 1043 sentences of the "validation" split were used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4868 | 1.0 | 1069 | 0.6279 | 0.7862 | | 0.3037 | 2.0 | 2138 | 0.6184 | 0.8140 | | 0.177 | 3.0 | 3207 | 0.8257 | 0.8217 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
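The card gives no inference example; a hedged sketch using the standard `transformers` pipeline, with the label meaning taken from the description above (1 = acceptable, 0 = unacceptable):

```python
from transformers import pipeline

# The exact label names (e.g. LABEL_0 / LABEL_1) depend on the repo's config.
classifier = pipeline("text-classification", model="EstherT/sentence-acceptability")
print(classifier("The boy laughed."))      # expected: acceptable (1)
print(classifier("The boy laughed the."))  # expected: unacceptable (0)
```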
spatial/PyramidsTraining
spatial
null
16
1
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
false
true
true
834
# **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: spatial/PyramidsTraining 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
alibidaran/mt5-small-finetuned-amazon-en-es
alibidaran
mt5
25
0
transformers
0
summarization
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['summarization', 'generated_from_trainer']
true
true
true
1,512
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0300 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.6964 | 1.0 | 1209 | 3.3036 | | 3.9031 | 2.0 | 2418 | 3.1324 | | 3.5802 | 3.0 | 3627 | 3.0846 | | 3.4212 | 4.0 | 4836 | 3.0613 | | 3.3216 | 5.0 | 6045 | 3.0606 | | 3.2427 | 6.0 | 7254 | 3.0392 | | 3.2081 | 7.0 | 8463 | 3.0344 | | 3.1806 | 8.0 | 9672 | 3.0300 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
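The card leaves usage unspecified; a minimal sketch with the summarization pipeline (the review text is an arbitrary illustration):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="alibidaran/mt5-small-finetuned-amazon-en-es")
review = "I bought this for my kitchen and it works exactly as described. Setup took five minutes."
print(summarizer(review, max_length=30)[0]["summary_text"])
```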
stebuc/deepRLcourse-ppo-LunarLanderv2
stebuc
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
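The card leaves the loading code as a TODO; a sketch of the usual `huggingface_sb3` pattern. The checkpoint filename is an assumption; check the repo's file list for the real name:

```python
import gymnasium as gym  # requires gymnasium[box2d]; older stacks use `import gym`
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed; SB3 checkpoints on the Hub are .zip archives.
checkpoint = load_from_hub(
    repo_id="stebuc/deepRLcourse-ppo-LunarLanderv2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```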
Haruzo/heroes-iii-towns-model
Haruzo
null
65
53
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
5,334
### Heroes-III-towns-model Dreambooth model trained by Haruzo with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(36).jpg) ![1](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(37).jpg) ![2](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(25).jpg) ![3](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(26).jpg) ![4](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(24).jpg) ![5](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(3).jpg) ![6](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(29).jpg) ![7](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(39).jpg) ![8](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(34).jpg) ![9](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(45).jpg) ![10](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(19).jpg) ![11](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(22).jpg) ![12](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(18).jpg) ![13](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(31).jpg) ![14](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(12).jpg) ![15](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(44).jpg) ![16](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(41).jpg) ![17](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(15).jpg) ![18](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(13).jpg) ![19](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(4).jpg) ![20](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(10).jpg) ![21](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(6).jpg) ![22](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(32).jpg) ![23](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(14).jpg) ![24](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(16).jpg) ![25](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(2).jpg) ![26](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(27).jpg) ![27](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(7).jpg) ![28](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(20).jpg) ![29](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(11).jpg) ![30](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(35).jpg) ![31](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(30).jpg) 
![32](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(42).jpg) ![33](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(33).jpg) ![34](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(46).jpg) ![35](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(23).jpg) ![36](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(43).jpg) ![37](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(5).jpg) ![38](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(38).jpg) ![39](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(28).jpg) ![40](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(9).jpg) ![41](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(47).jpg) ![42](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(8).jpg) ![43](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(40).jpg) ![44](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(17).jpg) ![45](https://huggingface.co/Haruzo/heroes-iii-towns-model/resolve/main/sample_images/a_(21).jpg)
alicenkbaytop/donut-base-sroie
alicenkbaytop
vision-encoder-decoder
15
0
transformers
0
null
true
false
false
mit
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
940
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cpu - Datasets 2.9.0 - Tokenizers 0.13.2
anas-awadalla/opt-125-laion-text
anas-awadalla
opt
9
15
transformers
0
text-generation
true
false
false
null
null
['laion/laion2B-en']
null
0
0
0
0
0
0
0
[]
false
true
true
4,893
# Model Card for Model ID An OPT 125m trained on alt-text from LAION 2B. This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
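The card's "How to Get Started" section is empty; a hedged sketch with the text-generation pipeline:

```python
from transformers import pipeline

# Prompt is an arbitrary illustration; the model was trained on LAION alt-text.
generator = pipeline("text-generation", model="anas-awadalla/opt-125-laion-text")
print(generator("A photo of", max_new_tokens=20)[0]["generated_text"])
```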
LuisaRomana/clasif-muchocine-roberta
LuisaRomana
xlm-roberta
10
4
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['classification', 'generated_from_trainer']
true
true
true
1,397
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clasif-muchocine-roberta This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5146 - Accuracy: 0.3394 ## Model description This model has been made by someone who does NOT understand coding. ## Intended uses & limitations It was made during training; it should not be used. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 388 | 1.5140 | 0.3394 | | 1.5524 | 2.0 | 776 | 1.5132 | 0.3394 | | 1.5336 | 3.0 | 1164 | 1.5146 | 0.3394 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Amiko/Reinforce-Cartpole-v1
Amiko
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
286
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
hchiro/PPO-LunarLander-v2
hchiro
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
hanzogak/Lsmith-model
hanzogak
null
70
0
diffusers
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
429
Pre-converted model for ddPn08/Lsmith ============= [ddPn08/Lsmith](https://github.com/ddPn08/Lsmith) ## How to use 1. Download the pre-converted model and place it in the models folder. 2. Download the Diffusers-type model from the org folder of this repository. 3. Edit the model_index.json of the pre-converted model: change the model paths it specifies to the absolute path of the Diffusers-type model.
Seyfelislem/arabic_whisper_small_version_1
Seyfelislem
whisper
14
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,352
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # arabic_whisper_small_version_1 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3325 - Wer: 46.5302 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1413 | 0.42 | 1000 | 0.3616 | 49.1672 | | 0.1585 | 0.83 | 2000 | 0.3325 | 46.5302 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
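No inference example is included; a minimal sketch with the ASR pipeline (the audio path is a placeholder for a 16 kHz mono WAV file):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Seyfelislem/arabic_whisper_small_version_1")
print(asr("sample_arabic.wav")["text"])  # placeholder path
```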
Aotsuyu/HogwartLora
Aotsuyu
null
29
0
null
0
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['anime']
false
true
true
6,713
# Hogwart uniforms LoRA [<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/0.png" width="800" height="512">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/0.png) A LoRA for Hogwart uniforms, since Hogwarts Legacy renewed people's interest in the franchise. # What to get I am including all epochs, but I've personally had the best results with the 2nd to 4th epochs, which I am renaming to *hogsks-weak*, *hogsks-mid* and *hogsks-hard*. Most models seem to have some idea of how the uniform looks, so they only need a small push - that's why I suggest starting with ***hogsks-mid***. Only go for a higher epoch if you're sure that's what you need. # Invoking I made the token **hogsks**. I also tried to tag each of the images in the dataset with the proper house, so you might have *some* results prompting for ravenclaw, gryffindor, slytherin and hufflepuff, but it's not super reliable.<br> For those using the native implementation of LoRA, remember to also activate it!<br> What I propose as a base prompting template:<br> `hogsks, hogwarts school uniform, black robe, gray vest, slytherin, green tie`<br> ***Color*** emblem and ***color*** scarf also seem to work reasonably well. Adjust the house and colors for the desired house, obviously. This image was made with a very basic prompt: [<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/1.png" width="512" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/1.png) <details> <summary>Prompt</summary> <pre> best quality, 1girl, Hogsks, hogwarts school uniform, black cape, gray vest, slytherin, green tie, Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit, fewer digits, (extra arms:1.2), bad hands, artist name Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3963964880, Size: 512x762, Model: anything-v4.5-pruned, Denoising strength: 0.3, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000003(c945fe615333), AddNet Weight A 1: 0.85, AddNet Weight B 1: 0.85, Hires upscale: 2, Hires steps: 15, Hires upscaler: 4x-AnimeSharp</pre> </details> <br><br> # Previews All the previews have prompts included, so read them! The model I used for Hololive [can be found here](https://huggingface.co/Aotsuyu/Qcha/blob/main/Qcha-hllv1.safetensors). It's a merge I did.
[<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/2.png" width="568" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/2.png) <details> <summary>Prompt</summary> <pre> (best quality, 1girl, reimu hakurei, brown hair, red eyes, hogsks, hogwarts school uniform, slytherin, black robe, green scarf, perplexed, (gray vest:1.2), gray skirt, red ribbon, outside, snow, black-green robe Negative prompt: 2girls, (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit, fewer digits, (extra arms:1.2), blue cloak, Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 478121638, Size: 568x768, Model: anything-v4.5-pruned, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000002(2e60f62c128c), AddNet Weight A 1: 0.95, AddNet Weight B 1: 0.95 </pre> </details> [<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/3.png" width="568" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/3.png) <details> <summary>Prompt</summary> <pre> best quality, 1girl, flandre scarlet, blonde hair, vampire, fangs, red eyes, hogsks, hogwarts school uniform, hufflepuff, black robe, yellow scarf, (:3:0.5), (gray vest:1.2), gray skirt, outside, snow, black-yellow robe, crystal wings, side ponytail Negative prompt: 2girls, (low quality, worst quality:1.4), (bad anatomy), by bad-artist, bad-hands-5, bad-image-v2-39000, extra digit, fewer digits, (extra arms:1.2), blue cloak, Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 11, Seed: 1055056090, Size: 568x768, Model: anything-v4.5-pruned, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000002(2e60f62c128c), AddNet Weight A 1: 0.95, AddNet Weight B 1: 0.95 </pre></details> [<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/4.png" width="568" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/4.png) <details> <summary>Prompt</summary> <pre> best quality, 1girl, gawr gura, (loli:0.5), ravenclaw, hogsks, hogwarts school uniform, black robe, blue scarf, shark teeth, (:3:0.5), (gray vest:1.2), Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by (bad-artist:1.0), bad-hands-5, (bad-image-v2-39000:1.0), extra digit, fewer digits, (extra arms:1.2), Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3622139475, Size: 568x768, Model: Qcha-hllv1, Denoising strength: 0.3, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: hogsksv2-000002(2e60f62c128c), AddNet Weight A 1: 0.95, AddNet Weight B 1: 0.95, Hires upscale: 2, Hires steps: 15, Hires upscaler: 4x-AnimeSharp </pre></details> [<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/5.png" width="568" height="768">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/5.png) <details> <summary>Prompt</summary> <pre> best quality, 1girl, black hair, glasses, gryffindor, hogsks, hogwarts school uniform, black robe, red scarf, (scared), (gray vest:1.2), looking at viewer, evening, night, dark Negative prompt: (low quality, worst quality:1.4), (bad anatomy), by (bad-artist:1.0), bad-hands-5, (bad-image-v2-39000:1.0), extra digit, fewer digits, (extra arms:1.2), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2895674484, Size: 568x768, Model: pastelmix-better-vae-fp32, Denoising strength: 0.74, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 
1: LoRA, AddNet Model 1: hogsksv2-000002(2e60f62c128c), AddNet Weight A 1: 0.9, AddNet Weight B 1: 0.9, Hires upscale: 1.8, Hires steps: 20, Hires upscaler: Latent (nearest-exact) </pre></details> <br><br> # Model comparison This is trained on base NAI so any models off of that should do fine. [<img src="https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/grid.png" width="840" height="964">](https://huggingface.co/Aotsuyu/HogwartLora/resolve/main/images/grid.png) <br> # Contact If you have any questions, you can DM me on [twitter.](https://twitter.com/aojiru_pixiv) My pixiv if you're up for lewds: [Pixiv](https://www.pixiv.net/en/users/12336647)
MichalJ/ppo-SnowballTarget
MichalJ
null
24
1
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
false
true
true
854
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: MichalJ/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
M331/dqn-SpaceInvadersNoFrameskip-v4
M331
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
2,206
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga M331 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga M331 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga M331 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
khatkeashish/a2c-AntBulletEnv-v0
khatkeashish
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
352
# **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
MyExperiments/ALBARANV2_BASE
MyExperiments
layoutlmv2
6
1
transformers
0
token-classification
true
false
false
cc-by-nc-sa-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,302
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ALBARANV2 This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed Test-set metrics: - test_overall_precision: 0.9253731343283582 - test_overall_recall: 0.9253731343283582 - test_overall_f1: 0.9253731343283582 - test_overall_accuracy: 0.9877300613496932 - test_runtime: 0.5983 - test_samples_per_second: 11.699 - test_steps_per_second: 1.671 ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.8.0+cu101 - Datasets 2.9.0 - Tokenizers 0.13.2
96harsh56/roberta-finetuned-subjqa-movies_1110pm
96harsh56
roberta
13
11
transformers
0
question-answering
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
995
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-subjqa-movies_1110pm This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
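The card gives no usage example; a hedged sketch with the question-answering pipeline (question and context are arbitrary illustrations):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="96harsh56/roberta-finetuned-subjqa-movies_1110pm")
result = qa(
    question="How was the acting?",
    context="The movie dragged in places, but the acting was superb throughout.",
)
print(result["answer"], result["score"])
```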
Alimustoofaa/chatgpt_detector_exam_answer
Alimustoofaa
null
8
5
keras
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
528
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | learning_rate | 0.0010000000474974513 | | decay | 0.0 | | beta_1 | 0.8999999761581421 | | beta_2 | 0.9990000128746033 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 |
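The table above is a serialized Keras Adam config (the long decimals are float32 round-off of 1e-3, 0.9 and 0.999). A sketch of reconstructing the same optimizer:

```python
import tensorflow as tf

# Values from the hyperparameter table; decay=0.0 is the legacy default and is omitted here.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```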
DataIntelligenceTeam/vgm_model_2.0
DataIntelligenceTeam
layoutlmv3
16
0
transformers
0
token-classification
true
false
false
cc-by-nc-sa-4.0
null
['sroie']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,102
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vgm_model_0.2 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset. It achieves the following results on the evaluation set: - Loss: 0.0477 - Precision: 0.8 - Recall: 0.7304 - F1: 0.7636 - Accuracy: 0.9935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.33 | 100 | 0.0826 | 0.1538 | 0.0348 | 0.0567 | 0.9783 | | No log | 2.67 | 200 | 0.0633 | 0.4907 | 0.4609 | 0.4753 | 0.9859 | | No log | 4.0 | 300 | 0.0433 | 0.7358 | 0.6783 | 0.7059 | 0.9927 | | No log | 5.33 | 400 | 0.0412 | 0.76 | 0.6609 | 0.7070 | 0.9916 | | 0.0937 | 6.67 | 500 | 0.0390 | 0.6885 | 0.7304 | 0.7089 | 0.9919 | | 0.0937 | 8.0 | 600 | 0.0400 | 0.7177 | 0.7739 | 0.7448 | 0.9914 | | 0.0937 | 9.33 | 700 | 0.0457 | 0.7619 | 0.6957 | 0.7273 | 0.9924 | | 0.0937 | 10.67 | 800 | 0.0370 | 0.7154 | 0.8087 | 0.7592 | 0.9922 | | 0.0937 | 12.0 | 900 | 0.0369 | 0.7759 | 0.7826 | 0.7792 | 0.9945 | | 0.0105 | 13.33 | 1000 | 0.0373 | 0.7672 | 0.7739 | 0.7706 | 0.9940 | | 0.0105 | 14.67 | 1100 | 0.0419 | 0.8190 | 0.7478 | 0.7818 | 0.9940 | | 0.0105 | 16.0 | 1200 | 0.0396 | 0.8018 | 0.7739 | 0.7876 | 0.9945 | | 0.0105 | 17.33 | 1300 | 0.0428 | 0.7568 | 0.7304 | 0.7434 | 0.9940 | | 0.0105 | 18.67 | 1400 | 0.0450 | 0.7522 | 0.7391 | 0.7456 | 0.9940 | | 0.003 | 20.0 | 1500 | 0.0397 | 0.7541 | 0.8 | 0.7764 | 0.9937 | | 0.003 | 21.33 | 1600 | 0.0415 | 0.8349 | 0.7913 | 0.8125 | 0.9948 | | 0.003 | 22.67 | 1700 | 0.0427 | 0.7739 | 0.7739 | 0.7739 | 0.9945 | | 0.003 | 24.0 | 1800 | 0.0455 | 0.7727 | 0.7391 | 0.7556 | 0.9935 | | 0.003 | 25.33 | 1900 | 0.0464 | 0.7830 | 0.7217 | 0.7511 | 0.9932 | | 0.0016 | 26.67 | 2000 | 0.0477 | 0.8 | 0.7304 | 0.7636 | 0.9935 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.2.2 - Tokenizers 0.13.2
khatkeashish/a2c-PandaReachDense-v2
khatkeashish
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
MichalJ/ppo-PyramidsRND
MichalJ
null
16
5
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
false
true
true
830
# **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: MichalJ/ppo-Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
xiazeng/ppo-SnowballTarget
xiazeng
null
20
1
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
false
true
true
854
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: xiazeng/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
jsacex/vit-base-patch16-224-in21k-finetuned-lora-food101
jsacex
vit
7
1
transformers
0
image-classification
true
false
false
apache-2.0
null
['food101']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,612
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-lora-food101 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset. It achieves the following results on the evaluation set: - Loss: 0.1408 - Accuracy: 0.964 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 9 | 0.5739 | 0.874 | | 2.1968 | 2.0 | 18 | 0.2064 | 0.944 | | 0.3323 | 3.0 | 27 | 0.1521 | 0.96 | | 0.2087 | 4.0 | 36 | 0.1408 | 0.964 | | 0.1678 | 5.0 | 45 | 0.1352 | 0.962 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.12.1
mexa-team/stt_fr_conformer_transducer_large
mexa-team
null
3
4
nemo
0
automatic-speech-recognition
true
false
false
cc-by-4.0
['fr']
['multilingual_librispeech', 'mozilla-foundation/common_voice_7_0', 'VoxPopuli']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
true
true
true
4,900
# NVIDIA Conformer-Transducer Large (fr) (FORK) <style> img { display: inline; } </style> | [![Model architecture](https://img.shields.io/badge/Model_Arch-Conformer--Transducer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-120M-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-fr-lightgrey#model-badge)](#datasets) This model was trained on a composite dataset comprising over 1500 hours of French speech. It is a large-size version of Conformer-Transducer (around 120M parameters). See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details. ## NVIDIA NeMo: Training To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version. ``` pip install nemo_toolkit['all'] ``` ## How to Use this Model The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset. ### Automatically instantiate the model ```python import nemo.collections.asr as nemo_asr asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_fr_conformer_transducer_large") ``` ### Transcribing using Python First, let's get a sample: ``` wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav ``` Then simply do: ``` asr_model.transcribe(['2086-149220-0033.wav']) ``` ### Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="nvidia/stt_fr_conformer_transducer_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" ``` ### Input This model accepts 16 kHz mono-channel audio (WAV files) as input. ### Output This model provides transcribed speech as a string for a given audio sample. ## Model Architecture The Conformer-Transducer model is an autoregressive variant of the Conformer model [1] for Automatic Speech Recognition, which uses Transducer loss/decoding instead of CTC loss. You can find more details on this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html). ## Training The NeMo toolkit [3] was used to train the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml). The sentence-piece tokenizers [2] for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). ## Datasets All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising over a thousand hours of French speech: - MozillaCommonVoice 7.0 - 356 hours - Multilingual LibriSpeech - 1036 hours - VoxPopuli - 182 hours Both models use the same dataset, except for a preprocessing step that strips hyphens from the data for the secondary model's training. ## Performance The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER).
Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio in general. The latest model obtains the following greedy WER scores on the following evaluation datasets: - 6.85 % on MCV7.0 dev - 7.95 % on MCV7.0 test - 5.05 % on MLS dev - 4.10 % on MLS test Note that these evaluation datasets have been filtered and preprocessed to contain only French alphabet characters and have had punctuation other than hyphens and apostrophes removed. ## Limitations Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech. Further, since portions of the training set contain text from both before and after the 1990 orthographic reform, the regularity of punctuation may vary between the two styles. For downstream tasks requiring more consistency, fine-tuning or downstream processing may be required. If exact orthography is not necessary, then using the secondary model is advised. ## References - [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100) - [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) - [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
Mizuiro-sakura/luke-japanese-base-finetuned-jnli
Mizuiro-sakura
luke
13
0
transformers
0
text-classification
true
false
false
mit
['ja']
null
null
0
0
0
0
0
0
0
['luke', 'pytorch', 'transformers', 'jnli', 'natural-language-inference', 'NaturalLanguageInference']
false
true
true
2,585
# This model is luke-japanese-base fine-tuned for JNLI (sentence-pair relation classification) This model was created by fine-tuning luke-japanese-base on the JNLI task of yahoo japan/JGLUE ( https://github.com/yahoojapan/JGLUE ). You can use it to classify the relation between two sentences (contradiction, neutral, entailment). # Model accuracy The model's accuracy is 0.8976992604765818. # How to use Install transformers and sentencepiece, then run the following code to solve the JNLI (sentence-pair relation classification) task. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch tokenizer=AutoTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-jnli') model=AutoModelForSequenceClassification.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-jnli') token=tokenizer.encode('時計がついている場所にパブリックマーケットセンターとかかれた看板が設置されています。', '屋根の上に看板があり時計もついています。') result=model(torch.tensor(token).unsqueeze(0)) max_index=torch.argmax(result.logits) if max_index==0: print('contradiction') elif max_index==1: print('neutral') elif max_index==2: print('entailment') ``` # What is LUKE? [1] LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model of words and entities. LUKE treats words and entities as independent tokens and outputs representations that take their context into account. # Acknowledgments I would like to thank Mr. Yamada (@ikuyamada), the developer of LUKE, and Studio Ousia (@StudioOusia). # Citation [1] @inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
xiazeng/PyramidsRND
xiazeng
null
16
0
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
false
true
true
829
# **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: xiazeng/PyramidsRND 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
quaizarv/Reinforce-CartPole
quaizarv
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
286
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Xian-Xiang/my_awesome_wnut_model
Xian-Xiang
distilbert
12
2
transformers
0
token-classification
true
false
false
apache-2.0
null
['wnut_17']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,445
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2777 - Precision: 0.4880 - Recall: 0.2641 - F1: 0.3428 - Accuracy: 0.9395 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2870 | 0.3758 | 0.2132 | 0.2720 | 0.9360 | | No log | 2.0 | 426 | 0.2777 | 0.4880 | 0.2641 | 0.3428 | 0.9395 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
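The card omits a usage example; a hedged sketch with the token-classification pipeline (the sentence is an arbitrary illustration):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Xian-Xiang/my_awesome_wnut_model",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Golden State Warriors play in San Francisco tonight"))
```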
michelecafagna26/gpt2-medium-finetuned-sst2-sentiment
michelecafagna26
gpt2
8
0
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['sst2']
null
0
0
0
0
0
0
0
['text-classification']
false
true
true
1,636
# GPT-2-medium fine-tuned for Sentiment Analysis 👍👎 [OpenAI's GPT-2](https://openai.com/blog/tags/gpt-2/) medium fine-tuned on the [SST-2](https://huggingface.co/datasets/sst2) dataset for the **Sentiment Analysis** downstream task. ## Details of GPT-2 The **GPT-2** model was presented in [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) by *Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever* ## Model fine-tuning 🏋️‍ The model was fine-tuned for 10 epochs with standard hyperparameters. ## Val set metrics 🧾 | | precision | recall | f1-score | support | |----------|----------|---------|----------|-------| |negative | 0.92 | 0.92| 0.92| 428 | |positive | 0.92 | 0.93| 0.92| 444 | |accuracy| | | 0.92| 872 | |macro avg| 0.92| 0.92| 0.92| 872 | |weighted avg| 0.92| 0.92| 0.92| 872 | ## Model in Action 🚀 ```python from transformers import GPT2Tokenizer, GPT2ForSequenceClassification tokenizer = GPT2Tokenizer.from_pretrained("michelecafagna26/gpt2-medium-finetuned-sst2-sentiment") model = GPT2ForSequenceClassification.from_pretrained("michelecafagna26/gpt2-medium-finetuned-sst2-sentiment") inputs = tokenizer("I love it", return_tensors="pt") model(**inputs).logits.argmax(axis=1) # 1: Positive, 0: Negative # Output: tensor([1]) ``` > This model card is based on "mrm8488/t5-base-finetuned-imdb-sentiment" by Manuel Romero/@mrm8488
oscarb92/a2c-AntBulletEnv-v0
oscarb92
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
352
# **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
MyExperiments/AlbaranV3_BASE
MyExperiments
layoutlmv3
12
0
transformers
0
token-classification
true
false
false
cc-by-nc-sa-4.0
null
['sroie']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,181
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# AlbaranV3

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0788
- Precision: 0.9191
- Recall: 0.9328
- F1: 0.9259
- Accuracy: 0.9893

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 8.33  | 100  | 0.2060          | 0.9254    | 0.9254 | 0.9254 | 0.9877   |
| No log        | 16.67 | 200  | 0.0691          | 0.9403    | 0.9403 | 0.9403 | 0.9908   |
| No log        | 25.0  | 300  | 0.0707          | 0.9254    | 0.9254 | 0.9254 | 0.9893   |
| No log        | 33.33 | 400  | 0.0737          | 0.9191    | 0.9328 | 0.9259 | 0.9893   |
| 0.196         | 41.67 | 500  | 0.0775          | 0.9254    | 0.9254 | 0.9254 | 0.9877   |
| 0.196         | 50.0  | 600  | 0.0774          | 0.9403    | 0.9403 | 0.9403 | 0.9893   |
| 0.196         | 58.33 | 700  | 0.0877          | 0.9254    | 0.9254 | 0.9254 | 0.9877   |
| 0.196         | 66.67 | 800  | 0.0836          | 0.9254    | 0.9254 | 0.9254 | 0.9877   |
| 0.196         | 75.0  | 900  | 0.0793          | 0.9191    | 0.9328 | 0.9259 | 0.9893   |
| 0.0069        | 83.33 | 1000 | 0.0788          | 0.9191    | 0.9328 | 0.9259 | 0.9893   |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.2.2
- Tokenizers 0.13.2
figfig/restaurant_HSR_test
figfig
whisper
12
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,460
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# restaurant_HSR_test

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3461
- Wer: 50.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 40
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 10.0  | 10   | 7.3374          | 133.3333 |
| No log        | 20.0  | 20   | 2.1528          | 33.3333  |
| 6.4843        | 30.0  | 30   | 1.4666          | 16.6667  |
| 6.4843        | 40.0  | 40   | 1.3461          | 50.0     |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.9.0
- Tokenizers 0.13.2
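
### Inference sketch

A minimal transcription sketch, assuming the checkpoint works with the standard 🤗 `pipeline` API; `sample.wav` is a placeholder audio file:

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="figfig/restaurant_HSR_test")
print(asr("sample.wav")["text"])  # placeholder path; use your own audio
```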
mitra-mir/setfit-model-Feb11-Misinformation-on-Law
mitra-mir
mpnet
13
8
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# mitra-mir/setfit-model-Feb11-Misinformation-on-Law

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mitra-mir/setfit-model-Feb11-Misinformation-on-Law')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit-model-Feb11-Misinformation-on-Law)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 201 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 201,
    "warmup_steps": 21,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
jmcneves/dqn-SpaceInvadersNoFrameskip-v4
jmcneves
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
2,217
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jmcneves -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jmcneves -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jmcneves
```

## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
xiazeng/a2c-AntBulletEnv-v0
xiazeng
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
352
# **A2C** Agent playing **AntBulletEnv-v0**

This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="xiazeng/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
samkenxstream/AlgoSilicon
samkenxstream
null
2
0
adapter-transformers
0
feature-extraction
false
false
false
apache-2.0
['an']
['glue', 'fka/awesome-chatgpt-prompts']
null
0
0
0
0
0
0
0
['code', 'biology', 'finance']
false
true
true
4,906
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

# Model Details

## Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

## Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

## Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

## Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

# Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

## Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

## Training Procedure [optional]

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

### Preprocessing

[More Information Needed]

### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

# Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

## Testing Data, Factors & Metrics

### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

## Results

[More Information Needed]

### Summary

# Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

# Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

# Technical Specifications [optional]

## Model Architecture and Objective

[More Information Needed]

## Compute Infrastructure

[More Information Needed]

### Hardware

[More Information Needed]

### Software

[More Information Needed]

# Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

# Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

# More Information [optional]

[More Information Needed]

# Model Card Authors [optional]

[More Information Needed]

# Model Card Contact

[More Information Needed]
yl131/ppo-LunarLander-v2
yl131
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="yl131/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
IlluminatiAI/Illuminati_Diffusion_v1.0
IlluminatiAI
null
24
108
diffusers
26
text-to-image
false
false
false
openrail++
['en']
null
null
0
0
0
0
0
0
0
['art']
false
true
true
3,519
![image](https://raw.githubusercontent.com/caacoe/ID_resource/ae7c7cf72526bc89a4bdc027877badf5d783ba05/CaitlinCross.png)

<sub>studio photo closeup portrait victorian (woman1-420:1.3) with blue eyes and red hair wearing intricate silver metal crystal medieval armour (sitting inside a castle:1.3), black victorian attire, rembrandt light, zbrush, (black background:1.7), glossy, rtx, reflections, soft light, soft shadows, dramatic lighting, atmospheric, global illumination, unreal, octane, (two tone lighting:1.5), (cyan light:1.4), alphonse mucha, bokeh
Negative prompt: nfixernext, nfixer, nfixernext, nfixer, nfixernext, nfixer, hands, arms, illustration, fake, cgi, drawing, miniature, blocky, angular, glasses, (large eyes:1.3), freckles, face paint, mask, glasses, tattoos
Steps: 120, Sampler: Euler a, CFG scale: 3, Seed: 201306749, Size: 1024x1024, Model hash: 639d0db70f, Denoising strength: 0.3, ENSD: 3, Mask blur: 4, SD upscale overlap: 64, SD upscale upscaler: LDSR</sub>

# Illuminati Diffusion v1.0

Illuminati Diffusion is a latent text-to-image diffusion model that has been conditioned on high-aesthetic synthetic images through fine-tuning. It was trained on 82,000 images locally on my PC with a single 3090 Ti, taking over 100 hours.

- [Illuminati Diffusion v1.0 Safetensors](https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/blob/main/illuminati_diffusion_v1.0.safetensors): The model file.
- [Illuminati Diffusion v1.0 Inference Config](https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/raw/main/illuminati_diffusion_v1.0.yaml): A file included to allow for inference with Automatic's WebUI and with the original Stable Diffusion codebase. (right click > save target as/link as)
- [Illuminati Diffusion v1.0 supplementary TI embeddings](https://huggingface.co/IlluminatiAI/Illuminati_Diffusion_v1.0/tree/main/embeds): A series of both positive and negative embeds. nfixer is recommended for all gens.

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

Note: the hosted inference API does not work because this model ships safetensors weights in the diffusers format, which does not appear to be compatible with Hugging Face's API; the model does work correctly in any software that supports this format.

If you enjoy this model, perhaps you might consider supporting me on [Patreon](https://patreon.com/user?u=55366974).

[![Patreon](https://github.com/caacoe/ID_resource/blob/main/patreon-logo.png?raw=true)](https://patreon.com/user?u=55366974)

In order to reach us, you can join our [Discord server](https://discord.gg/HqdffGgeBa).

[![Discord Server](https://github.com/caacoe/ID_resource/blob/main/invite.png?raw=true)](https://discord.gg/HqdffGgeBa)

Follow me on my [Twitter page](https://twitter.com/cac0e).
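
## Loading sketch (diffusers)

A minimal `diffusers` loading sketch, assuming the repository follows the standard diffusers layout the card describes; the prompt is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the diffusers-format safetensors weights from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "IlluminatiAI/Illuminati_Diffusion_v1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe("studio photo closeup portrait, dramatic lighting, bokeh").images[0]
image.save("illuminati_sample.png")
```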
MyExperiments/AlbaranV3_Large
MyExperiments
layoutlmv3
12
0
transformers
0
token-classification
true
false
false
cc-by-nc-sa-4.0
null
['sroie']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,189
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# AlbaranV3_Large

This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1080
- Precision: 0.9104
- Recall: 0.9104
- F1: 0.9104
- Accuracy: 0.9862

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 8.33  | 100  | 0.0688          | 0.9254    | 0.9254 | 0.9254 | 0.9877   |
| No log        | 16.67 | 200  | 0.0839          | 0.9254    | 0.9254 | 0.9254 | 0.9877   |
| No log        | 25.0  | 300  | 0.0900          | 0.9403    | 0.9403 | 0.9403 | 0.9893   |
| No log        | 33.33 | 400  | 0.0949          | 0.9254    | 0.9254 | 0.9254 | 0.9877   |
| 0.0733        | 41.67 | 500  | 0.1077          | 0.9104    | 0.9104 | 0.9104 | 0.9862   |
| 0.0733        | 50.0  | 600  | 0.1028          | 0.9104    | 0.9104 | 0.9104 | 0.9862   |
| 0.0733        | 58.33 | 700  | 0.1022          | 0.9104    | 0.9104 | 0.9104 | 0.9862   |
| 0.0733        | 66.67 | 800  | 0.1103          | 0.9104    | 0.9104 | 0.9104 | 0.9862   |
| 0.0733        | 75.0  | 900  | 0.1084          | 0.9104    | 0.9104 | 0.9104 | 0.9862   |
| 0.0006        | 83.33 | 1000 | 0.1080          | 0.9104    | 0.9104 | 0.9104 | 0.9862   |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.2.2
- Tokenizers 0.13.2
Ryosei0304/LunarLander-v2
Ryosei0304
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="Ryosei0304/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Amiko/Reinforce-PixelCopter
Amiko
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
300
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
GowriKumar/cat_or_dog
GowriKumar
null
4
0
fastai
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['fastai']
false
true
true
736
# Amazing!

🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!

# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!

Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.

---

# Model card

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed
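
## How to use

A minimal loading sketch, assuming the repo holds a standard fastai export that `huggingface_hub` can fetch; the image path is a placeholder:

```python
from huggingface_hub import from_pretrained_fastai

# Download and rebuild the fastai Learner from the Hub.
learn = from_pretrained_fastai("GowriKumar/cat_or_dog")

pred, _, probs = learn.predict("some_image.jpg")  # placeholder path; use your own image
print(pred, probs)
```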
xiazeng/a2c-PandaReachDense-v2
xiazeng
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **A2C** Agent playing **PandaReachDense-v2**

This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="xiazeng/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Deysi/clasificador-resenas-amazon
Deysi
electra
10
3
transformers
0
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['classification', 'generated_from_trainer']
true
true
true
1,371
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# clasificador-resenas-amazon

This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0450
- Accuracy: 0.498

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7663        | 1.0   | 2500 | 1.2081          | 0.528    |
| 0.5641        | 2.0   | 5000 | 1.4974          | 0.516    |
| 0.3543        | 3.0   | 7500 | 2.0450          | 0.498    |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
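
### Inference sketch

A minimal sketch, assuming the checkpoint loads with the standard text-classification `pipeline`; the Spanish review text is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
clf = pipeline("text-classification", model="Deysi/clasificador-resenas-amazon")
print(clf("El producto llegó a tiempo y funciona muy bien."))
```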
sohm/a2c-PandaReachDense-v2
sohm
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **A2C** Agent playing **PandaReachDense-v2**

This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="sohm/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Ryosei0304/ppo-Huggy
Ryosei0304
null
32
6
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
false
true
true
821
# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Ryosei0304/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Duskfallcrew/isometric-dreams-sd-1-5
Duskfallcrew
null
22
16
diffusers
3
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
2
0
2
0
0
0
0
['text-to-image', 'isometric', 'art', 'stable diffusion', 'stable diffusion 1.5', 'duskfallcrew']
false
true
true
1,215
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Duskfallcrew/isometric-dreams-sd-1-5)

### Isometric Dreams SD 1.5 trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model

You can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

# All samples and info are here:
https://civitai.com/user/duskfallcrew

# If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew

# If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk

duskametrick15 (use that in your prompt)
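
A minimal `diffusers` sketch, assuming the repo follows the standard Dreambooth/diffusers layout; the scene description is illustrative, and `duskametrick15` is the concept token from the card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-trained SD 1.5 checkpoint from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "Duskfallcrew/isometric-dreams-sd-1-5", torch_dtype=torch.float16
).to("cuda")

# Include the concept token in the prompt, as the card instructs.
image = pipe("an isometric bedroom, duskametrick15").images[0]
image.save("isometric_sample.png")
```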
ArneL2206/poca-SoccerTwos
ArneL2206
null
20
22
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
843
# **poca** Agent playing **SoccerTwos**

This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: ArneL2206/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
oscarb92/a2c-PandaReachDense-v2
oscarb92
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **A2C** Agent playing **PandaReachDense-v2**

This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="oscarb92/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
LOGQS/ppo-LunarLander-v2
LOGQS
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="LOGQS/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
chandc/ppo-LunarLander-v2
chandc
null
12
1
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="chandc/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
achiachi/accentcombinedlenous8ktq9-accent-classification
achiachi
null
4
0
sklearn
0
tabular-classification
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['tabular-classification', 'baseline-trainer']
false
true
true
8,760
## Baseline Model trained on accentcombinedlenous8ktq9 to apply classification on accent

**Metrics of the best model:**

```
accuracy           0.947980
recall_macro       0.749094
precision_macro    0.622545
f1_macro           0.656714
Name: LogisticRegression(C=1, class_weight='balanced', max_iter=1000), dtype: float64
```

**See the model pipeline below** (text representation of the fitted pipeline):

```
Pipeline(steps=[('easypreprocessor',
                 EasyPreprocessor(types=
                          continuous  dirty_float  low_card_int  ...   date  free_string  useless
                 word           False        False         False  ...  False         True    False
                 kana           False        False         False  ...  False         True    False
                 kind           False        False         False  ...  False        False    False
                 morae          False        False         False  ...  False        False    False
                 pos            False        False         False  ...  False        False    False
                 etym           False        False         False  ...  False        False    False
                 jilen          False        False         False  ...  False        False    False
                 kanalen        False        False         False  ...  False        False    False
                 [8 rows x 7 columns])),
                ('logisticregression',
                 LogisticRegression(C=1, class_weight='balanced', max_iter=1000))])
```

**Disclaimer:** This model is trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).

**Logs of training** including the models tried in the process can be found in logs.txt
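
A sketch reconstructing the pipeline shown above in code; it assumes `EasyPreprocessor` is importable from `dabl.preprocessing` (the card was produced by dabl's baseline trainer), and `df` is a hypothetical DataFrame with the eight feature columns plus the `accent` target:

```python
from dabl.preprocessing import EasyPreprocessor  # assumption: dabl exposes EasyPreprocessor here
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Mirrors the best model reported above.
pipe = Pipeline(steps=[
    ("easypreprocessor", EasyPreprocessor()),
    ("logisticregression", LogisticRegression(C=1, class_weight="balanced", max_iter=1000)),
])

# pipe.fit(df.drop(columns="accent"), df["accent"])  # df: hypothetical training table
```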
Vadermusic/playingaround
Vadermusic
null
11
0
null
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,972
# 🍌 Stable Diffusion WebUI for banana (Stable Diffusion 1.5)

Deploy an API for AUTOMATIC1111's [Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to generate images with **Stable Diffusion 1.5**.

Supports features not available in other Stable Diffusion templates, such as:

* [Prompt emphasis](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#attentionemphasis)
* [Prompt editing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing)
* [Unlimited prompt length](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#infinite-prompt-length)

This deployment provides an API only and does not include the WebUI's user interface. Please report any issues you encounter.

## Instant Deploy

[See how to deploy in seconds](https://app.banana.dev/templates/patienceai/stable-diffusion-1.5-automatic1111).

## Model Inputs

### txt2img example

```
{
  "endpoint": "txt2img",
  "params": {
    "prompt": "an astronaut riding a (horse:motorcycle:0.5) on the moon",
    "negative_prompt": "cartoonish, low quality",
    "steps": 25,
    "sampler_name": "Euler a",
    "cfg_scale": 7.5,
    "seed": 42,
    "batch_size": 1,
    "n_iter": 1,
    "width": 512,
    "height": 512,
    "tiling": false
  }
}
```

(Only `prompt` is required.)

Output:

```
{
  "images": [
    "<base64 image>"
  ]
}
```

### img2img example

```
{
  "endpoint": "img2img",
  "params": {
    "prompt": "an astronaut riding a horse on the moon in anime style",
    "negative_prompt": "cartoonish, low quality",
    "steps": 25,
    "sampler_name": "Euler a",
    "cfg_scale": 7.5,
    "denoising_strength": 0.7,
    "seed": 42,
    "batch_size": 1,
    "n_iter": 1,
    "width": 512,
    "height": 512,
    "tiling": false,
    "init_images": [
      "<base64 image>"
    ]
  }
}
```

(Only `prompt` and `init_images` are required.)

Output:

```
{
  "images": [
    "<base64 image>"
  ]
}
```
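
## Client-side sketch

A short client-side sketch showing how the response above can be turned into an image file; `call_endpoint` is a hypothetical stand-in for whatever client you use to reach your Banana deployment:

```python
import base64

payload = {
    "endpoint": "txt2img",
    "params": {"prompt": "an astronaut riding a horse on the moon", "steps": 25},
}

# Hypothetical helper: POST the payload to your deployment and return the
# parsed JSON response, which has the {"images": [...]} shape shown above.
result = call_endpoint(payload)

# The API returns base64-encoded images; decode the first one to a PNG file.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```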
Tirendaz/finetuning-emotion-model
Tirendaz
distilbert
16
0
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,327
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# finetuning-emotion-model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2238
- Accuracy: 0.9205
- F1: 0.9204

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 250  | 0.3235          | 0.9035   | 0.9003 |
| 0.5384        | 2.0   | 500  | 0.2238          | 0.9205   | 0.9204 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
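
### Inference sketch

A minimal sketch, assuming the checkpoint loads with the standard text-classification `pipeline`; the input sentence is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub.
classifier = pipeline("text-classification", model="Tirendaz/finetuning-emotion-model")
print(classifier("I can't wait to see you again!"))
```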
mitra-mir/setfit-model-Feb11-Misinformation-on-Govt
mitra-mir
mpnet
13
4
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# mitra-mir/setfit-model-Feb11-Misinformation-on-Govt

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mitra-mir/setfit-model-Feb11-Misinformation-on-Govt')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit-model-Feb11-Misinformation-on-Govt)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 201 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 201,
    "warmup_steps": 21,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
Deysi/clasificador-resenas-amazon2
Deysi
roberta
11
7
transformers
0
text-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['classification', 'generated_from_trainer']
true
true
true
1,422
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# clasificador-resenas-amazon2

This model is a fine-tuned version of [mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243](https://huggingface.co/mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1794
- Accuracy: 0.562

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0154        | 1.0   | 2500 | 1.0807          | 0.566    |
| 0.8723        | 2.0   | 5000 | 1.0567          | 0.568    |
| 0.6942        | 3.0   | 7500 | 1.1794          | 0.562    |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
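
### Inference sketch

A minimal sketch, assuming the checkpoint loads with the standard text-classification `pipeline`; the Spanish review text is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned shoe-review classifier from the Hub.
clf = pipeline("text-classification", model="Deysi/clasificador-resenas-amazon2")
print(clf("Los zapatos son cómodos pero la talla es pequeña."))
```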
cedwin/tsdae-model
cedwin
bert
12
44
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
3,220
# cedwin/tsdae-model

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('cedwin/tsdae-model')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('cedwin/tsdae-model')
model = AutoModel.from_pretrained('cedwin/tsdae-model')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=cedwin/tsdae-model)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 32295 with parameters:
```
{'batch_size': 6, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 4,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 3e-05
    },
    "scheduler": "constantlr",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
Lorius2/q-FrozenLake-v1-4x4-noSlippery
Lorius2
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
396
# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in Unit 2 of the Deep RL Course notebook.
model = load_from_hub(repo_id="Lorius2/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
Lorius2/rl-unit2-taxiv3
Lorius2
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
369
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in Unit 2 of the Deep RL Course notebook.
model = load_from_hub(repo_id="Lorius2/rl-unit2-taxiv3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
mitra-mir/setfit-model-Feb11-Misinformation-on-Media-Traditional-Social
mitra-mir
mpnet
13
4
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# mitra-mir/setfit-model-Feb11-Misinformation-on-Media-Traditional-Social

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mitra-mir/setfit-model-Feb11-Misinformation-on-Media-Traditional-Social')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit-model-Feb11-Misinformation-on-Media-Traditional-Social)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 201 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 201,
    "warmup_steps": 21,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
deprem-ml/deprem-loodos-bert-base-uncased
deprem-ml
bert
49
0
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
5
2
3
0
0
0
0
[]
false
true
true
2,237
### Deprem NER Training Results

```
              precision    recall  f1-score   support

           0       0.85      0.91      0.88       734
           1       0.77      0.84      0.80       207
           2       0.71      0.88      0.79       130
           3       0.68      0.76      0.72        94
           4       0.80      0.85      0.82       362
           5       0.63      0.59      0.61       112
           6       0.73      0.82      0.77       108
           7       0.55      0.77      0.64        78
           8       0.65      0.71      0.68        31
           9       0.70      0.85      0.76       117

   micro avg       0.77      0.85      0.81      1973
   macro avg       0.71      0.80      0.75      1973
weighted avg       0.77      0.85      0.81      1973
 samples avg       0.82      0.87      0.83      1973
```

### Preprocessing Funcs

```
import re
from nltk.corpus import stopwords

tr_stopwords = stopwords.words('turkish')
tr_stopwords.append("hic")
tr_stopwords.append("dm")
tr_stopwords.append("vs")
tr_stopwords.append("ya")

def remove_punct(tok):
    tok = re.sub(r'[^\w\s]', '', tok)
    return tok

def normalize(tok):
    if tok.isdigit():
        tok = "digit"
    return tok

def clean(tok):
    tok = remove_punct(tok)
    tok = normalize(tok)
    return tok

def exceptions(tok):
    if not tok.isdigit() and len(tok) == 1:
        return False
    if not tok:
        return False
    if tok in tr_stopwords:
        return False
    if tok.startswith('#') or tok.startswith("@"):
        return False
    return True

sm_tok = lambda text: [clean(tok) for tok in text.split(" ") if exceptions(tok)]
```

### Other HyperParams

```
training_args = TrainingArguments(
    output_dir="./output",
    evaluation_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    weight_decay=0.01,
    report_to=None,
    num_train_epochs=4
)
```

```
class_weights[0] = 1.0
class_weights[1] = 1.5167249178108022
class_weights[2] = 1.7547338578655642
class_weights[3] = 1.9610520059358458
class_weights[4] = 1.269341370129623
class_weights[5] = 1.8684086209021484
class_weights[6] = 1.8019018017117145
class_weights[7] = 2.110648663094536
class_weights[8] = 3.081208739200435
class_weights[9] = 1.7994815143101963
```

Threshold: 0.25
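
### Inference sketch

A minimal inference sketch applying the 0.25 threshold reported above; it assumes the checkpoint was trained as a multi-label classifier (independent sigmoid per class), which matches the per-class report, and the input text is illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "deprem-ml/deprem-loodos-bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("ornek metin", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]              # multi-label: sigmoid per class
active = (probs > 0.25).nonzero().flatten()   # 0.25 is the threshold from the card
print(active.tolist(), probs[active].tolist())
```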
calvincbzhang/ppo-LunarLander-v2
calvincbzhang
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming, so check the repo's file list:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust to the actual checkpoint in this repo.
checkpoint = load_from_hub(repo_id="calvincbzhang/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
mitra-mir/setfit-model-Feb11-Misinformation-on-Organizations-GoFundMe-WEF
mitra-mir
mpnet
13
4
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# mitra-mir/setfit-model-Feb11-Misinformation-on-Organizations-GoFundMe-WEF

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mitra-mir/setfit-model-Feb11-Misinformation-on-Organizations-GoFundMe-WEF')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit-model-Feb11-Misinformation-on-Organizations-GoFundMe-WEF)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 201 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 201,
    "warmup_steps": 21,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
Deysi/mt5-small-sumarizacion-es
Deysi
mt5
9
3
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,641
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Deysi/mt5-small-sumarizacion-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0076 - Validation Loss: 1.8152 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 76288, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.0639 | 2.3192 | 0 | | 2.6021 | 2.0832 | 1 | | 2.3235 | 1.9546 | 2 | | 2.1939 | 1.8930 | 3 | | 2.1122 | 1.8559 | 4 | | 2.0598 | 1.8318 | 5 | | 2.0272 | 1.8236 | 6 | | 2.0076 | 1.8152 | 7 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
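The optimizer line above is dense; as a sketch, the same configuration can be rebuilt like this (values copied from this card; an illustration, not the author's original training script):

```python
# Rebuild the AdamWeightDecay optimizer and mixed_float16 policy described above.
import tensorflow as tf
from transformers import AdamWeightDecay

tf.keras.mixed_precision.set_global_policy("mixed_float16")

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5.6e-05,
    decay_steps=76288,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```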
mitra-mir/setfit-model-Feb11-Misinformation-on-Trudeau
mitra-mir
mpnet
13
4
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 201 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 201, "warmup_steps": 21, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
mitra-mir/setfit-model-Feb11-Misinformation-on-Mandates-Public-Health
mitra-mir
mpnet
13
4
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 201 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 201, "warmup_steps": 21, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
mitra-mir/setfit-model-Feb11-Misinformation-on-Convoy
mitra-mir
mpnet
13
17
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 201 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 201, "warmup_steps": 21, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
mitra-mir/setfit-model-Feb11-Miscellaneous-Misinformation
mitra-mir
mpnet
13
5
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 201 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 201, "warmup_steps": 21, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
mitra-mir/setfit-model-Feb11-Misinformation-on-Numbers-attendance-support-etc
mitra-mir
mpnet
13
5
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 201 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 201, "warmup_steps": 21, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
harryhoch/ppo-LunarLander-v2-20230211
harryhoch
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
NaLto/ObraMaximaTests
NaLto
null
2
0
null
0
null
false
false
false
null
null
null
null
1
0
1
0
0
0
0
[]
false
false
true
4,907
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
sohm/a2c-PandaReachDense-v2-v2
sohm
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
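Until the TODO above is filled in, a hedged sketch of loading this checkpoint — the filename `a2c-PandaReachDense-v2.zip` is an assumption, and `panda_gym` must be installed to register the environment:

```python
# Hedged sketch (checkpoint filename is hypothetical; check the repo's files).
import gym
import panda_gym  # noqa: F401  -- registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="sohm/a2c-PandaReachDense-v2-v2",
    filename="a2c-PandaReachDense-v2.zip",  # hypothetical filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(100):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```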
pittawat/a2c-PandaReachDense-v2-v3
pittawat
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Anis12/ppo-LunarLander-v2
Anis12
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Nyaaneet/donut-base-ru
Nyaaneet
vision-encoder-decoder
11
0
transformers
0
null
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
908
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-ru This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
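The card does not yet show usage; a hedged sketch for a fine-tuned Donut checkpoint follows. Donut generates conditioned on the task prompt it was trained with, so the `"<s>"` placeholder below is an assumption — replace it with this model's actual task token:

```python
# Hedged usage sketch for a fine-tuned Donut (VisionEncoderDecoder) checkpoint.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Nyaaneet/donut-base-ru")
model = VisionEncoderDecoderModel.from_pretrained("Nyaaneet/donut-base-ru")

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # assumption: the real task token is model-specific
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids,
                         max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```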
AntiSquid/2023-ppo-LunarLander-v2
AntiSquid
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
mingdinghan/ppo-LunarLander-v2
mingdinghan
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
mitra-mir/setfit-model-Feb11-Misinformation-on-Global-Support
mitra-mir
mpnet
13
4
sentence-transformers
0
sentence-similarity
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,138
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 201 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 201, "warmup_steps": 21, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
sohm/a2c-PandaReachDense-v2-v3
sohm
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
huggingtweets/asankhaya
huggingtweets
gpt2
11
0
transformers
0
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['huggingtweets']
false
true
true
3,346
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/718660433834434560/QgG0kEz3_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Asankhaya Sharma</div> <div style="text-align: center; font-size: 14px;">@asankhaya</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Asankhaya Sharma. | Data | Asankhaya Sharma | | --- | --- | | Tweets downloaded | 3176 | | Retweets | 1061 | | Short tweets | 21 | | Tweets kept | 2094 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jqhfrxfq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @asankhaya's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ag0308me) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ag0308me/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/asankhaya') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
0RisingStar0/HighRiseMixV2
0RisingStar0
null
9
0
diffusers
6
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
true
true
2,003
<center><b>HighRiseMixV2.5</b></center> <p align="center"><img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00733-2938506110-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo))%2C%20(gradient%20pink%20eye%2C%20black%20hair%2C%20short%20hair%2C%20school%20uniform%2C%20mic.png"> <img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00729-221520444-(masterpiece%2C%20best%20quality%2C%20excellent%20quality)%2C%20((1girl%2C%20solo))%2C%20(gradient%20pink%20eye%2C%20black%20hair%2C%20short%20hair%2C%20school%20uniform%2C%20mic.png"></p> <center><b>HighRiseMixV2</b></center> <p align="center"><img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00016-3185527639-(masterpiece%2C%20excellent%20quality%2C%20high%20quality)%2C%20(1girl%2C%20solo%2C%20cowboy%20shot)%2C%20looking%20at%20viewer%2C%20sky%2C%20city%2C%20skyscrapers%2C%20pavement%2C.png"> <img src="https://huggingface.co/0RisingStar0/HighRiseMixV2/resolve/main/00021-3185527644-(masterpiece%2C%20excellent%20quality%2C%20high%20quality)%2C%20(1girl%2C%20solo%2C%20cowboy%20shot)%2C%20looking%20at%20viewer%2C%20sky%2C%20city%2C%20skyscrapers%2C%20pavement%2C.png"></p> U-Net mixed model <b>specialized for city and skyscraper backgrounds.</b> <b>FP16 Pruned version</b> (No EMA). (Quality changes may occur in very small details of building textures.) <b>V2 Update Log : </b> Added models : AikimixPv1.0, Counterfeit V2.0, pastelmix-better-vae Adjusted character style (cuter, more anime style) <b>V2.5 Update Log : </b> Added models : AikimixCv1.5 Just some very small changes to textures, adjusted to my taste. It doesn't matter which one you use. There are pros and cons between V2 and V2.5, so just use whichever you want. <b>Recommended prompts : </b> (masterpiece, best quality, excellent quality), ((1girl, solo)), sky, city, (skyscrapers), trees, pavement, lens flare EasyNegative, moss, phone, man, pedestrians, extras, border, outside border, white border (EasyNegative is a negative embedding : https://huggingface.co/datasets/gsdf/EasyNegative) <b>Recommended settings : </b> Sampler : DPM++ 2M Karras OR DPM++ SDE Karras Sampling steps : 25 ~ 30 Resolution : 512x768 OR 768x512 CFG Scale : 9 <b> Upscale is a must-do!! </b> Otherwise, you won't get the intended results. Upscaler : Latent (nearest) Hires steps : 0 Denoise : 0.6 Upscale 2x <b>Recommended VAEs : </b> kl-f8-anime2 orangemix.vae.pt <b> Mixed models : </b> AbyssOrangeMix2_NSFW, AnythingV4.5, AikimiXPv1.0, BasilMixFixed, Counterfeit V2.0, CounterfeitV2.5, EerieOrangeMix2, pastelmix-better-vae, PowercolorV2 (Thanks to everyone who made the above models!) This is my first mixed model uploaded to a public site, so feel free to give feedback; I'll try to work with it.
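For diffusers users, a minimal sketch of the recommended settings above (DPM++ 2M Karras, ~28 steps, CFG 9, 512x768). It assumes this repo loads as a diffusers pipeline, and it skips the EasyNegative embedding and the latent upscale pass, which are webui features:

```python
# Hedged sketch of the card's recommended settings in diffusers.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "0RisingStar0/HighRiseMixV2", torch_dtype=torch.float16
).to("cuda")
# DPM++ 2M Karras equivalent
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

prompt = ("(masterpiece, best quality, excellent quality), ((1girl, solo)), "
          "sky, city, (skyscrapers), trees, pavement, lens flare")
negative = "moss, phone, man, pedestrians, extras, border, outside border, white border"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=28,
             guidance_scale=9, width=512, height=768).images[0]
image.save("highrise.png")
```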
c-q/dqn-SpaceInvadersNoFrameskip-v4
c-q
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
2,203
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga c-q -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga c-q -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga c-q ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_rte_192
gokuls
distilbert
17
0
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,944
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_logit_kd_data_aug_rte_192 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.5485 - Accuracy: 0.5199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.348 | 1.0 | 568 | 0.5499 | 0.4874 | | 0.2888 | 2.0 | 1136 | 0.5640 | 0.4982 | | 0.2849 | 3.0 | 1704 | 0.5618 | 0.5199 | | 0.2833 | 4.0 | 2272 | 0.5618 | 0.5018 | | 0.2823 | 5.0 | 2840 | 0.5610 | 0.5090 | | 0.2816 | 6.0 | 3408 | 0.5485 | 0.5199 | | 0.281 | 7.0 | 3976 | 0.5527 | 0.5126 | | 0.2805 | 8.0 | 4544 | 0.5578 | 0.5054 | | 0.2798 | 9.0 | 5112 | 0.5575 | 0.5343 | | 0.2796 | 10.0 | 5680 | 0.5533 | 0.5199 | | 0.2793 | 11.0 | 6248 | 0.5534 | 0.5090 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
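Since the card leaves usage open, a hedged inference sketch follows; RTE is a sentence-pair task, and the label names returned depend on this checkpoint's exported config:

```python
# Hedged sketch: feed the classifier a premise/hypothesis pair.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_rte_192",
)
print(clf({"text": "A man is playing a guitar.",
           "text_pair": "A person is playing an instrument."}))
```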
SebastianS/dqn-SpaceInvadersNoFrameskip-v4-100000_n_steps
SebastianS
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
2,222
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SebastianS -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SebastianS -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SebastianS ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
lmqg/flan-t5-base-squad-ae
lmqg
t5
13
6
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['en']
['lmqg/qg_squad']
null
0
0
0
0
0
0
0
['answer extraction']
true
true
true
4,365
# Model Card of `lmqg/flan-t5-base-squad-ae` This model is fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) for answer extraction on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/flan-t5-base-squad-ae") # model prediction answers = model.generate_a("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/flan-t5-base-squad-ae") output = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.") ``` ## Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-base-squad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:---------------------------------------------------------------| | AnswerExactMatch | 58.16 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | AnswerF1Score | 69.41 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | BERTScore | 91.56 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 56.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 52.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 48.02 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 44.15 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 43.3 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 81.79 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 68.88 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: ['ae'] - model: google/flan-t5-base - max_length: 512 - max_length_output: 32 - epoch: 8 - batch: 16 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning 
config file](https://huggingface.co/lmqg/flan-t5-base-squad-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
gatardochi/a2c-AntBulletEnv-v0
gatardochi
null
13
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['AntBulletEnv-v0', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
352
# **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
euphoricpenguin22/3DVaporwave
euphoricpenguin22
null
6
0
null
0
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
550
# 3DVaporwave A Dreambooth model based on Stable Diffusion 1.5. The keyword for the model is `threedvaporstyle`, which should be sufficient for most generations. Semantically, it can be helpful to treat the keyword as a style descriptor. I also find that using descriptions to indicate that the image is a render can increase the likelihood that it will generate in the style that you want. ![](https://huggingface.co/euphoricpenguin22/3DVaporwave/resolve/main/Sphere.png) ![](https://huggingface.co/euphoricpenguin22/3DVaporwave/resolve/main/Window.png)
hectorjelly/Ren_and_Stimpy
hectorjelly
null
23
9
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
844
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: hectorjelly/Ren_and_Stimpy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
pfunk/CartPole-v1-DQN_baseline-seed1
pfunk
null
11
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CartPole-v1', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
1,777
# (CleanRL) **DQN** Agent Playing **CartPole-v1** This is a trained model of a DQN agent playing CartPole-v1. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQN_baseline.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQN_baseline]" python -m cleanrl_utils.enjoy --exp-name DQN_baseline --env-id CartPole-v1 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline-seed1/raw/main/dqn.py curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/CartPole-v1-DQN_baseline-seed1/raw/main/poetry.lock poetry install --all-extras python dqn.py --exp-name DQN_baseline --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id CartPole-v1 --seed 1 --total-timesteps 100000 ``` # Hyperparameters ```python {'batch_size': 128, 'buffer_size': 10000, 'capture_video': False, 'cuda': True, 'end_e': 0.05, 'env_id': 'CartPole-v1', 'exp_name': 'DQN_baseline', 'exploration_fraction': 0.5, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.00025, 'learning_starts': 10000, 'save_model': True, 'seed': 1, 'start_e': 1, 'target_network_frequency': 500, 'tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 100000, 'track': True, 'train_frequency': 10, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
amoselberg/Reinforce-cartpole
amoselberg
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
286
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
gatardochi/a2c-PandaReachDense-v2
gatardochi
null
13
0
stable-baselines3
1
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
grullborg/kamiya_yuuStyle
grullborg
null
3
0
null
0
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'text-to-image', 'lora']
false
true
true
1,699
# Kamiya Yuu Style LoRA ## Usage To use this LoRA, download the file and drop it into the "\stable-diffusion-webui\models\Lora" folder. To use it in a prompt, please refer to the extra networks panel in your Automatic1111 webui. I highly recommend using it at around 0.8 strength for the best results. If you'd like to support the amazing artist on whose work this LoRA was trained, I'd highly recommend you check out [Kamiya Yuu](https://twitter.com/yuukamiya68?lang=en). Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/96fultD.png width=50% height=100%/></td> </tr> <tr> <td><img src=https://i.imgur.com/y66xA99.png width=50% height=100%/></td> </tr> <tr> <td><img src=https://i.imgur.com/btwOjyJ.png width=50% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
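Outside the webui, diffusers can often load this kind of LoRA directly. A hedged sketch, assuming an SD 1.x base and that the repo's weight file parses with diffusers — the weight filename below is hypothetical, so use the actual file from this repo:

```python
# Hedged sketch: applying the LoRA at the recommended ~0.8 strength in diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("grullborg/kamiya_yuuStyle",
                       weight_name="kamiya_yuuStyle.safetensors")  # hypothetical name

image = pipe("1girl, fantasy armor, detailed background",
             num_inference_steps=28,
             cross_attention_kwargs={"scale": 0.8}).images[0]  # ~0.8 strength
image.save("kamiya_style.png")
```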
mili7522/Pixelcopter-PLE-v0
mili7522
null
6
0
null
1
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
300
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
vishalghor/t5-small-finetuned-wikisql-sql-nl-nl-sql
vishalghor
t5
9
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,533
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-wikisql-sql-nl-nl-sql This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2194 - Bleu: 40.1315 - Gen Len: 16.7069 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.2713 | 1.0 | 8097 | 0.2303 | 39.3173 | 16.7176 | | 0.2549 | 2.0 | 16194 | 0.2194 | 40.1315 | 16.7069 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
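Since the card gives no usage snippet, a hedged inference sketch follows. The exact task prefix this checkpoint expects is not stated on the card, so the `"translate to SQL:"` prefix below is a guess based on the repo name:

```python
# Hedged sketch: NL-to-SQL generation with a fine-tuned T5 checkpoint.
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="vishalghor/t5-small-finetuned-wikisql-sql-nl-nl-sql",
)
print(translator("translate to SQL: How many heads of the departments "
                 "are older than 56?", max_length=64))
```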
amoselberg/Reinforce-copter
amoselberg
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
300
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
grullborg/syrohStyle
grullborg
null
3
0
null
1
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'text-to-image', 'lora']
false
true
true
1,694
# Syroh Style LoRA ## Usage To use this LoRA, download the file and drop it into the "\stable-diffusion-webui\models\Lora" folder. To use it in a prompt, please refer to the extra networks panel in your Automatic1111 webui. I highly recommend using it at around 0.4 to 0.6 strength for the best results. If you'd like to support the amazing artist on whose work this LoRA was trained, I'd highly recommend you check out [Syroh](https://www.pixiv.net/en/users/323340). Have fun :) ## Example Pictures <table> <tr> <td><img src=https://i.imgur.com/2aiatls.png width=50% height=100%/></td> </tr> <tr> <td><img src=https://i.imgur.com/HWMhTUt.png width=50% height=100%/></td> </tr> <tr> <td><img src=https://i.imgur.com/hBelYEF.png width=50% height=100%/></td> </tr> </table> ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
UchihaMadara/model1-thesis-4
UchihaMadara
bert
12
5
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,700
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model1-thesis-4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1362 - Precision: 0.4257 - Recall: 0.4678 - F1: 0.4458 - Accuracy: 0.6453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 45 | 1.1491 | 0.2860 | 0.4992 | 0.3637 | 0.5491 | | No log | 2.0 | 90 | 1.0264 | 0.3661 | 0.4334 | 0.3969 | 0.6192 | | No log | 3.0 | 135 | 1.0848 | 0.3885 | 0.4455 | 0.4150 | 0.6284 | | No log | 4.0 | 180 | 1.1257 | 0.4100 | 0.4896 | 0.4462 | 0.6408 | | No log | 5.0 | 225 | 1.1362 | 0.4257 | 0.4678 | 0.4458 | 0.6453 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
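Since usage is not documented on the card, a hedged inference sketch follows; the label set of this token classifier is not stated, so the entity names in the output depend on the exported config:

```python
# Hedged sketch: token classification with grouped entity spans.
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="UchihaMadara/model1-thesis-4",
                  aggregation_strategy="simple")
print(tagger("The battery life is great but the screen is too dim."))
```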
SebastianS/dqn-SpaceInvadersNoFrameskip-v4
SebastianS
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
2,223
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SebastianS -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SebastianS -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SebastianS ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```