| Column | Dtype | Values |
|---|---|---|
| repo_id | string | lengths 4-122 |
| author | string | lengths 2-38 |
| model_type | string | lengths 2-33 |
| files_per_repo | int64 | 2-39k |
| downloads_30d | int64 | 0-33.7M |
| library | string | lengths 2-37 |
| likes | int64 | 0-4.87k |
| pipeline | string | lengths 5-30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2-33 |
| languages | string | lengths 2-1.63k |
| datasets | string | lengths 2-2.58k |
| co2 | string | lengths 6-258 |
| prs_count | int64 | 0-125 |
| prs_open | int64 | 0-120 |
| prs_merged | int64 | 0-46 |
| prs_closed | int64 | 0-34 |
| discussions_count | int64 | 0-218 |
| discussions_open | int64 | 0-148 |
| discussions_closed | int64 | 0-70 |
| tags | string | lengths 2-513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201-598k |
| readme | string | lengths 0-598k |
MultiversexPeeps/duskfall-s-artificial-realism
MultiversexPeeps
null
69
21
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
7,010
### Duskfall's Artificial Realism Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Information on this model will be here: https://civitai.com/user/duskfallcrew If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to monthly support the EARTH & DUSK media projects and not just AI: https://www.patreon.com/earthndusk Sample pictures of: reaisp (use that on your prompt) ![reaisp 0](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%281%29.jpg)![reaisp 1](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%282%29.jpg)![reaisp 2](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%283%29.jpg)![reaisp 3](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%284%29.jpg)![reaisp 4](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%285%29.jpg)![reaisp 5](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%286%29.jpg)![reaisp 6](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%287%29.jpg)![reaisp 7](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%288%29.jpg)![reaisp 8](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%289%29.jpg)![reaisp 9](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2810%29.jpg)![reaisp 10](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2811%29.jpg)![reaisp 11](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2812%29.jpg)![reaisp 12](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2813%29.jpg)![reaisp 13](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2814%29.jpg)![reaisp 14](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2815%29.jpg)![reaisp 15](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2816%29.jpg)![reaisp 16](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2817%29.jpg)![reaisp 17](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2818%29.jpg)![reaisp 18](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2819%29.jpg)![reaisp 19](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2820%29.jpg)![reaisp 20](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2821%29.jpg)![reaisp 
21](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2822%29.jpg)![reaisp 22](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2823%29.jpg)![reaisp 23](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2824%29.jpg)![reaisp 24](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2825%29.jpg)![reaisp 25](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2826%29.jpg)![reaisp 26](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2827%29.jpg)![reaisp 27](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2828%29.jpg)![reaisp 28](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2829%29.jpg)![reaisp 29](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2830%29.jpg)![reaisp 30](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2831%29.jpg)![reaisp 31](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2832%29.jpg)![reaisp 32](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2833%29.jpg)![reaisp 33](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2834%29.jpg)![reaisp 34](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2835%29.jpg)![reaisp 35](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2836%29.jpg)![reaisp 36](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2837%29.jpg)![reaisp 37](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2838%29.jpg)![reaisp 38](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2839%29.jpg)![reaisp 39](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2840%29.jpg)![reaisp 40](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2841%29.jpg)![reaisp 41](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2842%29.jpg)![reaisp 42](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2843%29.jpg)![reaisp 43](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2844%29.jpg)![reaisp 44](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2845%29.jpg)![reaisp 45](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2846%29.jpg)![reaisp 46](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2847%29.jpg)![reaisp 47](https://huggingface.co/Duskfallcrew/duskfall-s-artificial-realism/resolve/main/concept_images/reaisp_%2848%29.jpg)
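The card points to a Colab notebook for inference; as a minimal local sketch, the concept can also be loaded directly with `diffusers`. The repo id and the `reaisp` trigger token come from the card above; the prompt, step count, guidance scale, and fp16/CUDA settings are illustrative assumptions, not the author's documented settings.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id and trigger token are from the card; everything else is illustrative.
pipe = StableDiffusionPipeline.from_pretrained(
    "MultiversexPeeps/duskfall-s-artificial-realism",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a portrait photo, reaisp"  # include the concept token in the prompt
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("reaisp_sample.png")
```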
Kilgori/inisanium-model
Kilgori
null
35
9
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
3,225
### Inisanium-Model Dreambooth model trained by Kilgori with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) !!!IMPORTANT!!! Really NSFW Model use at your own risk! This model can generate anything from furry futanari nsfw to kinda realistic human women. This model uses tags and you will be able to see the captions used if you download the captions zip. Its an unstable model. Clip skip changes the results substantially so use as you wish. Hard to use, bigger prompts = better images usually. It doesn't do sfw furries in my experience. DM me on discord if you want access to the images used to train this model. Kilgorio#6392 😁 Sample pictures of this concept: ![0](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00005-3643574537-girl,_realistic.png) ![1](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00011-3643574537-Rain,_day,_foggy,_horror.png) ![2](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00020-3643574537-(masterpiece_1,2),_best_quality,_masterpiece,_highres,_original,_extremely_detailed_wallpaper,_looking_at_viewer,_(sitting_1.4),.png) ![3](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00018-3643574537-Woman,_street,_city,_market,_happy.png) ![4](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00008-3643574537-Mountain,_night,_Lights.png) ![5](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00021-3643574537-((masterpiece)),_best_quality,_perfect_anatomy,_(1girl,_solo_focus_1.4),_pov,_looking_at_viewer,_flower_trim,(perspective,_sidew.png) ![6](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00003-3643574537-girl,_realistic.png) ![7](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00012-3643574537-Rain,_day,_foggy.png) ![8](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00009-3643574537-Mountain,_night,_Lights.png) ![9](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00023-3643574537-((masterpiece)),_best_quality,_perfect_anatomy,_(1girl,_solo_focus_1.4),_pov,_looking_at_viewer,_flower_trim,(perspective,_sidew.png) ![10](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00001-3643574537-girl.png) ![11](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00030-3643574537-girl.png) ![12](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00013-3643574537-Rain,_day,_foggy,_horror.png) ![13](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00017-3643574537-Woman,_street,_city,_market,_happy.png) ![14](https://huggingface.co/Kilgori/inisanium-model/resolve/main/sample_images/00027-3643574537-girl.png)
pfunk/Pong-v4-DQPN_p30_e0.10-seed1
pfunk
null
11
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
1,989
# (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p30_e0.10.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p30_e0.10]" python -m cleanrl_utils.enjoy --exp-name DQPN_p30_e0.10 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.10-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.10-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.10-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p30_e0.10 --start-policy-f 30000 --end-policy-f 1000 --evaluation-fraction 0.10 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 1000, 'env_id': 'Pong-v4', 'evaluation_fraction': 0.1, 'exp_name': 'DQPN_p30_e0.10', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 30000, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
Serena47/doodling-ai2
Serena47
null
19
138
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
422
### doodling-ai2 Dreambooth model trained by Serena47 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
Fred99774/isalau
Fred99774
null
19
0
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
417
### isalau Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
thanat/distilbert-base-uncased-finetuned-imdb
thanat
distilbert
8
3
transformers
0
fill-mask
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,586
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # thanat/distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [imdb](https://huggingface.co/datasets/imdb) dataset. It achieves the following results on the evaluation set: - Train Loss: 2.6586 - Validation Loss: 2.5175 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.6586 | 2.5175 | 0 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
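The card reports only Keras training metadata and no usage snippet; a minimal fill-mask sketch with the TensorFlow weights (the model id is from the card, the sample sentence is an assumption) might look like this:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

model_id = "thanat/distilbert-base-uncased-finetuned-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

text = "This movie was an absolute [MASK]."
inputs = tokenizer(text, return_tensors="tf")
logits = model(**inputs).logits

# Locate the [MASK] position and show the top-5 predicted tokens.
mask_index = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0][0])
top_tokens = tf.math.top_k(logits[0, mask_index], k=5).indices.numpy()
print([tokenizer.decode([t]).strip() for t in top_tokens])
```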
SfinOe/stable-diffusion-v2-1
SfinOe
null
18
15
diffusers
0
text-to-image
false
false
false
openrail++
null
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
12,114
# Stable Diffusion v2-1 Model Card This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion). This `stable-diffusion-2-1` model is fine-tuned from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) (`768-v-ema.ckpt`) with an additional 55k steps on the same dataset (with `punsafe=0.1`), and then fine-tuned for another 155k extra steps with `punsafe=0.98`. - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `v2-1_768-ema-pruned.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.ckpt). - Use it with 🧨 [`diffusers`](#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Use the [🤗 Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. ```bash pip install diffusers transformers accelerate scipy safetensors ``` Running the pipeline (if you don't swap the scheduler it will run with the default DDIM; in this example we are swapping it to DPMSolverMultistepScheduler): ```python import torch from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler model_id = "stabilityai/stable-diffusion-2-1" # Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` **Notes**: - Despite not being a dependency, we highly recommend that you install [xformers](https://github.com/facebookresearch/xformers) for memory-efficient attention (better performance) - If you have low GPU RAM available, make sure to add `pipe.enable_attention_slicing()` after sending the pipeline to `cuda` for less VRAM usage (at the cost of speed) # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NFSW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic. 
**Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints: - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. - `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. - `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://huggingface.co/runwayml/stable-diffusion-inpainting). - `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints: ![pareto](model-variants.jpg) Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. 
## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 200000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq. ## Citation @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
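As a rough back-of-the-envelope check of that figure, assuming roughly 250 W average draw per A100 PCIe 40GB and a grid intensity of about 0.3 kg CO2eq/kWh (both assumptions, not stated in the card): 200,000 h x 0.25 kW x 0.3 kg CO2eq/kWh ≈ 15,000 kg CO2eq, consistent with the reported estimate.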
MultiversexPeeps/duskfalls-artificial-photography
MultiversexPeeps
null
70
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
7,590
### Duskfalls Artificial Photography Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Information on this model will be here: https://civitai.com/user/duskfallcrew If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to monthly support the EARTH & DUSK media projects and not just AI: https://www.patreon.com/earthndusk Data Training Examples: rtrophto1 (use that on your prompt) ![rtrophto1 0](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%281%29.jpg)![rtrophto1 1](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%282%29.jpg)![rtrophto1 2](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%283%29.jpg)![rtrophto1 3](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%284%29.jpg)![rtrophto1 4](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%285%29.jpg)![rtrophto1 5](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%286%29.jpg)![rtrophto1 6](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%287%29.jpg)![rtrophto1 7](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%288%29.jpg)![rtrophto1 8](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%289%29.jpg)![rtrophto1 9](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2810%29.jpg)![rtrophto1 10](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2811%29.jpg)![rtrophto1 11](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2812%29.jpg)![rtrophto1 12](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2813%29.jpg)![rtrophto1 13](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2814%29.jpg)![rtrophto1 14](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2815%29.jpg)![rtrophto1 15](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2816%29.jpg)![rtrophto1 16](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2817%29.jpg)![rtrophto1 17](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2818%29.jpg)![rtrophto1 18](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2819%29.jpg)![rtrophto1 19](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2820%29.jpg)![rtrophto1 
20](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2821%29.jpg)![rtrophto1 21](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2822%29.jpg)![rtrophto1 22](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2823%29.jpg)![rtrophto1 23](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2824%29.jpg)![rtrophto1 24](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2825%29.jpg)![rtrophto1 25](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2826%29.jpg)![rtrophto1 26](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2827%29.jpg)![rtrophto1 27](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2828%29.jpg)![rtrophto1 28](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2829%29.jpg)![rtrophto1 29](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2830%29.jpg)![rtrophto1 30](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2831%29.jpg)![rtrophto1 31](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2832%29.jpg)![rtrophto1 32](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2833%29.jpg)![rtrophto1 33](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2834%29.jpg)![rtrophto1 34](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2835%29.jpg)![rtrophto1 35](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2836%29.jpg)![rtrophto1 36](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2837%29.jpg)![rtrophto1 37](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2838%29.jpg)![rtrophto1 38](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2839%29.jpg)![rtrophto1 39](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2840%29.jpg)![rtrophto1 40](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2841%29.jpg)![rtrophto1 41](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2842%29.jpg)![rtrophto1 42](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2843%29.jpg)![rtrophto1 43](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2844%29.jpg)![rtrophto1 44](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2845%29.jpg)![rtrophto1 45](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2846%29.jpg)![rtrophto1 
46](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2847%29.jpg)![rtrophto1 47](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2848%29.jpg)![rtrophto1 48](https://huggingface.co/Duskfallcrew/duskfalls-artificial-photography/resolve/main/concept_images/rtrophto1_%2849%29.jpg)
pfunk/Pong-v4-DQPN_p30_e0.25-seed1
pfunk
null
11
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
1,990
# (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p30_e0.25.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p30_e0.25]" python -m cleanrl_utils.enjoy --exp-name DQPN_p30_e0.25 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.25-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.25-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.25-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p30_e0.25 --start-policy-f 30000 --end-policy-f 1000 --evaluation-fraction 0.25 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 1000, 'env_id': 'Pong-v4', 'evaluation_fraction': 0.25, 'exp_name': 'DQPN_p30_e0.25', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 30000, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
Mizuiro-sakura/luke-japanese-base-lite-jsquad
Mizuiro-sakura
luke
13
8
transformers
0
question-answering
true
false
false
mit
['ja']
null
null
0
0
0
0
0
0
0
['luke', 'question-answering', 'squad', 'pytorch', 'transformers', 'question answering']
false
true
true
2,698
# This model is luke-japanese-base-lite fine-tuned for question answering This model was created by fine-tuning luke-japanese-base-lite on JSQuAD ( https://github.com/yahoojapan/JGLUE ). You can use it for question-answering (SQuAD-style) tasks. # Model accuracy 'em (exact match)': 0.7582170193606483, 'f1': 0.8761199970544952 # How to use Running the following code lets you solve question-answering tasks with this model. ```python import torch from transformers import MLukeTokenizer, AutoModelForQuestionAnswering tokenizer = MLukeTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-base-lite-jsquad') model = AutoModelForQuestionAnswering.from_pretrained('Mizuiro-sakura/luke-japanese-base-lite-jsquad')  # load the fine-tuned model text = { 'context': '私の名前はEIMIです。好きな食べ物は苺です。 趣味は皆さんと会話することです。', 'question': '好きな食べ物は何ですか' } input_ids = tokenizer.encode(text['question'], text['context'])  # tokenize and convert to input ids output = model(torch.tensor([input_ids]))  # run the fine-tuned model prediction = tokenizer.decode(input_ids[torch.argmax(output.start_logits)-2: torch.argmax(output.end_logits)-1])  # extract the span that answers the question prediction = prediction.replace('</s>', '') print(prediction) ``` # What is LUKE? [1] LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores. LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japanese is the Japanese version of LUKE, a knowledge-enhanced pre-trained Transformer model of words and entities; it treats words and entities as independent tokens and outputs context-aware representations of them. # Acknowledgments I would like to thank Mr. Yamada (@ikuyamada) and Studio Ousia (@StudioOusia), the developers of LUKE. # Citation [1] @inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
dn-gh/TQC-PandaReachDense-v2
dn-gh
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['PandaReachDense-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
358
# **TQC** Agent playing **PandaReachDense-v2** This is a trained model of a **TQC** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
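Since the usage section above is still a TODO, here is a minimal sketch of loading the checkpoint with `huggingface_sb3` and `sb3-contrib` (TQC lives in `sb3_contrib`, not core Stable-Baselines3). The filename inside the repo is a guess; check the repository's file list, and `panda_gym` must be installed to register the environment.

```python
import gym
import panda_gym  # registers PandaReachDense-v2
from huggingface_sb3 import load_from_hub
from sb3_contrib import TQC

# Hypothetical filename: verify the actual .zip name in the repo.
checkpoint = load_from_hub("dn-gh/TQC-PandaReachDense-v2", "TQC-PandaReachDense-v2.zip")
model = TQC.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```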
Josh98/t5-small-finetuned-English-to-BASH
Josh98
t5
15
4
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,985
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-English-to-BASH This model is a fine-tuned version of [kevinum/t5-small-finetuned-English-to-BASH](https://huggingface.co/kevinum/t5-small-finetuned-English-to-BASH) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7624 - Bleu: 15.8119 - Gen Len: 7.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 36 | 2.4759 | 9.4129 | 12.8472 | | No log | 2.0 | 72 | 2.2581 | 14.8612 | 9.7639 | | No log | 3.0 | 108 | 2.0998 | 16.1955 | 8.7222 | | No log | 4.0 | 144 | 1.9945 | 14.576 | 8.4444 | | No log | 5.0 | 180 | 1.9181 | 15.4464 | 8.1806 | | No log | 6.0 | 216 | 1.8639 | 14.7446 | 7.9028 | | No log | 7.0 | 252 | 1.8185 | 14.5825 | 8.0833 | | No log | 8.0 | 288 | 1.7867 | 14.9773 | 7.9444 | | No log | 9.0 | 324 | 1.7679 | 15.8119 | 7.75 | | No log | 10.0 | 360 | 1.7624 | 15.8119 | 7.75 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
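The card does not show an inference snippet; a minimal sketch of generating a BASH command from an English description (model id from the card; the example sentence and generation settings are assumptions, and the exact prompt format used during training is not documented) could be:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Josh98/t5-small-finetuned-English-to-BASH"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative request; the training-time input format is not documented in the card.
inputs = tokenizer("list all files in the current directory", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```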
AtulRR/AI
AtulRR
null
2
0
null
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
4,907
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
Josh98/t5-small-transferLearning-NL2BASH_seqTrain
Josh98
t5
15
8
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,640
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-transferLearning-NL2BASH_seqTrain This model is a fine-tuned version of [kevinum/t5-small-finetuned-English-to-BASH](https://huggingface.co/kevinum/t5-small-finetuned-English-to-BASH) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6524 - Bleu: 48.0701 - Gen Len: 8.9028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 36 | 0.6524 | 48.0701 | 8.9028 | | No log | 2.0 | 72 | 0.6524 | 48.0701 | 8.9028 | | No log | 3.0 | 108 | 0.6524 | 48.0701 | 8.9028 | | No log | 4.0 | 144 | 0.6524 | 48.0701 | 8.9028 | | No log | 5.0 | 180 | 0.6524 | 48.0701 | 8.9028 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
robinsk8a/Reinforce-Pixelcopter-PLE-v0
robinsk8a
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pixelcopter-PLE-v0', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
300
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
MultiversexPeeps/duskfall-s-pink-spider-plushie
MultiversexPeeps
null
21
5
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
867
### Duskfall's Pink Spider Plushie Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Information on this model will be here: https://civitai.com/user/duskfallcrew If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to monthly support the EARTH & DUSK media projects and not just AI: https://www.patreon.com/earthndusk plushiedsk (use that on your prompt)
layoric/ppo-LunarLander-v2
layoric
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
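In place of the TODO above, a minimal sketch of pulling the checkpoint from the Hub and evaluating it (the filename inside the repo is a guess; check the repo's file list):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename: verify the actual .zip name in the repo.
checkpoint = load_from_hub("layoric/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```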
DiegoD616/poca-SoccerTwos
DiegoD616
null
20
335
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
843
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: DiegoD616/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
TkskKurumi/KurumiMix
TkskKurumi
null
23
20
diffusers
1
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,148
# KurumiMix ## Composition ### UNet weights The model weights are interpolated with the same composition in all UNet blocks. |Model|Contribution| |-|-| |[PastelMix](https://huggingface.co/andite/pastel-mix)|40%| |[Counterfeit V2.5](https://huggingface.co/gsdf/Counterfeit-V2.5)|20%| |Counterfeit V2.2|20%| |[EimisAnimeDiffusion](https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v)|10%| |[BasilMix](https://huggingface.co/nuigurumi/basil_mix)|5%| |[AbyssOrangeMix2](https://huggingface.co/WarriorMama777/OrangeMixs)|5%| ### VAE weights PastelMix's VAE is colorful and beautiful, but a bit over-saturated in my view, so a small amount of another VAE is mixed in. |Model|Contribution| |-|-| |[orangemix.vae.pt](https://huggingface.co/WarriorMama777/OrangeMixs)|10%| |[pastel-waifu-diffusion.vae.pt](https://huggingface.co/andite/pastel-mix)|90%| ## Samples ![](https://huggingface.co/TkskKurumi/KurumiMix/resolve/main/gallery/005.jpg) ![](https://huggingface.co/TkskKurumi/KurumiMix/resolve/main/gallery/001.jpg) ![](https://huggingface.co/TkskKurumi/KurumiMix/resolve/main/gallery/003.png) ![](https://huggingface.co/TkskKurumi/KurumiMix/resolve/main/gallery/004.jpg)
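The composition tables above describe a plain weighted interpolation of checkpoints. As a rough sketch of what such a merge looks like (the local file names are hypothetical placeholders, the author's actual merge tooling is not stated in the card, and the ratios follow the UNet table):

```python
import torch

# Hypothetical local checkpoint paths; ratios follow the UNet composition table.
recipe = {
    "pastel-mix.ckpt": 0.40,
    "counterfeit-v2.5.ckpt": 0.20,
    "counterfeit-v2.2.ckpt": 0.20,
    "eimis-anime-diffusion.ckpt": 0.10,
    "basil-mix.ckpt": 0.05,
    "abyssorangemix2.ckpt": 0.05,
}

merged = None
for path, weight in recipe.items():
    sd = torch.load(path, map_location="cpu")
    sd = sd.get("state_dict", sd)  # some checkpoints nest weights under "state_dict"
    if merged is None:
        merged = {k: weight * v.float() for k, v in sd.items()}
    else:
        for k, v in sd.items():
            merged[k] += weight * v.float()

torch.save({"state_dict": merged}, "kurumimix-merged.ckpt")
```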
yuanzheng/carrot-commercial-v2
yuanzheng
null
25
18
diffusers
1
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
1,302
### carrot_commercial_v2 Dreambooth model Sample pictures of this concept: ![0](https://huggingface.co/yuanzheng/carrot-commercial-v2/resolve/main/sample_images/00174-2092912628-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![1](https://huggingface.co/yuanzheng/carrot-commercial-v2/resolve/main/sample_images/00087-1720633401-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![2](https://huggingface.co/yuanzheng/carrot-commercial-v2/resolve/main/sample_images/00088-1720633402-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![3](https://huggingface.co/yuanzheng/carrot-commercial-v2/resolve/main/sample_images/00079-4004019013-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png) ![4](https://huggingface.co/yuanzheng/carrot-commercial-v2/resolve/main/sample_images/00121-1978687305-_pid_sayuri_sake__japanese_sake_on_the_desk_with_assorted_sushi_at_a_fancy_Japanese_restaurant,_cybercinematic_lighting,_studio.png)
shafa/bert-finetuned-squad
shafa
bert
14
7
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
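The card lists only training settings; a minimal sketch of extractive question answering with this checkpoint (model id from the card; the question/context pair is an illustrative assumption):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="shafa/bert-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="bert-finetuned-squad is a version of bert-base-cased fine-tuned on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```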
HaiderAUT/ppo-LunarLander-v2
HaiderAUT
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
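A minimal loading/evaluation sketch under the usual huggingface_sb3 workflow (the checkpoint file name is an assumption; check the repo's file list):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# File name assumed; verify it against the repository contents.
checkpoint = load_from_hub(
    repo_id="HaiderAUT/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```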
Maciel/T5Corrector-base-v1
Maciel
t5
9
14
transformers
0
text2text-generation
true
false
false
apache-2.0
['zh']
null
null
0
0
0
0
0
0
0
['t5', 'text error correction']
false
true
true
1,591
## Overview T5Corrector: a Chinese phonetic and glyph error correction model. The model was trained for text correction on top of mengzi-t5-base using 5M+ sentences; the parallel correction corpus was built by substituting homophones, near-homophones, and visually similar characters, giving 30M+ sentence pairs in total, trained for 45,000 steps. <a href='https://github.com/Macielyoung/T5Corrector'>GitHub project</a> Load the model: ```python # load the tokenizer and model from transformers import T5Tokenizer, T5ForConditionalGeneration pretrained = "Maciel/T5Corrector-base-v1" tokenizer = T5Tokenizer.from_pretrained(pretrained) model = T5ForConditionalGeneration.from_pretrained(pretrained) ``` Run inference with the model: ```python import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) def correct(text, max_length): model_inputs = tokenizer(text, max_length=max_length, truncation=True, return_tensors="pt").to(device) output = model.generate(**model_inputs, num_beams=5, no_repeat_ngram_size=4, do_sample=True, early_stopping=True, max_length=max_length, return_dict_in_generate=True, output_scores=True) pred_output = tokenizer.batch_decode(output.sequences, skip_special_tokens=True)[0] return pred_output text = "听到这个消息,心情真的蓝瘦" correction = correct(text, max_length=32) print(correction) ``` ### Examples ``` Example 1: input: 听到这个消息,心情真的蓝瘦 output: 听到这个消息,心情真的难受 Example 2: input: 脑子有点胡涂了,这道题冥冥学过还没有做出来 output: 脑子有点糊涂了,这道题明明学过还没有做出来 Example 3: input: 今天天气不太好,我的心情也不是很偷快 output: 今天天气不太好,我的心情也不是很愉快 ```
jason1i/poca-SoccerTwosv2
jason1i
null
41
329
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
843
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: jason1i/poca-SoccerTwosv2 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
aichina/cy0208
aichina
null
27
7
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
1,066
### cy0208 Dreambooth model trained by aichina with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: cy0208 (use that on your prompt) ![cy0208 0](https://huggingface.co/aichina/cy0208/resolve/main/concept_images/cy0208_%281%29.jpg)![cy0208 1](https://huggingface.co/aichina/cy0208/resolve/main/concept_images/cy0208_%282%29.jpg)![cy0208 2](https://huggingface.co/aichina/cy0208/resolve/main/concept_images/cy0208_%283%29.jpg)![cy0208 3](https://huggingface.co/aichina/cy0208/resolve/main/concept_images/cy0208_%284%29.jpg)![cy0208 4](https://huggingface.co/aichina/cy0208/resolve/main/concept_images/cy0208_%285%29.jpg)![cy0208 5](https://huggingface.co/aichina/cy0208/resolve/main/concept_images/cy0208_%286%29.jpg)
TracyWang/MAUD_KWM_AWS_Roberta-base
TracyWang
roberta
13
17
transformers
0
text-classification
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
['legal']
false
true
true
327
Dataset and Training Script offered by the Atticus Project MAUD. Trained on AWS Sagemaker with 4 A10 GPUs. Model owned by King & Wood Mallesons Law Firm AI LAB. Project Member: - Wuyue(Tracy) Wang @ King & Wood Mallesons - Anbei Zhao @ Amazon Web Services - Xiaodong Guo @ Amazon Web Services - Xiuyu Wu @ Peking University Reference: ``` @misc{wang2023maud, title={MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding}, author={Steven H. Wang and Antoine Scardigli and Leonard Tang and Wei Chen and Dimitry Levkin and Anya Chen and Spencer Ball and Thomas Woodside and Oliver Zhang and Dan Hendrycks}, year={2023}, eprint={2301.00876}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
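A minimal usage sketch (not part of the original card; the input clause is illustrative and the label set is whatever the fine-tuned head was trained with):

```python
from transformers import pipeline

# Sequence classification with the fine-tuned RoBERTa checkpoint.
classifier = pipeline("text-classification", model="TracyWang/MAUD_KWM_AWS_Roberta-base")

clause = "The Company shall not solicit alternative acquisition proposals prior to the Closing Date."
print(classifier(clause))
```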
MultiversexPeeps/duskfall-s-general-digital-art-model
MultiversexPeeps
null
21
6
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
1
0
1
0
0
0
0
['text-to-image', 'art', 'digital art', 'stable diffusion']
false
true
true
1,224
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/MultiversexPeeps/duskfall-s-general-digital-art-model) ### Duskfall's General Digital Art Model Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Information on this model will be here: https://civitai.com/user/duskfallcrew If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to monthly support the EARTH & DUSK media projects and not just AI: https://www.patreon.com/earthndusk gendigi (use that on your prompt)
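A minimal inference sketch with diffusers (the prompt is illustrative; the card only specifies that the concept token `gendigi` should appear in it):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MultiversexPeeps/duskfall-s-general-digital-art-model",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "gendigi, portrait of a woman in a neon-lit city, digital art"
image = pipe(prompt).images[0]
image.save("gendigi_sample.png")
```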
bryanhpchiang/flan-t5-base-samsum
bryanhpchiang
t5
7
9
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['samsum']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
913
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-samsum This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.0.dev20221005+cu117 - Datasets 2.5.2 - Tokenizers 0.13.2
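A minimal inference sketch for dialogue summarization (not part of the generated card; the dialogue is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bryanhpchiang/flan-t5-base-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```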
HaiderAUT/trpo-LunarLander-v2
HaiderAUT
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
352
# **TRPO** Agent playing **LunarLander-v2** This is a trained model of a **TRPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
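As with the PPO card above, a loading sketch is possible once the checkpoint name is known; note that TRPO ships in `sb3-contrib` rather than the core package (the file name below is an assumption):

```python
import gym
from huggingface_sb3 import load_from_hub
from sb3_contrib import TRPO  # TRPO lives in sb3-contrib

checkpoint = load_from_hub(
    repo_id="HaiderAUT/trpo-LunarLander-v2",
    filename="trpo-LunarLander-v2.zip",  # assumed file name
)
model = TRPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```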
pfunk/Pong-v4-DQPN_p50_pt0.1-seed1
pfunk
null
11
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
1,990
# (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p50_pt0.1.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p50_pt0.1]" python -m cleanrl_utils.enjoy --exp-name DQPN_p50_pt0.1 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_pt0.1-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_pt0.1-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p50_pt0.1-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p50_pt0.1 --start-policy-f 50000 --end-policy-f 50000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 50000, 'env_id': 'Pong-v4', 'evaluation_fraction': 1.0, 'exp_name': 'DQPN_p50_pt0.1', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 0.1, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 50000, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
pupubear/pupu_girl_anime_attempt
pupubear
null
24
53
diffusers
1
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
1,403
### PuPu_girl_ver2 Dreambooth model trained by pupubear with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/pupubear/pupu-girl-ver2/resolve/main/sample_images/00003-176733799-Ultra-res_,NSFW,_1girl,_cum,_full_body,,_best_quality,highly_detailed,masterpiece,ultra-detailed,illustration.png) ![1](https://huggingface.co/pupubear/pupu-girl-ver2/resolve/main/sample_images/grid-0000.png) ![2](https://huggingface.co/pupubear/pupu-girl-ver2/resolve/main/sample_images/00001-176733797-Ultra-res_,NSFW,_1girl,_cum,_full_body,,_best_quality,highly_detailed,masterpiece,ultra-detailed,illustration.png) ![3](https://huggingface.co/pupubear/pupu-girl-ver2/resolve/main/sample_images/00002-176733798-Ultra-res_,NSFW,_1girl,_cum,_full_body,,_best_quality,highly_detailed,masterpiece,ultra-detailed,illustration.png) ![4](https://huggingface.co/pupubear/pupu-girl-ver2/resolve/main/sample_images/00000-176733796-Ultra-res_,NSFW,_1girl,_cum,_full_body,,_best_quality,highly_detailed,masterpiece,ultra-detailed,illustration.png)
petergoldstein/ppo-Huggy
petergoldstein
null
32
8
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
false
true
true
825
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: petergoldstein/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
pfunk/Pong-v4-DQPN_p100_pt0.1_tt0.1-seed1
pfunk
null
11
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
2,050
# (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p100_pt0.1_tt0.1.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p100_pt0.1_tt0.1]" python -m cleanrl_utils.enjoy --exp-name DQPN_p100_pt0.1_tt0.1 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1_tt0.1-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1_tt0.1-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p100_pt0.1_tt0.1-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p100_pt0.1_tt0.1 --start-policy-f 100000 --end-policy-f 100000 --evaluation-fraction 1.00 --target-tau 0.1 --policy-tau 0.1 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 100000, 'env_id': 'Pong-v4', 'evaluation_fraction': 1.0, 'exp_name': 'DQPN_p100_pt0.1_tt0.1', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 0.1, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 100000, 'target_network_frequency': 1000, 'target_tau': 0.1, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
layoric/ppo-Huggy
layoric
null
32
23
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Huggy']
false
true
true
818
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: layoric/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Fred99774/kalssa
Fred99774
null
19
12
diffusers
1
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
417
### kalssa Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
Jaeung/xlm-roberta-base-finetuned-panx-de
Jaeung
xlm-roberta
18
0
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,326
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1358 - F1: 0.8495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3842 | 1.0 | 99 | 0.1687 | 0.8120 | | 0.1526 | 2.0 | 198 | 0.1447 | 0.8355 | | 0.1139 | 3.0 | 297 | 0.1358 | 0.8495 | ### Framework versions - Transformers 4.26.0 - Pytorch 2.0.0.dev20230129 - Datasets 2.9.0 - Tokenizers 0.13.2
SyedAbdul/PPO-LunarLander-V2
SyedAbdul
null
12
1
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
nc33/my_awesome_wnut_model
nc33
roberta
16
0
transformers
0
token-classification
true
false
false
mit
null
['wnut_17']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,455
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [facebook/muppet-roberta-base](https://huggingface.co/facebook/muppet-roberta-base) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2298 - Precision: 0.5607 - Recall: 0.5097 - F1: 0.5340 - Accuracy: 0.9501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2331 | 0.5333 | 0.4310 | 0.4767 | 0.9459 | | No log | 2.0 | 426 | 0.2298 | 0.5607 | 0.5097 | 0.5340 | 0.9501 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
catlord/bert-finetuned-squad
catlord
bert
12
7
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
luhui/marian-finetuned-kde4-en-to-fr
luhui
marian
15
2
transformers
0
translation
true
false
false
apache-2.0
null
['kde4']
null
0
0
0
0
0
0
0
['translation', 'generated_from_trainer']
true
true
true
1,075
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 1.1453 - Bleu: 41.5822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
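A minimal inference sketch (not part of the generated card; the input sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="luhui/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```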
courtneypham/bert-finetuned-squad
courtneypham
bert
12
7
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
joe138138/bert-finetuned-squad
joe138138
bert
20
9
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
skim945/bert-finetuned-squad
skim945
bert
12
8
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
alvintu/bert-finetuned-squad
alvintu
bert
12
8
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
freedomtw/stable_diffusion_tflite
freedomtw
null
13
0
null
0
null
false
false
false
openrail
null
null
null
0
0
0
0
0
0
0
['tflite', 'stable_diffusion']
false
true
true
1,045
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> Stable Diffusion TFLite models # Model Details converted from [Keras CV Stable Diffusion](https://github.com/keras-team/keras-cv/tree/master/keras_cv/models/stable_diffusion) ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s) (NLP):** English - **License:** The CreativeML OpenRAIL M license is an Open RAIL M license, adapted from the work that BigScience and the RAIL Initiative are jointly carrying in the area of responsible AI licensing. See also the article about the BLOOM Open RAIL license on which our license is based. ## Model Sources <!-- Provide the basic links for the model. --> - **conversion script:** https://github.com/freedomtan/keras_cv_stable_diffusion_to_tflite - **converted from:** https://github.com/keras-team/keras-cv/tree/master/keras_cv/models/stable_diffusion
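A minimal loading sketch for one of the converted models (the file name is an assumption; check the repo's file list for the actual *.tflite artifacts and the conversion script for the expected input format):

```python
import numpy as np
import tensorflow as tf

# File name assumed -- replace with an actual .tflite file from this repo.
interpreter = tf.lite.Interpreter(model_path="text_encoder.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run a dummy forward pass just to confirm shapes and dtypes.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)
```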
puggykk/distilbert-base-uncased-finetuned-cola
puggykk
distilbert
35
2
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,571
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8057 - Matthews Correlation: 0.5393 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5227 | 1.0 | 535 | 0.5274 | 0.4067 | | 0.349 | 2.0 | 1070 | 0.4952 | 0.5022 | | 0.2385 | 3.0 | 1605 | 0.5524 | 0.5351 | | 0.184 | 4.0 | 2140 | 0.7586 | 0.5222 | | 0.1335 | 5.0 | 2675 | 0.8057 | 0.5393 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Liyannnn/bert-finetuned-squad
Liyannnn
bert
16
10
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
bbbbearczx/bert-finetuned-squad
bbbbearczx
bert
14
10
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
sayakpaul/git-base-pokemon
sayakpaul
git
15
16
transformers
0
image-to-text
true
false
false
mit
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer', 'image-to-text']
true
true
true
2,000
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0350 - Wer Score: 2.2148 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 7.3616 | 4.17 | 50 | 4.5895 | 21.4258 | | 2.4353 | 8.33 | 100 | 0.4961 | 9.9322 | | 0.1527 | 12.5 | 150 | 0.0303 | 1.3197 | | 0.0192 | 16.67 | 200 | 0.0260 | 1.3299 | | 0.007 | 20.83 | 250 | 0.0297 | 2.2059 | | 0.0027 | 25.0 | 300 | 0.0321 | 2.4795 | | 0.0017 | 29.17 | 350 | 0.0334 | 2.4488 | | 0.0014 | 33.33 | 400 | 0.0340 | 2.1355 | | 0.0013 | 37.5 | 450 | 0.0345 | 2.3619 | | 0.0012 | 41.67 | 500 | 0.0349 | 2.2084 | | 0.0011 | 45.83 | 550 | 0.0350 | 2.1803 | | 0.0011 | 50.0 | 600 | 0.0350 | 2.2148 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
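A minimal captioning sketch (not part of the generated card; the image path is a placeholder):

```python
from PIL import Image
from transformers import pipeline

captioner = pipeline("image-to-text", model="sayakpaul/git-base-pokemon")

image = Image.open("pokemon.png")  # any local image; path is illustrative
print(captioner(image)[0]["generated_text"])
```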
HeeK/bert-finetuned-squad
HeeK
bert
14
10
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
jojoUla/bert-large-cased-sigir-support-no-label-40
jojoUla
bert
13
25
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,198
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-sigir-support-no-label-40 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1107 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 30 - eval_batch_size: 30 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7638 | 1.0 | 246 | 2.2805 | | 2.1924 | 2.0 | 492 | 1.9602 | | 1.8921 | 3.0 | 738 | 1.7992 | | 1.7412 | 4.0 | 984 | 1.7229 | | 1.6311 | 5.0 | 1230 | 1.6165 | | 1.5421 | 6.0 | 1476 | 1.5400 | | 1.4619 | 7.0 | 1722 | 1.5001 | | 1.3846 | 8.0 | 1968 | 1.4381 | | 1.3414 | 9.0 | 2214 | 1.4285 | | 1.2894 | 10.0 | 2460 | 1.4108 | | 1.2467 | 11.0 | 2706 | 1.3460 | | 1.1992 | 12.0 | 2952 | 1.3434 | | 1.1612 | 13.0 | 3198 | 1.2951 | | 1.1266 | 14.0 | 3444 | 1.2518 | | 1.0933 | 15.0 | 3690 | 1.2825 | | 1.0625 | 16.0 | 3936 | 1.2523 | | 1.0386 | 17.0 | 4182 | 1.2251 | | 1.0066 | 18.0 | 4428 | 1.2339 | | 0.9755 | 19.0 | 4674 | 1.1887 | | 0.9656 | 20.0 | 4920 | 1.2288 | | 0.9517 | 21.0 | 5166 | 1.1391 | | 0.9207 | 22.0 | 5412 | 1.1718 | | 0.8964 | 23.0 | 5658 | 1.1850 | | 0.8891 | 24.0 | 5904 | 1.1306 | | 0.8564 | 25.0 | 6150 | 1.1956 | | 0.851 | 26.0 | 6396 | 1.1263 | | 0.8331 | 27.0 | 6642 | 1.1060 | | 0.8143 | 28.0 | 6888 | 1.0689 | | 0.7972 | 29.0 | 7134 | 1.0772 | | 0.7857 | 30.0 | 7380 | 1.1103 | | 0.7687 | 31.0 | 7626 | 1.1635 | | 0.7653 | 32.0 | 7872 | 1.0736 | | 0.777 | 33.0 | 8118 | 1.1103 | | 0.741 | 34.0 | 8364 | 1.0830 | | 0.7408 | 35.0 | 8610 | 1.0809 | | 0.736 | 36.0 | 8856 | 1.0894 | | 0.7362 | 37.0 | 9102 | 1.0691 | | 0.727 | 38.0 | 9348 | 1.0519 | | 0.715 | 39.0 | 9594 | 1.0919 | | 0.7286 | 40.0 | 9840 | 1.1107 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
muhammaddjunas/cvt-13-finetuned-waste
muhammaddjunas
cvt
14
2
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,421
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cvt-13-finetuned-waste This model is a fine-tuned version of [microsoft/cvt-13](https://huggingface.co/microsoft/cvt-13) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1715 | 0.99 | 117 | 0.0000 | 1.0 | | 0.1194 | 1.99 | 234 | 0.0000 | 1.0 | | 0.1496 | 2.99 | 351 | 0.0000 | 1.0 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Zekunli/flan-t5-large-da-multiwoz_250
Zekunli
t5
10
13
transformers
0
text2text-generation
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,938
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-large-da-multiwoz_250 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3959 - Accuracy: 38.8681 - Num: 3689 - Gen Len: 15.6736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 24 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:| | 0.4158 | 0.93 | 200 | 0.4439 | 34.537 | 3689 | 15.8452 | | 0.3487 | 1.86 | 400 | 0.4358 | 35.7656 | 3689 | 15.6495 | | 0.3596 | 2.79 | 600 | 0.4304 | 35.4046 | 3689 | 14.8946 | | 0.3676 | 3.72 | 800 | 0.4186 | 36.5036 | 3689 | 15.0016 | | 0.4259 | 4.65 | 1000 | 0.4082 | 36.491 | 3689 | 15.4118 | | 0.4005 | 5.58 | 1200 | 0.4039 | 37.4827 | 3689 | 15.8615 | | 0.3922 | 6.51 | 1400 | 0.4009 | 38.1076 | 3689 | 15.4286 | | 0.3656 | 7.44 | 1600 | 0.3998 | 38.8275 | 3689 | 15.7021 | | 0.3709 | 8.37 | 1800 | 0.3959 | 38.8681 | 3689 | 15.6736 | | 0.3564 | 9.3 | 2000 | 0.3981 | 38.6742 | 3689 | 15.8406 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.5.1 - Tokenizers 0.12.1
sanali209/imclasif-races-0-v001
sanali209
vit
8
25
transformers
0
image-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['image-classification', 'pytorch', 'huggingpics']
false
true
true
333
# imclasif-races-0-v001 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
lmqg/mbart-large-cc25-frquad-qg-ae
lmqg
mbart
20
2
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['fr']
['lmqg/qg_frquad']
null
0
0
0
0
0
0
0
['question generation', 'answer extraction']
true
true
true
7,671
# Model Card of `lmqg/mbart-large-cc25-frquad-qg-ae` This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question generation and answer extraction jointly on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** fr - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="fr", model="lmqg/mbart-large-cc25-frquad-qg-ae") # model prediction question_answer_pairs = model.generate_qa("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-frquad-qg-ae") # answer extraction answer = pipe("generate question: Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.") # question generation question = pipe("extract answers: Pourtant, la strophe spensérienne, utilisée cinq fois avant que ne commence le chœur, constitue en soi un vecteur dont les répétitions structurelles, selon Ricks, relèvent du pur lyrisme tout en constituant une menace potentielle. 
Après les huit sages pentamètres iambiques, l'alexandrin final <hl> permet une pause <hl>, « véritable illusion d'optique » qu'accentuent les nombreuses expressions archaïsantes telles que did swoon, did seem, did go, did receive, did make, qui doublent le prétérit en un temps composé et paraissent à la fois « très précautionneuses et très peu pressées ».") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 72.56 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 16.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 4.88 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 1.85 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 0.91 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 8.56 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 50.46 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 18.54 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 77.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedF1Score (MoverScore) | 51.65 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedPrecision (BERTScore) | 76.9 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedPrecision (MoverScore) | 51.15 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedRecall (BERTScore) | 78.58 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedRecall (MoverScore) | 52.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 0 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | AnswerF1Score | 3.66 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | BERTScore | 58.41 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 2.56 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 0.76 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 0 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 0 | default | 
[lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 3.24 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 45.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 3.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_frquad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 5 - batch: 2 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 32 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Art-phys/dqn-SpaceInvadersNoFrameskip-v4
Art-phys
null
15
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['SpaceInvadersNoFrameskip-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
2,217
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Art-phys -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Art-phys -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Art-phys ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
yunqiang/test
yunqiang
null
3
0
null
0
null
false
false
false
null
['zh']
null
null
0
0
0
0
0
0
0
[]
false
true
true
4,907
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
ahjim0m0/q-FrozenLake-v1-4x4-noSlippery
ahjim0m0
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
397
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="ahjim0m0/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ernie-ai/finetuned-vit-image-text-classifier
ernie-ai
vit
14
5
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['image-classification', 'generated_from_trainer']
true
true
true
1,556
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-vit-doc-text-classifer This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ernie-ai/image-text-examples-ar-cn-latin-notext dataset. It achieves the following results on the evaluation set: - Loss: 0.3107 - Accuracy: 0.9030 ## Model description It is an image classification model fine-tuned to predict whether an image contains text and whether that text is Latin script, Chinese or Arabic. It also classifies non-text images. ## Training and evaluation data Dataset: [ernie-ai/image-text-examples-ar-cn-latin-notext] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2719 | 2.08 | 100 | 0.4120 | 0.8657 | | 0.1027 | 4.17 | 200 | 0.3907 | 0.8881 | | 0.0723 | 6.25 | 300 | 0.3107 | 0.9030 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
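The card stops short of showing inference code; a minimal sketch using the standard `transformers` image-classification pipeline (the image path is a placeholder, not part of the original card) could look like this:

```python
# Minimal sketch, assuming the checkpoint works with the standard image-classification pipeline
from transformers import pipeline

classifier = pipeline("image-classification", model="ernie-ai/finetuned-vit-image-text-classifier")

# "document.jpg" is a placeholder path to any local image
print(classifier("document.jpg"))
```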
ahjim0m0/Taxi-uncle-1-v3
ahjim0m0
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
370
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ahjim0m0/Taxi-uncle-1-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
SandyML/sd-class-butterflies-32
SandyML
null
6
2
diffusers
0
unconditional-image-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
false
true
true
364
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('SandyML/sd-class-butterflies-32') image = pipeline().images[0] image ```
ahjim0m0/Taxi-uncle-2-lr02-n60k-v3
ahjim0m0
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
380
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ahjim0m0/Taxi-uncle-2-lr02-n60k-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Ramuvannela/Stanford-Sentiment-Treebank
Ramuvannela
bert
4
16
transformers
0
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
891
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Stanford-Sentiment-Treebank This model is a fine-tuned version of [gchhablani/bert-base-cased-finetuned-sst2](https://huggingface.co/gchhablani/bert-base-cased-finetuned-sst2) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
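No usage example is given; since the card lists TensorFlow as the framework, a minimal sketch with the `transformers` text-classification pipeline (forcing the TF backend; the input sentence is a placeholder) might look like this:

```python
# Minimal sketch, assuming the checkpoint loads with the TensorFlow text-classification pipeline
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ramuvannela/Stanford-Sentiment-Treebank",
    framework="tf",
)
print(classifier("This movie was absolutely wonderful."))  # placeholder example sentence
```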
ahjim0m0/Taxi-uncle-3-lr05-n30k-v3
ahjim0m0
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
380
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ahjim0m0/Taxi-uncle-3-lr05-n30k-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
nickwong64/bert-base-uncased-poems-sentiment
nickwong64
bert
8
15
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['poem_sentment']
null
0
0
0
0
0
0
0
['text-classification', 'sentiment-analysis', 'poem-sentiment-detection', 'poem-sentiment']
false
true
true
1,701
## nickwong64/bert-base-uncased-poems-sentiment Bert is a Transformer Bidirectional Encoder based Architecture trained on MLM(Mask Language Modeling) objective. [bert-base-uncased](https://huggingface.co/bert-base-uncased) finetuned on the [poem_sentiment](https://huggingface.co/datasets/poem_sentiment) dataset using HuggingFace Trainer with below training parameters. ``` learning rate 2e-5, batch size 8, num_train_epochs=8, ``` ## Model Performance | Epoch | Training Loss | Validation Loss | Accuracy | F1 | | --- | --- | --- | --- | --- | | 8 | 0.468200 | 0.458632 | 0.904762 | 0.899756 | ## How to Use the Model ```python from transformers import pipeline nlp = pipeline(task='text-classification', model='nickwong64/bert-base-uncased-poems-sentiment') p1 = "No man is an island, Entire of itself, Every man is a piece of the continent, A part of the main." p2 = "Ten years, dead and living dim and draw apart. I don’t try to remember, But forgetting is hard." p3 = "My mind to me a kingdom is; Such present joys therein I find,That it excels all other bliss" print(nlp(p1)) print(nlp(p2)) print(nlp(p3)) """ output: [{'label': 'no_impact', 'score': 0.9982421398162842}] [{'label': 'negative', 'score': 0.9856176972389221}] [{'label': 'positive', 'score': 0.9931322932243347}] """ ``` ## Dataset [poem_sentiment](https://huggingface.co/datasets/poem_sentiment) ## Labels ``` {0: 'negative', 1: 'positive', 2: 'no_impact', 3: 'mixed'} ``` ## Evaluation ``` {'test_loss': 0.4359096586704254, 'test_accuracy': 0.9142857142857143, 'test_f1': 0.9120554830816401, 'test_runtime': 0.5689, 'test_samples_per_second': 184.582, 'test_steps_per_second': 24.611} ```
RyanM-R/bert-finetuned-squad
RyanM-R
bert
12
7
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Hudayday/bert-finetuned-squad
Hudayday
bert
12
11
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
ahjim0m0/Taxi-uncle-3-lr05-n100k-v3
ahjim0m0
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
381
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ahjim0m0/Taxi-uncle-3-lr05-n100k-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ernie-ai/autotrain-document-text-language-ar-en-zh-3338392240
ernie-ai
swin
5
4
transformers
1
image-classification
true
false
false
null
null
['ernie-ai/autotrain-data-document-text-language-ar-en-zh']
{'emissions': 2.2266908460523576}
0
0
0
0
0
0
0
['autotrain', 'vision', 'image-classification']
false
true
true
1,069
# finetuned-MS-swin-doc-text-classifer This model is a fine-tuned version of Microsoft’s Swin Transformer tiny-sized model [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the ernie-ai/image-text-examples-ar-cn-latin-notext dataset. It achieves the following results on the evaluation set: - Loss: 0.267 - Accuracy: 0.882 ## Model description It is an image classification model fine-tuned to predict whether an image contains text and whether that text is Latin script, Chinese or Arabic. It also classifies non-text images. ## Training and evaluation data Dataset: [ernie-ai/image-text-examples-ar-cn-latin-notext] # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 3338392240 - CO2 Emissions (in grams): 2.2267 ## Validation Metrics - Loss: 0.267 - Accuracy: 0.882 - Macro F1: 0.862 - Micro F1: 0.882 - Weighted F1: 0.880 - Macro Precision: 0.877 - Micro Precision: 0.882 - Weighted Precision: 0.883 - Macro Recall: 0.856 - Micro Recall: 0.882 - Weighted Recall: 0.882
ahjim0m0/Taxi-uncle-4-lr02-n100k-g097-v3
ahjim0m0
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
386
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ahjim0m0/Taxi-uncle-4-lr02-n100k-g097-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
raychang7/bert-finetuned-squad
raychang7
bert
12
13
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
Zekunli/flan-t5-large-da-multiwoz_200
Zekunli
t5
10
0
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,245
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-large-da-multiwoz_200 This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4017 - Accuracy: 38.6087 - Num: 3689 - Gen Len: 15.9119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 24 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----:|:-------:| | 1.3688 | 1.15 | 200 | 0.5608 | 24.9936 | 3689 | 14.1176 | | 0.6174 | 2.3 | 400 | 0.4656 | 32.8516 | 3689 | 15.7346 | | 0.5123 | 3.45 | 600 | 0.4334 | 33.9944 | 3689 | 15.9504 | | 0.4814 | 4.6 | 800 | 0.4150 | 33.4465 | 3689 | 14.9143 | | 0.4429 | 5.75 | 1000 | 0.4134 | 35.7032 | 3689 | 15.9791 | | 0.4173 | 6.9 | 1200 | 0.4078 | 37.8029 | 3689 | 16.3285 | | 0.399 | 8.05 | 1400 | 0.3991 | 38.0948 | 3689 | 15.2914 | | 0.384 | 9.2 | 1600 | 0.3992 | 38.1389 | 3689 | 15.6996 | | 0.3664 | 10.34 | 1800 | 0.4046 | 37.3672 | 3689 | 15.8149 | | 0.3629 | 11.49 | 2000 | 0.4026 | 38.2154 | 3689 | 15.8707 | | 0.3508 | 12.64 | 2200 | 0.4021 | 38.4623 | 3689 | 15.6454 | | 0.3429 | 13.79 | 2400 | 0.4036 | 38.5917 | 3689 | 15.6514 | | 0.3485 | 14.94 | 2600 | 0.4017 | 38.6087 | 3689 | 15.9119 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.5.1 - Tokenizers 0.12.1
jrahn/yolochess_mlm_azure-cloud-35
jrahn
distilbert
8
8
transformers
0
fill-mask
true
false
false
mit
null
['jrahn/yolochess_lichess-elite_2211']
null
0
0
0
0
0
0
0
['chess']
false
true
true
4,274
# Model Card for yolochess_mlm_azure-cloud-35 <!-- Provide a quick summary of what the model is/does. --> This model with 66M parameters is pre-trained from scratch with Masked Language Modeling on Chess Positions in [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) format. It is supposed to be used for downstream fine-tuning, e.g. Text Classification for human moves. # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Jonathan Rahn - **Model type:** Distilbert - **Language(s) (NLP):** Chess [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) - **License:** MIT # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is pre-trained from scratch with Masked Language Modeling on Chess Positions in FEN format. ## Downstream Use <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> It is supposed to be used for downstream fine-tuning, e.g. Text Classification for human moves. ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> Anything other than Chess Positions in standard [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) format. # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> n/a ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> n/a ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoModelForMaskedLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("jrahn/yolochess_mlm_azure-cloud-35") model = AutoModelForMaskedLM.from_pretrained("jrahn/yolochess_mlm_azure-cloud-35") ``` ```python from transformers import pipeline pipe = pipeline("fill-mask", "jrahn/yolochess_mlm_azure-cloud-35") pipe("6k1/8/8/1pB3[MASK]P/1P3P2/8/8/8 w - - 1 74") ``` # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [Lichess-Elite 22-11 Dataset](https://huggingface.co/datasets/jrahn/yolochess_lichess-elite_2211) ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> Masked Language Modeling objective with 15% masked token ratio. ### Preprocessing Tokenize `data["train"]["fen"]` with max-length padding to 200 tokens with default `distilbert-base-cased` tokenizer. Inefficient: Most of the vocab is never observed in FEN, wasting embedding parameters. The sequence length / pos embedding size of model and sequence length of data preprocessing leads to lots of padding and wasted parameters. FENs should be shorter than 90 characters. Experiments with reduced max-length in tokenization show performance gains. ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. 
--> Training for 172500 steps at batch-size 128 (22M examples, 1 epoch) took ~10 hrs on 1x RTX 4090, using 20GB VRAM, with final MLM-loss: 0.2567. # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 1x RTX 4090 - **Hours used:** 10 - **Cloud Provider:** local - **Compute Region:** local - **Carbon Emitted:** 1.5kg # Technical Specifications ## Model Architecture and Objective Distilbert, Masked Language Modeling
hello2mao/sd-class-butterflies-32
hello2mao
null
6
2
diffusers
0
unconditional-image-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
false
true
true
366
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('hello2mao/sd-class-butterflies-32') image = pipeline().images[0] image ```
mingcai/ESimCSE-chinese-bert-wwm
mingcai
bert
6
4
transformers
0
feature-extraction
true
false
false
null
['zh']
null
null
0
0
0
0
0
0
0
[]
false
true
true
404
A reproduction of the ESimCSE paper, trained on the STS-B training set. It achieves a Spearman correlation of 0.7226 on the Chinese STS-B validation set. Reference: @inproceedings{Wu2021ESimCSEES, title={ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding}, author={Xing Wu and Chaochen Gao and Liangjun Zang and Jizhong Han and Zhongyuan Wang and Songlin Hu}, booktitle={International Conference on Computational Linguistics}, year={2021} }
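The card gives no inference code; a minimal sketch for extracting sentence embeddings with plain `transformers` follows (the use of the [CLS] vector for pooling and the example sentences are assumptions, in line with common SimCSE-style setups):

```python
# Minimal sketch, assuming the checkpoint loads as a plain BERT encoder and [CLS] pooling is appropriate
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("mingcai/ESimCSE-chinese-bert-wwm")
model = AutoModel.from_pretrained("mingcai/ESimCSE-chinese-bert-wwm")

sentences = ["今天天气很好。", "今天天气不错。"]  # placeholder example sentences
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state[:, 0]  # [CLS] token embeddings

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```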
Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-3e
Gokulapriyan
swin
14
4
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,487
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-3e This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1065 - Accuracy: 0.9606 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4549 | 1.0 | 527 | 0.2910 | 0.8857 | | 0.2838 | 2.0 | 1054 | 0.1524 | 0.9410 | | 0.254 | 3.0 | 1581 | 0.1065 | 0.9606 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
BeaW/whisper-small-pyttsx2
BeaW
whisper
20
10
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['hi']
['logistics']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,078
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper small 2 - BeaW This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chat analysis dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.7.1+cu110 - Datasets 2.8.0 - Tokenizers 0.11.0
jannikskytt/ppo-snowballTarget
jannikskytt
null
20
1
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SnowballTarget']
false
true
true
858
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: jannikskytt/ppo-snowballTarget 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
UtopiansRareTruth/poca-SoccerTwos
UtopiansRareTruth
null
34
306
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-SoccerTwos']
false
true
true
851
# **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: UtopiansRareTruth/poca-SoccerTwos 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
iubeda/q-FrozenLake-v1-4x4-noSlippery
iubeda
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['FrozenLake-v1-4x4-no_slippery', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
395
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="iubeda/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
iubeda/q-Taxi-v3
iubeda
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
362
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="iubeda/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
pfunk/Pong-v4-DQPN_p30_e0.50-seed1
pfunk
null
11
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
1,989
# (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p30_e0.50.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p30_e0.50]" python -m cleanrl_utils.enjoy --exp-name DQPN_p30_e0.50 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.50-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.50-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p30_e0.50-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p30_e0.50 --start-policy-f 30000 --end-policy-f 1000 --evaluation-fraction 0.50 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 1000, 'env_id': 'Pong-v4', 'evaluation_fraction': 0.5, 'exp_name': 'DQPN_p30_e0.50', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 30000, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
jidbo/BME-NaturalQuestions
jidbo
bert
6
15
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
956
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # result This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
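The generated card omits inference code; a minimal sketch with the question-answering pipeline follows (the question and context strings are placeholder examples, not from the original card):

```python
# Minimal sketch, assuming the checkpoint works with the standard question-answering pipeline
from transformers import pipeline

qa = pipeline("question-answering", model="jidbo/BME-NaturalQuestions")

result = qa(
    question="Which base model was fine-tuned?",  # placeholder question
    context="The result model is a fine-tuned version of xtremedistil-l6-h256-uncased.",  # placeholder context
)
print(result)
```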
mshibatatt/ppo-LunarLander-v2
mshibatatt
null
12
0
stable-baselines3
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['LunarLander-v2', 'deep-reinforcement-learning', 'reinforcement-learning', 'stable-baselines3']
true
true
true
350
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
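The usage section is left as a TODO; a minimal sketch is shown below, assuming the repository stores a zipped SB3 checkpoint named `ppo-LunarLander-v2.zip` (the filename and the classic pre-0.26 gym API are assumptions):

```python
# Minimal sketch, assuming the repo contains an SB3 checkpoint named "ppo-LunarLander-v2.zip"
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="mshibatatt/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # classic gym API (pre-0.26)
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```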
nblokker/debatenet-2-cat
nblokker
xlm-roberta
13
29
sentence-transformers
0
sentence-similarity
true
false
false
mit
['multilingual', 'de', 'en']
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
5,364
# nblokker/debatenet-2-cat This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model can be used to identify sentences that contain similar migration-related demands and propositions. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('nblokker/debatenet-2-cat') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('nblokker/debatenet-2-cat') model = AutoModel.from_pretrained('nblokker/debatenet-2-cat') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 38 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.BatchHardSoftMarginTripletLoss.BatchHardSoftMarginTripletLoss` Parameters of the fit()-Method: ``` { "epochs": 15, "evaluation_steps": 120.5, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 120.5, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors ``` @preprint{blokker2023, author = {Blokker, Nico and Blessing, Andre and Dayanik, Erenay and Kuhn, Jonas and Padó, Sebastian and Lapesa, Gabriella}, note = {To appear in \textit{Language Resources and Evaluation}}, title = {Between welcome culture and border fence: The {E}uropean refugee crisis in {G}erman newspaper reports}, url = {https://arxiv.org/abs/2111.10142}, year = 2023 } @inproceedings{lapesa2020, abstract = {DEbateNet-migr15 is a manually annotated dataset for German which covers the public debate on immigration in 2015. The building block of our annotation is the political science notion of a claim, i.e., a statement made by a political actor (a politician, a party, or a group of citizens) that a specific action should be taken (e.g., vacant flats should be assigned to refugees). We identify claims in newspaper articles, assign them to actors and fine-grained categories and annotate their polarity and date. The aim of this paper is two-fold: first, we release the full DEbateNet-mig15 corpus and document it by means of a quantitative and qualitative analysis; second, we demonstrate its application in a discourse network analysis framework, which enables us to capture the temporal dynamics of the political debate.}, address = {Online}, author = {Lapesa, Gabriella and Blessing, Andre and Blokker, Nico and Dayanik, Erenay and Haunss, Sebastian and Kuhn, Jonas and Padó, Sebastian}, booktitle = {Proceedings of LREC}, pages = {919--927}, title = {{DEbateNet-mig15}: {T}racing the 2015 Immigration Debate in {G}ermany Over Time}, url = {https://www.aclweb.org/anthology/2020.lrec-1.115}, year = 2020 } ```
pfunk/Pong-v4-DQPN_p5-seed1
pfunk
null
11
0
cleanrl
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Pong-v4', 'deep-reinforcement-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
1,931
# (CleanRL) **DQN** Agent Playing **Pong-v4** This is a trained model of a DQN agent playing Pong-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/DQPN_p5.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[DQPN_p5]" python -m cleanrl_utils.enjoy --exp-name DQPN_p5 --env-id Pong-v4 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p5-seed1/raw/main/dqpn_atari.py curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p5-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/pfunk/Pong-v4-DQPN_p5-seed1/raw/main/poetry.lock poetry install --all-extras python dqpn_atari.py --exp-name DQPN_p5 --start-policy-f 5000 --end-policy-f 5000 --evaluation-fraction 1.00 --target-tau 1.0 --policy-tau 1.00 --track --wandb-entity pfunk --wandb-project-name dqpn --save-model true --upload-model true --hf-entity pfunk --env-id Pong-v4 --seed 1 --total-timesteps 10000000 ``` # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'end_policy_f': 5000, 'env_id': 'Pong-v4', 'evaluation_fraction': 1.0, 'exp_name': 'DQPN_p5', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': 'pfunk', 'learning_rate': 0.0001, 'learning_starts': 80000, 'policy_tau': 1.0, 'save_model': True, 'seed': 1, 'start_e': 1, 'start_policy_f': 5000, 'target_network_frequency': 1000, 'target_tau': 1.0, 'torch_deterministic': True, 'total_timesteps': 10000000, 'track': True, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': 'pfunk', 'wandb_project_name': 'dqpn'} ```
fathyshalab/clinic-kitchen_and_dining-roberta
fathyshalab
roberta
14
8
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,456
# fathyshalab/clinic-kitchen_and_dining-roberta This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/clinic-kitchen_and_dining-roberta") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Meina/MeinaMix
Meina
null
3
0
null
1
null
false
false
false
other
['en']
null
null
0
0
0
0
1
1
0
['anime', 'art', 'stable diffusion']
false
true
true
1,501
This is a project of mine. I aim to make a model that doesn't need many words or complex prompts to draw a good image, making it easy for everyone to use. I also aim to make the images usable as wallpapers, profile pictures and similar use cases, so I'm often making changes to the model to make it look better. I'll be updating the model every 1 to 2 weeks, and I'll take into consideration any feedback given to me. If you like my model and want to support me in making this project a success, you can buy me a coffee to keep me awake! https://ko-fi.com/meina Recommendations of use: + Prompt: '(masterpiece, sidelighting, finely detailed beautiful eyes: 1.2)'. - Prompt: '(worst quality, low quality:1.4)'. The best samplers for most generations are DPM++ SDE / DPM++ SDE Karras at 20 to 30 steps, or Euler A at 50 steps, with a CFG scale of 5 up to 10. As for the upscaler, in most scenarios it is R-ESRGAN 4x+ Anime6B, with 10 steps at 0.4 up to 0.6 denoising. The VAE is baked into all versions starting with 2.1! Not required, but seen to improve results: ENSD: 31337, Clip skip 1 or 2. I hope you have fun trying out my model; feel free to reach out in case you have any feedback! In the merged models list: Meina Version 1, Kenshi, AbyssOrangeMix2, PastelMix and Grapefruit. I do not have the exact recipe because I did multiple mixes using block weighted merges with multiple settings and kept the better version of each merge.
aichina/cy02081
aichina
null
25
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
1,090
### cy02081 Dreambooth model trained by aichina with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v2-1-512 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: cy02081 (use that on your prompt) ![cy02081 0](https://huggingface.co/aichina/cy02081/resolve/main/concept_images/cy02081_%281%29.jpg)![cy02081 1](https://huggingface.co/aichina/cy02081/resolve/main/concept_images/cy02081_%282%29.jpg)![cy02081 2](https://huggingface.co/aichina/cy02081/resolve/main/concept_images/cy02081_%283%29.jpg)![cy02081 3](https://huggingface.co/aichina/cy02081/resolve/main/concept_images/cy02081_%284%29.jpg)![cy02081 4](https://huggingface.co/aichina/cy02081/resolve/main/concept_images/cy02081_%285%29.jpg)![cy02081 5](https://huggingface.co/aichina/cy02081/resolve/main/concept_images/cy02081_%286%29.jpg)
huggingtweets/shawarmersa
huggingtweets
gpt2
11
0
transformers
0
text-generation
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['huggingtweets']
false
true
true
3,357
<div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1620720441923878913/0Bn7lo4G_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">شاورمر | Shawarmer</div> <div style="text-align: center; font-size: 14px;">@shawarmersa</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from شاورمر | Shawarmer. | Data | شاورمر | Shawarmer | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 8 | | Short tweets | 543 | | Tweets kept | 2699 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dz0zr8g/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shawarmersa's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hjtpyyda) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hjtpyyda/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/shawarmersa') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Ailyth/3_Labels
Ailyth
swin
5
215
transformers
0
image-classification
true
false
false
null
null
['Ailyth/autotrain-data-3lables']
{'emissions': 2.650072914067399}
0
0
0
0
0
0
0
['autotrain', 'vision', 'image-classification']
false
true
true
394
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 3341092265 - CO2 Emissions (in grams): 2.6501 ## Validation Metrics - Loss: 0.133 - Accuracy: 0.950 - Macro F1: 0.951 - Micro F1: 0.950 - Weighted F1: 0.950 - Macro Precision: 0.951 - Micro Precision: 0.950 - Weighted Precision: 0.950 - Macro Recall: 0.951 - Micro Recall: 0.950 - Weighted Recall: 0.950
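The AutoTrain card only lists validation metrics; a minimal sketch for running inference with the image-classification pipeline follows (the image path is a placeholder):

```python
# Minimal sketch, assuming the AutoTrain checkpoint works with the standard image-classification pipeline
from transformers import pipeline

classifier = pipeline("image-classification", model="Ailyth/3_Labels")

# "example.jpg" is a placeholder path to a local image
print(classifier("example.jpg"))
```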
MarcusLee/bert-finetuned-squad
MarcusLee
bert
12
9
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
954
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.0+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
leoleung93/Reinforce-1
leoleung93
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
286
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
mingcai/ESimCSE-ext-chinese-bert-wwm
mingcai
bert
6
3
transformers
0
feature-extraction
true
false
false
null
['zh']
null
null
0
0
0
0
0
0
0
[]
false
true
true
411
A reproduction of the ESimCSE paper, trained on the STS-B training set plus additional data. It achieves a Spearman correlation of 0.7201 on the Chinese STS-B validation set. Reference: @inproceedings{Wu2021ESimCSEES, title={ESimCSE: Enhanced Sample Building Method for Contrastive Learning of Unsupervised Sentence Embedding}, author={Xing Wu and Chaochen Gao and Liangjun Zang and Jizhong Han and Zhongyuan Wang and Songlin Hu}, booktitle={International Conference on Computational Linguistics}, year={2021} }
ottovoncwim/Reinforce-CartPolev1
ottovoncwim
null
6
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['CartPole-v1', 'reinforce', 'reinforcement-learning', 'custom-implementation', 'deep-rl-class']
true
true
true
289
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
jannikskytt/Pyramids
jannikskytt
null
14
2
ml-agents
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['unity-ml-agents', 'ml-agents', 'deep-reinforcement-learning', 'reinforcement-learning', 'ML-Agents-Pyramids']
false
true
true
830
# **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: jannikskytt/Pyramids 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
Alex423/xlm-roberta-base-finetuned-panx-de
Alex423
xlm-roberta
12
2
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,313
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1363 - F1: 0.8627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2539 | 1.0 | 525 | 0.1697 | 0.8179 | | 0.1317 | 2.0 | 1050 | 0.1327 | 0.8516 | | 0.0819 | 3.0 | 1575 | 0.1363 | 0.8627 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
nhiro3303/q-Taxi-v3
nhiro3303
null
5
0
null
0
reinforcement-learning
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['Taxi-v3', 'q-learning', 'reinforcement-learning', 'custom-implementation']
true
true
true
381
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="nhiro3303/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
juro95/xlm-roberta-finetuned-ner-cased_0.8_ratio
juro95
xlm-roberta
8
4
transformers
0
token-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,491
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juro95/xlm-roberta-finetuned-ner-cased_0.8_ratio This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0820 - Validation Loss: 0.1369 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17152, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3781 | 0.2062 | 0 | | 0.1790 | 0.1571 | 1 | | 0.1170 | 0.1408 | 2 | | 0.0820 | 0.1369 | 3 | ### Framework versions - Transformers 4.25.1 - TensorFlow 2.6.5 - Datasets 2.3.2 - Tokenizers 0.13.2
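No inference example is included; since the card reports a TensorFlow checkpoint, a minimal sketch using the token-classification pipeline with the TF backend might look like this (the input sentence is a placeholder):

```python
# Minimal sketch, assuming the checkpoint loads with the TensorFlow token-classification pipeline
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="juro95/xlm-roberta-finetuned-ner-cased_0.8_ratio",
    framework="tf",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited Paris in 2019."))  # placeholder example sentence
```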
vaibhav9/mini5-theme1
vaibhav9
bert
12
5
transformers
0
question-answering
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,175
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mini5-theme1 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 12 | 1.0640 | | No log | 2.0 | 24 | 0.9881 | | No log | 3.0 | 36 | 0.9619 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2