Dataset columns:
- modelId: string (length 4-81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0-59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51-438k)
BigTooth/Megumin-v0.2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## eimiss/EimisAnimeDiffusion_1.0v This repository contains the models from [eimiss/EimisAnimeDiffusion_1.0v](https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/eimiss-EimisAnimeDiffusion_1.0v-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/eimiss-EimisAnimeDiffusion_1.0v-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
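For reference, the GPU variant described in the card above is the same static-shape example with the device call added before compiling; this short sketch only adds `stable_diffusion.to("GPU")` to the code already shown:

```python
from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline

model_id = "helenai/eimiss-EimisAnimeDiffusion_1.0v-ov"
stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False)
# GPU inference currently requires static shapes, so reshape before compiling.
stable_diffusion.reshape(batch_size=1, height=256, width=256, num_images_per_prompt=1)
stable_diffusion.to("GPU")  # select the Intel integrated or discrete GPU
stable_diffusion.compile()

images = stable_diffusion("a random image", height=256, width=256).images
images[0].save("result_gpu.png")
```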
BlightZz/MakiseKurisu
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2023-03-07T07:16:52Z
--- license: mit tags: - generated_from_trainer datasets: - ccmatrix model-index: - name: alirezamsh-small100-en-pl-yhavinga-ccmatrix-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # alirezamsh-small100-en-pl-yhavinga-ccmatrix-finetune This model is a fine-tuned version of [alirezamsh/small100](https://huggingface.co/alirezamsh/small100) on the ccmatrix dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
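The card above gives only training details, so here is a minimal, hedged inference sketch using the generic translation pipeline. The model id is taken from the card's model-index name and may need a user namespace prefix, and small100-derived checkpoints normally select the target language through their own tokenizer, so the `tgt_lang` argument here is an assumption.

```python
from transformers import pipeline

# Hypothetical model id from the card's model-index name; adjust to the actual Hub path.
model_id = "alirezamsh-small100-en-pl-yhavinga-ccmatrix-finetune"

# The translation pipeline wraps tokenization, generation, and decoding in one call.
translator = pipeline("translation", model=model_id, src_lang="en", tgt_lang="pl")
print(translator("The weather is nice today.")[0]["translation_text"])
```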
Bloodwarrior/Chikfalay
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ViditRaj/Simple_BERT_Ads_Classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ViditRaj/Simple_BERT_Ads_Classifier This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6739 - Validation Loss: 0.6679 - Train Accuracy: 0.6119 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 465, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6736 | 0.6682 | 0.6119 | 0 | | 0.6712 | 0.6680 | 0.6119 | 1 | | 0.6728 | 0.6679 | 0.6119 | 2 | | 0.6688 | 0.6679 | 0.6119 | 3 | | 0.6739 | 0.6679 | 0.6119 | 4 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
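Since the card above covers only training, a minimal TensorFlow inference sketch is shown below. It assumes the checkpoint is published under the id in the card title and that the label mapping (ad vs. non-ad) is whatever the training run stored in the model config.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "ViditRaj/Simple_BERT_Ads_Classifier"  # id taken from the card title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Buy one, get one free - limited time offer!", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
# id2label reflects whatever the training run saved; it may just be LABEL_0 / LABEL_1.
print(model.config.id2label[pred])
```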
Brona/poc_de
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-04T13:21:42Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -13.46 +/- 92.01 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 300000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'numan966/LunarLander-v2_scrtach' 'batch_size': 512 'minibatch_size': 128} ```
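Note that `batch_size` and `minibatch_size` in the hyperparameter block above are not independent settings: in a cleanRL-style PPO loop they are derived from the rollout parameters, as the small sketch below illustrates.

```python
# Derivation of the batch sizes listed in the hyperparameters above.
num_envs = 4
num_steps = 128
num_minibatches = 4

batch_size = num_envs * num_steps               # 4 * 128 = 512 transitions per rollout
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128 per gradient step

assert (batch_size, minibatch_size) == (512, 128)
```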
Brykee/BrykeeBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: de datasets: - lmqg/qg_dequad pipeline_tag: text2text-generation tags: - question generation widget: - text: "Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, andernfalls <hl> wird die Signalübertragung stark gedämpft. <hl>" example_title: "Question Generation Example 1" - text: "das erste weltweit errichtete Hermann Brehmer <hl> 1855 <hl> im niederschlesischen ''Görbersdorf'' (heute Sokołowsko, Polen)." example_title: "Question Generation Example 2" - text: "Er muss Zyperngrieche sein und wird direkt für <hl> fünf Jahre <hl> gewählt (Art. 43 Abs. 1 der Verfassung) und verfügt über weitreichende Exekutivkompetenzen." example_title: "Question Generation Example 3" model-index: - name: vocabtrimmer/mt5-base-trimmed-de-15000-dequad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_dequad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.35 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 8.49 - name: METEOR (Question Generation) type: meteor_question_generation value: 7.3 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 71.98 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 50.07 --- # Model Card of `vocabtrimmer/mt5-base-trimmed-de-15000-dequad-qg` This model is fine-tuned version of [vocabtrimmer/mt5-base-trimmed-de-15000](https://huggingface.co/vocabtrimmer/mt5-base-trimmed-de-15000) for question generation task on the [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [vocabtrimmer/mt5-base-trimmed-de-15000](https://huggingface.co/vocabtrimmer/mt5-base-trimmed-de-15000) - **Language:** de - **Training data:** [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="de", model="vocabtrimmer/mt5-base-trimmed-de-15000-dequad-qg") # model prediction questions = model.generate_q(list_context="das erste weltweit errichtete Hermann Brehmer 1855 im niederschlesischen ''Görbersdorf'' (heute Sokołowsko, Polen).", list_answer="1855") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-base-trimmed-de-15000-dequad-qg") output = pipe("Empfangs- und Sendeantenne sollen in ihrer Polarisation übereinstimmen, andernfalls <hl> wird die Signalübertragung stark gedämpft. 
<hl>") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-base-trimmed-de-15000-dequad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_dequad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 71.98 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | Bleu_1 | 8.9 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | Bleu_2 | 3.03 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | Bleu_3 | 0.94 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | Bleu_4 | 0.35 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | METEOR | 7.3 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | MoverScore | 50.07 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | | ROUGE_L | 8.49 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_dequad - dataset_name: default - input_types: paragraph_answer - output_types: question - prefix_types: None - model: vocabtrimmer/mt5-base-trimmed-de-15000 - max_length: 512 - max_length_output: 32 - epoch: 8 - batch: 16 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-base-trimmed-de-15000-dequad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Bryson575x/riceboi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="emre06c/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BumBelDumBel/ZORK-AI-TEST
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## prompthero/openjourney This repository contains the models from [prompthero/openjourney](https://huggingface.co/prompthero/openjourney) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/prompthero/openjourney) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/prompthero-openjourney-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/prompthero-openjourney-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
Buntan/xlm-roberta-base-finetuned-marc-en
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.14 +/- 0.35 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="AndreMitri/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Bwehfuk/Ron
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## naclbit/trinart_stable_diffusion_v2 This repository contains the models from [naclbit/trinart_stable_diffusion_v2](https://huggingface.co/naclbit/trinart_stable_diffusion_v2) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/naclbit/trinart_stable_diffusion_v2) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/naclbit-trinart_stable_diffusion_v2-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/naclbit-trinart_stable_diffusion_v2-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
73
2023-03-04T13:44:43Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started A complete tutorial on training your first agent with ML-Agents and publishing it to the Hub is linked there. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **play directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: JessicaHsu/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
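If you prefer to fetch the trained policy programmatically instead of through the browser demo, a minimal sketch with `huggingface_hub` is shown below. The policy filename inside the repository is an assumption (ML-Agents exports the ONNX policy under the behavior name), so adjust it to the actual repo contents.

```python
from huggingface_hub import hf_hub_download

# Hypothetical filename; check the repo file listing for the exact .onnx name.
policy_path = hf_hub_download(
    repo_id="JessicaHsu/ppo-SnowballTarget",
    filename="SnowballTarget.onnx",
)
print("Downloaded policy to", policy_path)
```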
CAMeL-Lab/bert-base-arabic-camelbert-ca
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
580
2023-03-04T13:45:12Z
# Vocabulary Trimmed [lmqg/mt5-small-koquad-qg](https://huggingface.co/lmqg/mt5-small-koquad-qg): `vocabtrimmer/mt5-small-koquad-qg-trimmed` This model is a trimmed version of [lmqg/mt5-small-koquad-qg](https://huggingface.co/lmqg/mt5-small-koquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size. The following table summarizes the trimming process. | | lmqg/mt5-small-koquad-qg | vocabtrimmer/mt5-small-koquad-qg-trimmed | |:---------------------------|:---------------------------|:-------------------------------------------| | parameter_size_full | 300,165,504 | 119,179,648 | | parameter_size_embedding | 256,103,424 | 75,117,568 | | vocab_size | 250,101 | 73,357 | | compression_rate_full | 100.0 | 39.7 | | compression_rate_embedding | 100.0 | 29.33 | The following table shows the parameters used to trim the vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:| | ko | vocabtrimmer/mc4_validation | text | ko | validation | | 2 |
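The trimmed model keeps the interface of the original lmqg question-generation checkpoint, so usage should mirror the other lmqg models in this dump (answer span wrapped in `<hl>` tags inside the paragraph). A minimal sketch with a placeholder input:

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-koquad-qg-trimmed")

# lmqg QG models expect the answer span highlighted with <hl> tags inside the paragraph;
# replace this placeholder with a real Korean paragraph and answer span.
paragraph_with_answer = "... <hl> 1955년 <hl> ..."
print(pipe(paragraph_with_answer))
```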
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: RL_Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.67 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="AndreMitri/RL_Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2023-03-04T13:50:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_modified_for_t5_qg_2 model-index: - name: greek-mt5-4ep-512 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # greek-mt5-4ep-512 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the squad_modified_for_t5_qg_2 dataset. It achieves the following results on the evaluation set: - Loss: 1.2918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.7423 | 0.34 | 100 | 1.5884 | | 1.916 | 0.67 | 200 | 1.4395 | | 1.8001 | 1.01 | 300 | 1.3888 | | 1.7045 | 1.35 | 400 | 1.3651 | | 1.6636 | 1.69 | 500 | 1.3388 | | 1.6221 | 2.03 | 600 | 1.3208 | | 1.5904 | 2.36 | 700 | 1.3144 | | 1.5694 | 2.7 | 800 | 1.3106 | | 1.5576 | 3.04 | 900 | 1.3017 | | 1.5428 | 3.38 | 1000 | 1.2966 | | 1.5243 | 3.72 | 1100 | 1.2918 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
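The card above contains no inference example, so a generic seq2seq sketch follows. The checkpoint path is a placeholder (the card gives only the run name, not a full Hub id), and the exact input formatting depends on how squad_modified_for_t5_qg_2 preprocesses the context and answer.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder: point at the local training output directory or the full Hub repo id.
checkpoint = "./greek-mt5-4ep-512"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Generic seq2seq call; adapt the prompt to the dataset's preprocessing scheme.
inputs = tokenizer("generate question: <context paragraph here>", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```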
CAMeL-Lab/bert-base-arabic-camelbert-da
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
449
null
--- license: creativeml-openrail-m --- !! This already has Noise Offset baked in; it may interfere with your other noise offset sources. The token is `hrrsks`; use it together with the tag `horror \(theme\)` (and tags like `night, darkness, dark, black \(theme\)` for darkness). Trained on 500ish images with the horror tag that I skimmed through - it won't output anything particular since the dataset is quite abstract and varied - but it does seem to create better creepy pictures in my opinion. Trained on a certain model that's probably in the model you're using - should be good to go on other models as well I hope. Epoch 8 is probably the best bet, but it's hard to tell what will work best on other models! <center><img src="https://i.imgur.com/TR87a7m.png" width="100%"/></center> <center><img src="https://i.imgur.com/yYB0fJC.png" width="55%"/></center> <center><img src="https://i.imgur.com/LHoZXvL.png" width="100%"/></center> <center><img src="https://i.imgur.com/sVPiU1K.png" width="55%"/></center>
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
2023-03-04T13:59:08Z
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image --- <b>Introduction:</b> This model was trained from the ground up using Stable Tuner's fine-tuning method and utilizing contrast fix for darker darks and bolder colors. The Dataset contains 4900 images trained to 35 epochs. File Name is CharHelper Fine-Tuned.safetensors. Do not forget to download the yaml file and place it in the same directory.<br /> ## Usage: ## IMPORTANT: Because of the nature of the fine-tuning method, this model is sensitive to the CFG Scale. Photorealism tends to like a <b>LOW CFG Scale</b>. Best results can be found between <b>3 and 7</b>. Some subjects that are complex, like robots, like a higher CFG, while photorealism is mostly achieved with a CFG Scale of 3 or 4. <b>Use Auto for the vae in settings. If you are using a vae based on a SDv1.5 model, you may not get the best results.</b> <br /> CharHelper Fine-Tuned was trained all at once, which means the keywords all have more power to them than the previous CharHelper models. CharHelper Fine-Tuned doesn't need keywords but includes them, and they can be mixed and matched together in order to achieve a multitude of different styles. Some Keywords were changed slightly from the last version. <b>Keywords:</b> <b>Character Styles:</b> CHV3CBigChief, CHV3CBoxer, CHV3CUrban, CHV3COrc, CHV3CGanesh, CHV3CGolem, CHV3CCyberpunk, CHV3CSamurai, CHV3CRobot, CHV3CZombie, CHV3CBird, CHV3MDragon, CHV3CKnight, CHV3CWizard, CHV3CBarb, CHV3CVehicle, CHV3CTroll, CHV3CReaper, CHV3CRogue, CHV3CAlien <b>Scenery/Styles:</b> CHV3SDark, CHV3SUrban, CHV3SEldritch, CHV3SLighthouse, CHV3SCute, CHV3SMacro, CHV3SSciFi, CHV3SWorld A minimal diffusers loading sketch is included after the example images below. ## Examples: ![Shimmering Details](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00676-1256750850-a%20realistic%20detail%20of%20a%20close%20up%20of%20a%20woman%20with%20blue%20makeup%20on%20her%20face%20in%20the%20dark%2C%20CHV3SDark%2C%20dark%20night%20time%20photo%2C%20taken%20in.png) <b>Shimmering Details</b> a realistic detail of a close up of a woman with blue makeup on her face in the dark, CHV3SDark, dark night time photo, taken in darkness, macro details, glowing blue face, dark skin, femme on a galactic shore, dark blue skin, color portrait, blue holographic face, cosmic girl, Professional, masterpiece, commissioned Negative prompt: framed, cropped, over-exposed, over-saturated, amateur, (b&w), (close-up), (duplicate), (deformed), blurry, (bad proportions), gross proportions, ugly, tiling, poorly drawn, mutation, mutated, disfigured, deformed, out of frame, blurry, bad art, text, logo, signature, watermark, cross-eyes Steps: 10, Sampler: DPM++ SDE, CFG scale: 3, Seed: 1256750850, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![SciFi Creatures](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00718-3489145082-a%20realistic%20detail%20of%20a%20blue%20skinned%20alien%2C%20dark%20supervillain%2C%208k%2C%20epic%20character%20art%2C%20Professional%2C%20masterpiece%2C%20commissioned.png) <b>Aliens</b> a realistic detail of a blue skinned alien, dark supervillain, 8k, epic character art, Professional, masterpiece, commissioned Negative prompt: framed, cropped, over-exposed, over-saturated, amateur, (b&w), (close-up), (duplicate), (deformed), blurry, (bad proportions), gross proportions, ugly, tiling, poorly drawn, mutation, mutated, disfigured, deformed, out of frame, blurry, bad art, text, logo, signature, watermark, cross-eyes Steps: 10, Sampler: DPM++ SDE, CFG scale: 7, Seed: 
3489145082, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Creepy Clown Ladies](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00079-912489906-a%20realistic%20detail%20of%20a%20very%20creepy%20zombie%20clown%20lady%2C%20wearing%20ornate%20streetwear%2C%20beautiful%2C%20detailed%20portrait%2C%20complexity%2C%204k%2C.png) <b>Creepy Clown Ladies</b> a realistic detail of a very creepy zombie clown lady, wearing ornate streetwear, beautiful, detailed portrait, complexity, 4k, concept art, sharp focus, volumetric lighting, cinematic lighting, studio quality Negative prompt: framed, cropped, over-exposed, over-saturated, amateur, (b&w), (close-up), (duplicate), (deformed), blurry, (bad proportions), gross proportions, ugly, tiling, poorly drawn, mutation, mutated, disfigured, deformed, out of frame, blurry, bad art, text, logo, signature, watermark, cross-eyes Steps: 10, Sampler: DPM++ SDE, CFG scale: 5.5, Seed: 912489906, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Big Chiefs](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/01703-2798464398-an%20analog%20photo%20of%20a%20man%20wearing%20a%20colorful%20feathered%20costume%20with%20ornate%20patterns%20of%20beads%20and%20colorful%20jewels%20at%20a%20carnival%20ce.png) <b>Big Chiefs</b> an analog photo of a man wearing a colorful feathered costume with ornate patterns of beads and colorful jewels at a carnival celebration, CHV3CBigChief, fixed in post, color corrected, Professional, masterpiece, commissioned, attractive face, facial expression, professional hands, professional anatomy Negative prompt: smiling, face paint, long hair, crossed eyes, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 3.5, Seed: 2798464398, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Robotic Spiders](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00920-4212360837-Steampunk%20cybernetic%20biomechanical%20jumping%20spider%2C%20very%20coherent%20symmetrical%20artwork%2C%20CHV3CRobot%2C%20CHV3CVehicle%2C%20CHV3SMacro%2C%20Macr.png) <b>Robotic Spiders</b> Steampunk cybernetic biomechanical jumping spider, very coherent symmetrical artwork, CHV3CRobot, CHV3CVehicle, CHV3SMacro, Macro details, focus stacking, realistic render, 8k, micro detail, elegant, highly detailed, centered, smooth, sharp focus, artgerm, tomasz alen kopera, wlop Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, 
bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 7, Seed: 4212360837, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Cybernetic Andriods](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00775-3438218591-a%20woman%20with%20tattoos%20and%20a%20face%20mask%2C%20CHV3CCyberpunk%2C%20portrait%20of%20a%20cyberpunk%20cyborg%2C%20portrait%20of%20a%20cyborg%2C%20cyborg%20woman%2C%20cyborg.png) <b>Cybernetic Andriods</b> a woman with tattoos and a face mask, CHV3CCyberpunk, portrait of a cyberpunk cyborg, portrait of a cyborg, cyborg woman, cyborg girl, cute cyborg girl, portrait of a cyberpunk machine, cyberpunk skeleton, cyberpunk face Negative prompt: framed, cropped, over-exposed, over-saturated, amateur, (b&w), (close-up), (duplicate), (deformed), blurry, (bad proportions), gross proportions, ugly, tiling, poorly drawn, mutation, mutated, disfigured, deformed, out of frame, blurry, bad art, text, logo, signature, watermark, cross-eyes Steps: 10, Sampler: DPM++ SDE, CFG scale: 4.5, Seed: 3438218591, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Cute Rubber Duckies](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00610-1139349539-Shiny%20gemstone%20in%20the%20shape%20of%20a%20rubber%20duck%20floating%20in%20a%20pool%20of%20colorful%20perfume%2C%20liquid%20ripples%2C%20waves%2C%20water%20droplets%2C%20phot.png) <b>Cute Rubber Duckies</b> Shiny gemstone in the shape of a rubber duck floating in a pool of colorful perfume, liquid ripples, waves, water droplets, photorealism, mystical, enigmatic, digital oil painting, trending on artstation, Professional, masterpiece, commissioned Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy, nfixer Steps: 10, Sampler: DPM++ SDE, CFG scale: 4, Seed: 1139349539, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Big Cheif Ganesha](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/02005-2766758959-Ganesh%20in%20an%20elaborate%20feathered%20costume%20with%202%20arms%2C%20anthropomorphic%20elephant%20Shinigami%20at%20a%20shrine%2C%20a%20realistic%20detail%2C%20CHV3CS.png) <b>Big Cheif Ganesh</b> Ganesh in an elaborate feathered costume with 2 arms, anthropomorphic elephant Shinigami at a shrine, a realistic detail, CHV3CSamurai, CHV3CBigChief, CHV3CGanesh, Professional, masterpiece, commissioned, professional hands, professional anatomy Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out 
of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 4, Seed: 2766758959, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Astronauts](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/01586-3046156075-a%20professional%20Analog%20photo%20of%20a%20female%20space%20astronaut%20wearing%20an%20blue%20and%20white%20space%20suit%20exploring%20a%20river%20in%20a%20dark%20mossy%20c.png) <b>Astronauts</b> a professional Analog photo of a female space astronaut wearing an blue and white space suit exploring a river in a dark mossy canyon on another planet, helmet, medium shot portrait, gold tinted face shield, (dark atmosphere), haze, halation, bloom, dramatic atmosphere, sci-fi movie still Negative prompt: crossed eyes, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 4, Seed: 3046156075, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Zombies](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00814-2922910579-a%20realistic%20detail%20of%20a%20dark%20close-up%20of%20the%20face%20of%20a%20creepy%20haunting%20undead%20zombie%2C%20CHV3CZombie%2C%20horror%20concept%20art%2C%20zombified.png) <b>Zombies</b> a realistic detail of a dark close-up of the face of a creepy haunting undead zombie, CHV3CZombie, horror concept art, zombified mutant flesh creature, Artwork by the walking dead, Professional, masterpiece, commissioned, wojtek fus, stefan gesell, Negative prompt: symmetry, framed, cropped, over-exposed, over-saturated, amateur, (b&w), (close-up), (duplicate), (deformed), blurry, (bad proportions), gross proportions, ugly, tiling, poorly drawn, mutation, mutated, disfigured, deformed, out of frame, blurry, bad art, text, logo, signature, watermark, cross-eyes Steps: 10, Sampler: DPM++ SDE, CFG scale: 4.5, Seed: 2922910579, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Dark Neon Cyberpunks](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/01072-2772342268-a%20beautiful%20geisha%20wearing%20a%20kabuki%20mask%2C%20CHV3CSamurai%20elegant%20neon%20light%20tribal%20armor%2C%20shikigami%2C%20CHV3SDark%20dark%20background%2C%20cy.png) <b>Dark Neon Cyberpunks</b> a beautiful geisha wearing a kabuki mask, CHV3CSamurai elegant neon light tribal armor, shikigami, CHV3SDark dark background, cyberpunk darksynth, Professional, masterpiece, commissioned, professional hands, professional anatomy, muted saturation Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, 
disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 5, Seed: 2772342268, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Dark Neon Robots](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/01096-3588684930-a%20futuristic%20cybernetic%20robot%20wearing%20neon%20samurai%20armor%2C%20dark%20background%2C%20vaporware%2C%20cyberpunk%20darksynth%2C%20Professional%2C%20masterp.png) <b>Dark Neon Robots</b> a futuristic cybernetic robot wearing neon samurai armor, dark background, vaporware, cyberpunk darksynth, Professional, masterpiece, commissioned, muted saturation, artwork by daft punk Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 3.5, Seed: 3588684930, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Dramatic Lighting](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00652-1111180199-a%20realistic%20portrait%20of%20a%20beautiful%20woman%20holding%20a%20paper%20boat%20lantern%20in%20the%20dark%2C%20CHV3SDark%2C%20photo%20taken%20at%20night%2C%20on%20a%20dark%20b.png) <b>Dramatic Lighting</b> a realistic portrait of a beautiful woman holding a paper boat lantern in the dark, CHV3SDark, photo taken at night, on a dark background, floating lanterns, unsplash contest winning photo, shot with sigma f/ 4.2 Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 5, Seed: 1111180199, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Big Chief Bears](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/01165-338610140-a%20n%20illustrated%20medium%20shot%20portrait%20of%20an%20anthropomorphic%20dire%20wolf%20in%20a%20colorful%20elaborate%20feathered%20costume%20with%20ornate%20detai.png) <b>Big Chief Bears</b> a n illustrated medium shot portrait of an anthropomorphic dire wolf in a colorful elaborate feathered costume with ornate details, anime style, CHV3CBigChief, warhammer 40k, octane, bling, Professional, masterpiece, commissioned, at a comic-con, artwork by wlop and loish Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing 
arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 4, Seed: 338610140, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Artistic Landscapes](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/01270-45256504-a%20colorful%20vector%20illustration%20of%20a%20neon%20temple%20with%20an%20elaborate%20Torana%20gateway%20in%20absolute%20darkness%20on%20a%20small%20island%20at%20night.png) <b>Artistic Landscapes</b> a colorful vector illustration of a neon temple with an elaborate Torana gateway in absolute darkness on a small island at night with colorful neon star trails, black shadows, clear sky with professional star trails, high antialiasing, night, cliffside, crashing waves, highlands, farm, crisp clean shapes, mountains, serene landscape, neon inkpunk color scheme, painting of a listing for a realty website, artwork by studio ghibli, spirited away Negative prompt: cartoon, painting, painted, drawn, drawing, anime, longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality Steps: 10, Sampler: DPM++ SDE, CFG scale: 5.5, Seed: 45256504, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Knights](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00616-241022433-Diablo%20action%20game%20cyborg%20viking%2C%20highly%20detailed%2C%20sharp%20focus%2C%20cinematic%20lighting%2C%20art%2C%20octane%20render%2C%20unreal%20engine%20lumen%2C%20ver.png) <b>Knights</b> Diablo action game cyborg viking, highly detailed, sharp focus, cinematic lighting, art, octane render, unreal engine lumen, very coherent. 
cinematic, hyper realism, high detail, octane render, 8k, Professional, masterpiece, commissioned Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy, nfixer Steps: 10, Sampler: DPM++ SDE, CFG scale: 6, Seed: 241022433, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Fighters](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00282-3289278897-CHV3CKBoxer%2C%20a%20realistic%20detail%20of%20a%20close%20up%20of%20a%20man%20wearing%20vibrant%20boxing%20gloves%20is%20in%20a%20boxing%20ring%2C%20photograph%20by%20Esther%20L.png) <b>Fighters</b> CHV3CKBoxer, a realistic detail of a close up of a man wearing vibrant boxing gloves is in a boxing ring, photograph by Esther Lin, posing for a fight, boxing stance, Professional, masterpiece, commissioned, attractive face, facial expression, professional anatomy Negative prompt: framed, cropped, over-exposed, over-saturated, amateur, (b&w), (close-up), (duplicate), (deformed), blurry, (bad proportions), gross proportions, ugly, tiling, poorly drawn, mutation, mutated, disfigured, deformed, out of frame, blurry, bad art, text, logo, signature, watermark, cross-eyes Steps: 10, Sampler: DPM++ SDE, CFG scale: 4.5, Seed: 3289278897, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Illustrated Characters](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00975-3745736625-A%20medium%20profile%20shot%20of%20an%20anthropomorphic%20evil%20looking%20furry%20bear%20monster%20in%20heavy%20CHV3CKnight%20armor%2C%20hyper%20realistic%2C%20extreme.png) <b>Illustrated Characters</b> A medium profile shot of an anthropomorphic evil looking furry bear monster in heavy CHV3CKnight armor, hyper realistic, extremely detailed, 8k wallpaper, Professional, masterpiece, commissioned, flat shading, ink punk, thick pastel paint, thick pen lines, attractive face, facial expression, professional hands, professional anatomy Negative prompt: over-saturated, over-exposed, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 5.5, Seed: 3745736625, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Stylish Photorealism](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/01569-2814225442-a%20professional%20Analog%20photo%20of%20a%20medium%20shot%20of%20beautiful%20urban%20model%20wearing%20Coco%20Chanel%20out%20at%20night%20in%20the%20city%2C%20armani%20fur%20c.png) <b>Stylish Photorealism</b> a 
professional Analog photo of a medium shot of beautiful urban model wearing Coco Chanel out at night in the city, armani fur coat, nikon D5600, 35mm lens, Professional, masterpiece, commissioned, attractive face, facial expression, fixed in post, color corrected Negative prompt: crossed eyes, amateur, extra limbs, extra barrel, b&w, close-up, duplicate, mutilated, extra fingers, mutated hands, deformed, blurry, bad proportions, extra limbs, cloned face, out of frame, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck, tripod, tube, ugly, tiling, poorly drawn hands, poorly drawn feet, poorly drawn face, out of frame, mutation, mutated, extra limbs, extra legs, extra arms, disfigured, deformed, cross-eye, body out of frame, blurry, bad art, bad anatomy Steps: 10, Sampler: DPM++ SDE, CFG scale: 3.5, Seed: 2814225442, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3 ![Futuristic Masks](https://huggingface.co/ManglerFTW/CharHelper_Fine-Tuned/resolve/main/images/00002-4242822040-tribal%20mask%20in%20wakandan%20style%20cyberpunk%2C%20ultra%20realistic%2C%20concept%20art%2C%20intricate%20details%2C%20eerie%2C%20horror%2C%20highly%20detailed%2C%20photor.png) <b>Futuristic Masks</b> tribal mask in wakandan style cyberpunk, ultra realistic, concept art, intricate details, eerie, horror, highly detailed, photorealistic, octane render, 8 k, unreal engine. art by artgerm and greg rutkowski and alphonse mucha, Professional, masterpiece, commissioned Negative prompt: framed, cropped, over-exposed, over-saturated, amateur, (b&w), (close-up), (duplicate), (deformed), blurry, (bad proportions), gross proportions, ugly, tiling, poorly drawn, mutation, mutated, disfigured, deformed, out of frame, blurry, bad art, text, logo, signature, watermark, cross-eyes Steps: 10, Sampler: DPM++ SDE, CFG scale: 7, Seed: 4242822040, Size: 768x896, Model hash: 4812a6e5a5, ENSD: 3
CAMeL-Lab/bert-base-arabic-camelbert-mix-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,860
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## Linaqruf/anything-v3.0 This repository contains the models from [Linaqruf/anything-v3.0](https://huggingface.co/Linaqruf/anything-v3.0) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/Linaqruf/anything-v3.0) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/Linaqruf-anything-v3.0-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/Linaqruf-anything-v3.0-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
132
2023-03-04T14:05:46Z
--- license: openrail++ tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## stabilityai/stable-diffusion-2-1-base This repository contains the models from [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/stabilityai-stable-diffusion-2-1-base-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/stabilityai-stable-diffusion-2-1-base-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,862
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## wavymulder/Analog-Diffusion This repository contains the models from [wavymulder/Analog-Diffusion](https://huggingface.co/wavymulder/Analog-Diffusion) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/wavymulder/Analog-Diffusion) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/wavymulder-Analog-Diffusion-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/wavymulder-Analog-Diffusion-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
855
null
--- license: openrail++ tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## stabilityai/stable-diffusion-2-1 This repository contains the models from [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/stabilityai/stable-diffusion-2-1) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/stabilityai-stable-diffusion-2-1-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/stabilityai-stable-diffusion-2-1-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## andite/anything-v4.0 This repository contains the models from [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/andite/anything-v4.0) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/andite-anything-v4.0-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/andite-anything-v4.0-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
2023-03-04T14:19:18Z
--- license: other tags: - stable-diffusion - text-to-image - openvino --- # OpenVINO Stable Diffusion ## lambdalabs/sd-pokemon-diffusers This repository contains the models from [lambdalabs/sd-pokemon-diffusers](https://huggingface.co/lambdalabs/sd-pokemon-diffusers) converted to OpenVINO, for accelerated inference on CPU or Intel GPU with OpenVINO's integration into Optimum: [optimum-intel](https://github.com/huggingface/optimum-intel#openvino). The model weights are stored with FP16 precision, which reduces the size of the model by half. Please check out the [source model repository](https://huggingface.co/lambdalabs/sd-pokemon-diffusers) for more information about the model and its license. To install the requirements for this demo, do `pip install "optimum-intel[openvino, diffusers]"`. This installs all the necessary dependencies, including Transformers and OpenVINO. For more detailed steps, please see this [installation guide](https://github.com/helena-intel/optimum-intel/wiki/OpenVINO-Integration-Installation-Guide). The simplest way to generate an image with stable diffusion takes only two lines of code, as shown below. The first line downloads the model from the Hugging Face hub (if it has not been downloaded before) and loads it; the second line generates an image. ```python from optimum.intel.openvino import OVStableDiffusionPipeline stable_diffusion = OVStableDiffusionPipeline.from_pretrained("helenai/lambdalabs-sd-pokemon-diffusers-ov") images = stable_diffusion("a random image").images ``` The following example code uses static shapes for even faster inference. Using larger image sizes will require more memory and take longer to generate. If you have an 11th generation or later Intel Core processor, you can use the integrated GPU for inference, and if you have an Intel discrete GPU, you can use that. Add the line `stable_diffusion.to("GPU")` before `stable_diffusion.compile()` in the example below. Model loading will take some time the first time, but will be faster after that, because the model will be cached. On GPU, for stable diffusion only static shapes are supported at the moment. ```python from optimum.intel.openvino.modeling_diffusion import OVStableDiffusionPipeline batch_size = 1 num_images_per_prompt = 1 height = 256 width = 256 # load the model and reshape to static shapes for faster inference model_id = "helenai/lambdalabs-sd-pokemon-diffusers-ov" stable_diffusion = OVStableDiffusionPipeline.from_pretrained(model_id, compile=False) stable_diffusion.reshape( batch_size=batch_size, height=height, width=width, num_images_per_prompt=num_images_per_prompt) stable_diffusion.compile() # generate image! prompt = "a random image" images = stable_diffusion(prompt, height=height, width=width, num_images_per_prompt=num_images_per_prompt).images images[0].save("result.png") ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
574
null
# Vocabulary Trimmed [lmqg/mt5-base-koquad-qg](https://huggingface.co/lmqg/mt5-base-koquad-qg): `vocabtrimmer/mt5-base-koquad-qg-trimmed` This model is a trimmed version of [lmqg/mt5-base-koquad-qg](https://huggingface.co/lmqg/mt5-base-koquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size. The following table shows a summary of the trimming process. | | lmqg/mt5-base-koquad-qg | vocabtrimmer/mt5-base-koquad-qg-trimmed | |:---------------------------|:--------------------------|:------------------------------------------| | parameter_size_full | 582,384,384 | 310,905,600 | | parameter_size_embedding | 384,155,136 | 112,676,352 | | vocab_size | 250,101 | 73,357 | | compression_rate_full | 100.0 | 53.38 | | compression_rate_embedding | 100.0 | 29.33 | The following table shows the parameters used to trim the vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:| | ko | vocabtrimmer/mc4_validation | text | ko | validation | | 2 |
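The trimmed checkpoint can be loaded with the standard `transformers` seq2seq classes. The sketch below is illustrative only: the input format (marking the answer span with `<hl>` tokens and the sample Korean passage) follows the upstream `lmqg` convention and is an assumption here, not something documented in this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the vocabulary-trimmed question-generation model (repo id from the table above)
model_id = "vocabtrimmer/mt5-base-koquad-qg-trimmed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: lmqg-style QG models typically expect the answer span wrapped in <hl> tokens
text = "1990년, <hl> 토마스 <hl> 는 서울에서 회사를 설립했다."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```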
CAUKiel/JavaBERT
[ "pytorch", "safetensors", "bert", "fill-mask", "code", "arxiv:2110.10404", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
388
null
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image --- # RacyMixV1 Merged by weighted sum: <strong>*PastelMix 0.6*</strong> + <strong>*RacyV1 0.4*</strong> (I forgot the recipe). Recommended VAE: <strong>kl-f8-anime2</strong> (https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt) Negative prompt: <strong>EasyNegative</strong> (https://huggingface.co/datasets/gsdf/EasyNegative) The generation of hands may be slightly unstable; please adjust the negative prompt yourself. If no specific background is specified, there is a high probability of generating a city or a supermarket. # Examples ``` 1girl, sarong bikini nail polish skindentation,cowboy shot, beach, sunlight, blue sky, Negative prompt: EasyNegative, Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3550464031, Size: 512x768, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 1.5, Hires upscaler: Latent (nearest-exact) ``` <img src="https://i.imgur.com/DeavyG1.png" width="512" height="768"> <br> ``` ((perfect details, highres, ultra-detailed, illustration)), Hindu mythology, Chandra, deity, male, serene expression, crescent moon on forehead, white complexion, four arms, holding conch shell and discus, lotus flower, cosmic background, stars, peaceful Negative prompt: EasyNegative, Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 3352669632, Size: 512x768, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Hires upscale: 1.5, Hires upscaler: Latent (nearest-exact) ``` <img src="https://i.imgur.com/PFzyRrp.png" width="512" height="768"> <br> ``` profile,charter Layout,full body,stand at attention,look at viewer,put down hands,fox girl,fancy clothes,detail clothes,white background, Negative prompt: (low quality, worst quality:1.4),(EasyNegative:1.4),(3 legs:1.3),(NG_DeepNegative_V1_75T:1.3), (painting by bad-artist:1.3), (negprompt5:1.2), (bad-image-v2-39000:1.3), lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 25, Sampler: DPM++ SDE Karras, CFG scale: 8, Seed: 1954153806, Size: 512x768, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B ``` <img src="https://i.imgur.com/Jdc2VQY.png" width="512" height="768"> <br>
CL/safe-math-bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-04T14:27:20Z
--- license: apache-2.0 language: - en metrics: - f1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # switch-base-8-finetuned This model is a fine-tuned version of [google/switch-base-8](https://huggingface.co/google/switch-base-8) on the SemEval-2018-Task-2 emojis english dataset. It achieves the following results on the evaluation set: - Accuracy: 48.040 % - Mac-F1: 33.239 % # Model description ## More information needed - **Model type:** Language model - **Language(s) (NLP):** English - **License:** Apache 2.0 - **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch) - **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints) - **Resources for more information:** - [Research paper](https://arxiv.org/pdf/2101.03961.pdf) - [GitHub Repo](https://github.com/google-research/t5x) - [Hugging Face Switch Transformers Docs (Similar to T5) ](https://huggingface.co/docs/transformers/model_doc/switch_transformers) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-4 - train_batch_size: 464 - eval_batch_size: 512 - seed: 42 - num_epochs: 30 ### Testing results | SemEval Testing Data | accuracy | Mac-F1 | |:---------------------------------------------------:|:------------:|:----------:| | "Tubingen-Oslo" First SemEval Team | 47.09% | 35.99% | | [switch-base-8-finetuned-SemEval-2018-emojis-cen-1](https://huggingface.co/Karim-Gamal/switch-base-8-finetuned-SemEval-2018-emojis-cen-1) | 48.040% | 33.239% | | [switch-base-8-finetuned-SemEval-2018-emojis-cen-2](https://huggingface.co/Karim-Gamal/switch-base-8-finetuned-SemEval-2018-emojis-cen-2) | 50.174% | 36.660% | | [switch-base-8-finetuned-SemEval-2018-emojis-IID-Fed](https://huggingface.co/Karim-Gamal/switch-base-8-finetuned-SemEval-2018-emojis-IID-Fed) | 50.750% | 37.355% | ## Google colab to test the models on SemEval test dataset : [The Notebook](https://colab.research.google.com/drive/1CJWfCyT8ofz1xg6W_F5YCMyTpCs36_PP?usp=sharing) ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Tokenizers 0.13.2
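Because Switch Transformers are text-to-text models, a fine-tuned checkpoint like this one can in principle be queried with the standard seq2seq API. The snippet below is a sketch, not the authors' evaluation code: the repo id is taken from the results table above, and the assumption that the model emits its emoji prediction as generated text is mine.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id taken from the testing-results table above; output format is an assumption
model_id = "Karim-Gamal/switch-base-8-finetuned-SemEval-2018-emojis-cen-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

tweet = "Sunday afternoon walking through Venice beach"
inputs = tokenizer(tweet, return_tensors="pt")
outputs = model.generate(**inputs, max_length=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # assumed: an emoji label
```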
CLTL/icf-domains
[ "pytorch", "roberta", "nl", "transformers", "license:mit", "text-classification" ]
text-classification
{ "architectures": [ "RobertaForMultiLabelSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: ViditRaj/xlmROBERTA_Ads_Classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ViditRaj/xlmROBERTA_Ads_Classifier This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0768 - Validation Loss: 0.2352 - Train Accuracy: 0.9348 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 465, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.4309 | 0.2186 | 0.9249 | 0 | | 0.1939 | 0.1786 | 0.9348 | 1 | | 0.1383 | 0.1973 | 0.9348 | 2 | | 0.1000 | 0.2296 | 0.9348 | 3 | | 0.0768 | 0.2352 | 0.9348 | 4 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
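Because the model was trained with Keras, it can be loaded back with the TensorFlow classes in `transformers`. This is a minimal sketch; the label mapping (e.g. ad vs. non-ad) is not documented above, so the interpretation of the predicted class id is an assumption.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "ViditRaj/xlmROBERTA_Ads_Classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Sample text; what each class id means is not documented in the card above
inputs = tokenizer("Buy one, get one free - limited time offer!", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(pred)
```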
CLTL/icf-levels-ber
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/reklambox2-64-26-xlm This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/reklambox2-64-26-xlm") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
CLTL/icf-levels-mbw
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 485.50 +/- 180.56 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga haidlir -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga haidlir -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga haidlir ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
CSResearcher/TestModel
[ "license:mit" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-8x8-Slippery_v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 metrics: - type: mean_reward value: 0.48 +/- 0.50 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="kraken2404/q-FrozenLake-v1-8x8-Slippery_v1", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
CZWin32768/xlm-align
[ "pytorch", "xlm-roberta", "fill-mask", "arxiv:2106.06381", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 541.00 +/- 109.33 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga varevshatyan -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga varevshatyan -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga varevshatyan ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
CallumRai/HansardGPT2
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ja-15000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size. The following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ja-15000 | |:---------------------------|:------------------|:-----------------------------------------| | parameter_size_full | 582,401,280 | 221,273,856 | | parameter_size_embedding | 384,172,032 | 23,044,608 | | vocab_size | 250,112 | 15,003 | | compression_rate_full | 100.0 | 37.99 | | compression_rate_embedding | 100.0 | 6.0 | The following table shows the parameters used to trim the vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ja | vocabtrimmer/mc4_validation | text | ja | validation | 15000 | 2 |
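A quick way to sanity-check the figures in the table is to load the trimmed checkpoint and print its vocabulary size, parameter count, and embedding shape. This is a small illustrative check, not part of the vocabtrimmer tooling itself.

```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM

# Compare the trimmed model against the figures reported in the table above
model_id = "vocabtrimmer/mt5-base-trimmed-ja-15000"
config = AutoConfig.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

print(f"vocab_size: {config.vocab_size}")          # expected ~15,003
print(f"parameters: {model.num_parameters():,}")   # expected ~221,273,856
print(f"embedding shape: {tuple(model.get_input_embeddings().weight.shape)}")
```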
CalvinHuang/mt5-small-finetuned-amazon-en-es
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "summarization", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
summarization
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/reklambox2-64-30-xlm This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/reklambox2-64-30-xlm") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Cameron/BERT-Jigsaw
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: cartpole4 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Cameron/BERT-SBIC-targetcategory
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: ViditRaj/BERT_LATEST_Ads_Classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ViditRaj/BERT_LATEST_Ads_Classifier This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0433 - Validation Loss: 0.2119 - Train Accuracy: 0.9320 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 465, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.3301 | 0.2277 | 0.9150 | 0 | | 0.1607 | 0.1973 | 0.9235 | 1 | | 0.0958 | 0.1931 | 0.9348 | 2 | | 0.0592 | 0.2309 | 0.9292 | 3 | | 0.0433 | 0.2119 | 0.9320 | 4 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
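As with the XLM-RoBERTa variant above, the fine-tuned checkpoint can be exercised through the high-level pipeline API. This is a sketch under the assumption that the repo hosts the Keras weights described above; the label names it prints depend on how the classification head was configured during training, which is not documented here.

```python
from transformers import pipeline

# framework="tf" loads the TensorFlow/Keras weights this card was trained with
classifier = pipeline(
    "text-classification",
    model="ViditRaj/BERT_LATEST_Ads_Classifier",
    framework="tf",
)
print(classifier("Huge discounts this weekend only - click the link to claim yours!"))
```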
Cameron/BERT-jigsaw-severetoxic
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ja-30000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ja-30000 | |:---------------------------|:------------------|:-----------------------------------------| | parameter_size_full | 582,401,280 | 244,310,784 | | parameter_size_embedding | 384,172,032 | 46,081,536 | | vocab_size | 250,112 | 30,001 | | compression_rate_full | 100.0 | 41.95 | | compression_rate_embedding | 100.0 | 12.0 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ja | vocabtrimmer/mc4_validation | text | ja | validation | 30000 | 2 |
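Since the trimmed checkpoint keeps the standard mT5 architecture, it loads like any other seq2seq model. The following is a minimal sketch, assuming the repo id from the summary table above; note that the trimmed model is not task fine-tuned, so in practice it serves as a smaller starting point for further fine-tuning.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id taken from the trimming summary above.
model_id = "vocabtrimmer/mt5-base-trimmed-ja-30000"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A quick forward pass just to confirm the reduced Japanese vocabulary
# loads and generates; real use would fine-tune this checkpoint first.
inputs = tokenizer("こんにちは、世界。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```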
Cameron/BERT-mdgender-convai-ternary
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38
null
Access to model lisastf/bert-base-uncased_tuned_projet is restricted and you are not in the authorized list. Visit https://huggingface.co/lisastf/bert-base-uncased_tuned_projet to ask for access.
Cameron/BERT-mdgender-wizard
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: split metrics: - name: Accuracy type: accuracy value: 0.928 - name: F1 type: f1 value: 0.9279822791628913 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2142 - Accuracy: 0.928 - F1: 0.9280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8453 | 1.0 | 250 | 0.3091 | 0.9085 | 0.9057 | | 0.2485 | 2.0 | 500 | 0.2142 | 0.928 | 0.9280 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.10.3
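The card does not include an inference example. Below is a minimal sketch using the Transformers `pipeline` API; the model path is a placeholder, since the card does not state a Hub repo id, and the predicted labels come from the emotion dataset the model was fine-tuned on.

```python
from transformers import pipeline

# Placeholder path: point this at the fine-tuned checkpoint described in
# the card (a local output directory or its Hub repo id).
classifier = pipeline(
    "text-classification",
    model="path/to/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I'm so happy this finally works!"))
# Expected output shape: [{'label': ..., 'score': ...}]
```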
Camzure/MaamiBot-test
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ja-75000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ja-75000 | |:---------------------------|:------------------|:-----------------------------------------| | parameter_size_full | 582,401,280 | 313,430,784 | | parameter_size_embedding | 384,172,032 | 115,201,536 | | vocab_size | 250,112 | 75,001 | | compression_rate_full | 100.0 | 53.82 | | compression_rate_embedding | 100.0 | 29.99 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ja | vocabtrimmer/mc4_validation | text | ja | validation | 75000 | 2 |
Camzure/MaamiBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ja-120000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ja-120000 | |:---------------------------|:------------------|:------------------------------------------| | parameter_size_full | 582,401,280 | 382,550,784 | | parameter_size_embedding | 384,172,032 | 184,321,536 | | vocab_size | 250,112 | 120,001 | | compression_rate_full | 100.0 | 65.69 | | compression_rate_embedding | 100.0 | 47.98 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ja | vocabtrimmer/mc4_validation | text | ja | validation | 120000 | 2 |
Canadiancaleb/DialoGPT-small-jesse
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -132.17 +/- 60.61 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Jbot/LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
Canadiancaleb/jessebot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: JessicaHsu/ppo-PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Canyonevo/DialoGPT-medium-KingHenry
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m base_model: darkstorm2150/Protogen_x5.8_Official_Release instance_prompt: photo of gaal woman tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - balibell/lora_ym_p These are LoRA adaption weights for darkstorm2150/Protogen_x5.8_Official_Release. The weights were trained on photo of gaal woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
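A usage sketch for the LoRA weights follows, assuming the diffusers convention in which DreamBooth LoRA adapters are attached to the UNet's attention processors via `load_attn_procs`; the base-model and adapter repo ids are taken from the card, and sampler settings are left at their defaults.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model and LoRA adapter repo ids are taken from the card above.
base_model = "darkstorm2150/Protogen_x5.8_Official_Release"
lora_repo = "balibell/lora_ym_p"

pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
pipe.to("cuda")

# Attach the LoRA adaption weights to the UNet's attention processors.
pipe.unet.load_attn_procs(lora_repo)

# Use the instance prompt the adapter was trained on.
image = pipe("photo of gaal woman", num_inference_steps=30).images[0]
image.save("gaal_woman.png")
```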
Capreolus/bert-base-msmarco
[ "pytorch", "tf", "jax", "bert", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
238
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ja-90000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ja-90000 | |:---------------------------|:------------------|:-----------------------------------------| | parameter_size_full | 582,401,280 | 336,470,784 | | parameter_size_embedding | 384,172,032 | 138,241,536 | | vocab_size | 250,112 | 90,001 | | compression_rate_full | 100.0 | 57.77 | | compression_rate_embedding | 100.0 | 35.98 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ja | vocabtrimmer/mc4_validation | text | ja | validation | 90000 | 2 |
Capreolus/birch-bert-large-car_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ja-45000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ja-45000 | |:---------------------------|:------------------|:-----------------------------------------| | parameter_size_full | 582,401,280 | 267,350,784 | | parameter_size_embedding | 384,172,032 | 69,121,536 | | vocab_size | 250,112 | 45,001 | | compression_rate_full | 100.0 | 45.9 | | compression_rate_embedding | 100.0 | 17.99 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ja | vocabtrimmer/mc4_validation | text | ja | validation | 45000 | 2 |
Capreolus/birch-bert-large-mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: ViditRaj/ROBERTA_Ads_Classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ViditRaj/ROBERTA_Ads_Classifier This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0420 - Validation Loss: 0.2158 - Train Accuracy: 0.9348 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 465, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.3509 | 0.1830 | 0.9334 | 0 | | 0.1446 | 0.1676 | 0.9320 | 1 | | 0.0980 | 0.1691 | 0.9377 | 2 | | 0.0682 | 0.1938 | 0.9334 | 3 | | 0.0420 | 0.2158 | 0.9348 | 4 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
Capreolus/electra-base-msmarco
[ "pytorch", "tf", "electra", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
{ "architectures": [ "ElectraForSequenceClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
110
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### arabic-amera model trained by Falah.G.Salieh ## You can visit my blog: https://iraqprogrammer.wordpress.com/ ## FB: https://web.facebook.com/falahgs4ai ## Email: [email protected] With Stable Diffusion, we can now create AI-generated art from a set of training images. With this model, we can create images of an Arab princess, called arabic-amera (اميرة عربية in Arabic), as famous images, or anything else you can think of. Test the concept via the A1111 Colab (fast-Colab-A1111). # Add the arabic-amera style word to any prompt: # prompts: 25yo Arabic smiling female looking at the viewer, a detailed face, attractive, full elegant dress, wavy chestnut hair, ((closeup)), perfect eyes, (interior home background), (photorealistic), intricate, highly detailed, absurd res, symmetrical, backlighting, colorful, concept art, (photography:1.5), sharp focus, illustration, award-winning, 8K, by arabic-amera style # Sample pictures of this concept: ![0](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00010-3891662576.png) ![1](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00001-3333757120.png) ![2](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00011-3891662577.png) ![3](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00020-3428872212.png) ![4](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00002-3333757120.png) ![5](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00008-2162585874.png) ![6](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00023-1702278059.png) ![7](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00003-3333757120.png) ![8](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00004-3333757120.png) ![9](https://huggingface.co/Falah/arabic-amera/resolve/main/sample_images/00009-3891662575.png)
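For use outside the A1111 web UI, the concept can also be loaded with `diffusers`. This is a minimal sketch, assuming the repo id visible in the sample-image links above and default pipeline settings; the prompt is a shortened version of the example prompt from the card.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id inferred from the sample image URLs in the card above.
pipe = StableDiffusionPipeline.from_pretrained("Falah/arabic-amera", torch_dtype=torch.float16)
pipe.to("cuda")

prompt = (
    "25yo Arabic smiling female looking at the viewer, detailed face, "
    "elegant dress, photorealistic, sharp focus, 8K, by arabic-amera style"
)
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("arabic_amera_sample.png")
```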
Captain-1337/CrudeBERT
[ "pytorch", "bert", "text-classification", "arxiv:1908.10063", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- tags: - generated_from_trainer model-index: - name: rugpt3small_based_on_gpt2-tat_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rugpt3small_based_on_gpt2-tat_model This model is a fine-tuned version of [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 480 | 3.4394 | | 3.6184 | 2.0 | 960 | 3.4045 | | 3.3493 | 3.0 | 1440 | 3.4002 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
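The card stops at the training results; a minimal generation sketch is shown below. The model path is a placeholder (the card gives only the run name, not a Hub repo id), and the Tatar prompt is illustrative only.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder path: point this at the fine-tuned checkpoint from the card.
model_path = "path/to/rugpt3small_based_on_gpt2-tat_model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("Сәлам, дуслар!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```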
Carlork314/Xd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ru-75000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ru-75000 | |:---------------------------|:------------------|:-----------------------------------------| | parameter_size_full | 582,401,280 | 313,430,784 | | parameter_size_embedding | 384,172,032 | 115,201,536 | | vocab_size | 250,112 | 75,001 | | compression_rate_full | 100.0 | 53.82 | | compression_rate_embedding | 100.0 | 29.99 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ru | vocabtrimmer/mc4_validation | text | ru | validation | 75000 | 2 |
CarlosPR/mt5-spanish-memmories-analysis
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ru-120000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ru-120000 | |:---------------------------|:------------------|:------------------------------------------| | parameter_size_full | 582,401,280 | 382,550,784 | | parameter_size_embedding | 384,172,032 | 184,321,536 | | vocab_size | 250,112 | 120,001 | | compression_rate_full | 100.0 | 65.69 | | compression_rate_embedding | 100.0 | 47.98 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ru | vocabtrimmer/mc4_validation | text | ru | validation | 120000 | 2 |
Cat/Kitty
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# Vocabulary Trimmed [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg): `vocabtrimmer/mt5-small-ruquad-qg-trimmed` This model is a trimmed version of [lmqg/mt5-small-ruquad-qg](https://huggingface.co/lmqg/mt5-small-ruquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | lmqg/mt5-small-ruquad-qg | vocabtrimmer/mt5-small-ruquad-qg-trimmed | |:---------------------------|:---------------------------|:-------------------------------------------| | parameter_size_full | 300,165,504 | 195,364,224 | | parameter_size_embedding | 256,103,424 | 151,302,144 | | vocab_size | 250,101 | 147,756 | | compression_rate_full | 100.0 | 65.09 | | compression_rate_embedding | 100.0 | 59.08 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:| | ru | vocabtrimmer/mc4_validation | text | ru | validation | | 2 |
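Because the trimmed checkpoint keeps the interface of the original question-generation model, it can be called directly with Transformers. The sketch below assumes the repo id from the table above and the usual lmqg input convention of wrapping the answer span in `<hl>` tokens; if the upstream model uses a different prompt format, adjust the input accordingly.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id taken from the trimming summary above.
model_id = "vocabtrimmer/mt5-small-ruquad-qg-trimmed"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format: passage with the answer span highlighted.
text = "generate question: <hl> Пётр Первый <hl> основал Санкт-Петербург в 1703 году."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```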
Cathy/reranking_model
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
# Vocabulary Trimmed [lmqg/mt5-base-ruquad-qg](https://huggingface.co/lmqg/mt5-base-ruquad-qg): `vocabtrimmer/mt5-base-ruquad-qg-trimmed` This model is a trimmed version of [lmqg/mt5-base-ruquad-qg](https://huggingface.co/lmqg/mt5-base-ruquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | lmqg/mt5-base-ruquad-qg | vocabtrimmer/mt5-base-ruquad-qg-trimmed | |:---------------------------|:--------------------------|:------------------------------------------| | parameter_size_full | 582,384,384 | 425,182,464 | | parameter_size_embedding | 384,155,136 | 226,953,216 | | vocab_size | 250,101 | 147,756 | | compression_rate_full | 100.0 | 73.01 | | compression_rate_embedding | 100.0 | 59.08 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:| | ru | vocabtrimmer/mc4_validation | text | ru | validation | | 2 |
dccuchile/albert-base-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: Paperbag/ppo-PyramidsTraining 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
dccuchile/albert-base-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: greek-m2m100-4ep-384 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # greek-m2m100-4ep-384 This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8998 | 0.67 | 100 | 1.4131 | | 1.3718 | 1.35 | 200 | 1.3527 | | 1.2555 | 2.03 | 300 | 1.3175 | | 1.1075 | 2.7 | 400 | 1.3090 | | 1.0501 | 3.38 | 500 | 1.2998 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
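The card does not document the expected input/output format of this SQuAD fine-tune, so the snippet below is only a generic seq2seq sketch: the model path is a placeholder and the Greek input sentence is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder path: point this at the fine-tuned m2m100 checkpoint.
model_path = "path/to/greek-m2m100-4ep-384"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

inputs = tokenizer("Η Αθήνα είναι η πρωτεύουσα της Ελλάδας.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```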
dccuchile/albert-base-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ru-90000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ru-90000 | |:---------------------------|:------------------|:-----------------------------------------| | parameter_size_full | 582,401,280 | 336,470,784 | | parameter_size_embedding | 384,172,032 | 138,241,536 | | vocab_size | 250,112 | 90,001 | | compression_rate_full | 100.0 | 57.77 | | compression_rate_embedding | 100.0 | 35.98 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ru | vocabtrimmer/mc4_validation | text | ru | validation | 90000 | 2 |
dccuchile/albert-base-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: mit --- [CLIP ViT-H/14 frozen xlm roberta large - LAION-5B](https://huggingface.co/laion/CLIP-ViT-H-14-frozen-xlm-roberta-large-laion5B-s13B-b90k) model converted to HuggingFace Transformers via https://gist.github.com/calpt/8e3555bd11f1916b5169c8125117e5ee.
dccuchile/albert-large-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - samsum metrics: - rouge model-index: - name: flan-t5-base-samsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: samsum type: samsum config: samsum split: test args: samsum metrics: - name: Rouge1 type: rouge value: 47.2663 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-samsum This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.3716 - Rouge1: 47.2663 - Rouge2: 23.5327 - Rougel: 39.6491 - Rougelsum: 43.3169 - Gen Len: 17.3907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.4379 | 1.0 | 1842 | 1.3805 | 47.1438 | 23.6153 | 39.6699 | 43.5505 | 17.1197 | | 1.3559 | 2.0 | 3684 | 1.3716 | 47.2663 | 23.5327 | 39.6491 | 43.3169 | 17.3907 | | 1.2783 | 3.0 | 5526 | 1.3721 | 47.4896 | 23.7684 | 39.7733 | 43.4494 | 17.1832 | | 1.2378 | 4.0 | 7368 | 1.3757 | 47.9122 | 24.0531 | 40.2225 | 43.996 | 17.3053 | | 1.1983 | 5.0 | 9210 | 1.3751 | 47.8507 | 24.0061 | 40.231 | 43.8698 | 17.3040 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
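A minimal inference sketch for the fine-tuned summarizer is shown below; the model path is a placeholder (the card does not state a Hub repo id) and the dialogue is a made-up example in the SAMSum style.

```python
from transformers import pipeline

# Placeholder path: point this at the fine-tuned checkpoint from the card.
summarizer = pipeline("summarization", model="path/to/flan-t5-base-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue, max_length=50)[0]["summary_text"])
```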
dccuchile/albert-large-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
# Vocabulary Trimmed [google/mt5-small](https://huggingface.co/google/mt5-small): `vocabtrimmer/mt5-small-trimmed-ja` This model is a trimmed version of [google/mt5-small](https://huggingface.co/google/mt5-small) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-small | vocabtrimmer/mt5-small-trimmed-ja | |:---------------------------|:-------------------|:------------------------------------| | parameter_size_full | 300,176,768 | 172,986,752 | | parameter_size_embedding | 256,114,688 | 128,924,672 | | vocab_size | 250,112 | 125,903 | | compression_rate_full | 100.0 | 57.63 | | compression_rate_embedding | 100.0 | 50.34 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:| | ja | vocabtrimmer/mc4_validation | text | ja | validation | | 2 |
dccuchile/albert-large-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 14.28 +/- 4.48 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Jbot/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.8.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.8.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
dccuchile/albert-large-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dccuchile/albert-tiny-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: NielsV/poca-SoccerTwos-v2-83M 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
dccuchile/albert-tiny-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-de-120000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size. Following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-de-120000 | |:---------------------------|:------------------|:------------------------------------------| | parameter_size_full | 582,401,280 | 382,550,784 | | parameter_size_embedding | 384,172,032 | 184,321,536 | | vocab_size | 250,112 | 120,001 | | compression_rate_full | 100.0 | 65.69 | | compression_rate_embedding | 100.0 | 47.98 | Following table shows the parameter used to trim vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | de | vocabtrimmer/mc4_validation | text | de | validation | 120000 | 2 |
dccuchile/albert-xlarge-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1156.06 +/- 321.36 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dccuchile/albert-xlarge-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 language: - en tags: - jobs - skills --- A `SentenceTransformer` model fine-tuned on job and skill descriptions with the `Transformer-based Sequential Denoising AutoEncoder` (TSDAE) training method, which is mainly used for tasks where labelled data is scarce.
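A minimal usage sketch with the `sentence-transformers` API follows; the model path is a placeholder (the card does not state a repo id), and the job/skill texts are made-up examples.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder path: point this at the fine-tuned TSDAE checkpoint.
model = SentenceTransformer("path/to/tsdae-jobs-skills-model")

job = "We are hiring a data engineer to build and maintain ETL pipelines."
skill = "Experience with Apache Airflow and SQL-based data warehousing."

embeddings = model.encode([job, skill])
print(util.cos_sim(embeddings[0], embeddings[1]))
```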
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-fr-90000`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-fr-90000 |
|:---------------------------|:----------------|:---------------------------------------|
| parameter_size_full        | 582,401,280     | 336,470,784                            |
| parameter_size_embedding   | 384,172,032     | 138,241,536                            |
| vocab_size                 | 250,112         | 90,001                                 |
| compression_rate_full      | 100.0           | 57.77                                  |
| compression_rate_embedding | 100.0           | 35.98                                  |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| fr       | vocabtrimmer/mc4_validation | text           | fr           | validation    | 90000             | 2             |
dccuchile/albert-xlarge-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
# Vocabulary Trimmed [lmqg/mt5-small-frquad-qg](https://huggingface.co/lmqg/mt5-small-frquad-qg): `vocabtrimmer/mt5-small-frquad-qg-trimmed-fr`

This model is a trimmed version of [lmqg/mt5-small-frquad-qg](https://huggingface.co/lmqg/mt5-small-frquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | lmqg/mt5-small-frquad-qg | vocabtrimmer/mt5-small-frquad-qg-trimmed-fr |
|:---------------------------|:-------------------------|:--------------------------------------------|
| parameter_size_full        | 300,165,504              | 178,295,168                                 |
| parameter_size_embedding   | 256,103,424              | 134,233,088                                 |
| vocab_size                 | 250,101                  | 131,087                                     |
| compression_rate_full      | 100.0                    | 59.4                                        |
| compression_rate_embedding | 100.0                    | 52.41                                       |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| fr       | vocabtrimmer/mc4_validation | text           | fr           | validation    |                   | 2             |
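A minimal inference sketch for the trimmed question-generation checkpoint. The input format shown (a `generate question:` prefix with the answer span wrapped in `<hl>` tokens) is the convention used by the upstream lmqg models and is an assumption here, not something this card states — check the `lmqg/mt5-small-frquad-qg` card before relying on it:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "vocabtrimmer/mt5-small-frquad-qg-trimmed-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed lmqg-style input: highlight the answer span with <hl> tokens
text = "generate question: Paris est la capitale de la <hl> France <hl>."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```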
dccuchile/albert-xxlarge-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-version2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 74.20 +/- 42.22 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dccuchile/albert-xxlarge-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/reklambox2-64-32-xlm This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/reklambox2-64-32-xlm") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
dccuchile/albert-xxlarge-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 536.50 +/- 132.14 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga qxakshat -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga qxakshat -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga qxakshat ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
dccuchile/bert-base-spanish-wwm-cased-finetuned-mldoc
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-de-75000`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-de-75000 |
|:---------------------------|:----------------|:---------------------------------------|
| parameter_size_full        | 582,401,280     | 313,430,784                            |
| parameter_size_embedding   | 384,172,032     | 115,201,536                            |
| vocab_size                 | 250,112         | 75,001                                 |
| compression_rate_full      | 100.0           | 53.82                                  |
| compression_rate_embedding | 100.0           | 29.99                                  |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| de       | vocabtrimmer/mc4_validation | text           | de           | validation    | 75000             | 2             |
dccuchile/bert-base-spanish-wwm-cased-finetuned-ner
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
81
null
# Vocabulary Trimmed [lmqg/mt5-base-frquad-qg](https://huggingface.co/lmqg/mt5-base-frquad-qg): `vocabtrimmer/mt5-base-frquad-qg-trimmed`

This model is a trimmed version of [lmqg/mt5-base-frquad-qg](https://huggingface.co/lmqg/mt5-base-frquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | lmqg/mt5-base-frquad-qg | vocabtrimmer/mt5-base-frquad-qg-trimmed |
|:---------------------------|:------------------------|:----------------------------------------|
| parameter_size_full        | 582,384,384             | 399,578,880                             |
| parameter_size_embedding   | 384,155,136             | 201,349,632                             |
| vocab_size                 | 250,101                 | 131,087                                 |
| compression_rate_full      | 100.0                   | 68.61                                   |
| compression_rate_embedding | 100.0                   | 52.41                                   |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| fr       | vocabtrimmer/mc4_validation | text           | fr           | validation    |                   | 2             |
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2000 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ko-30000`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ko-30000 |
|:---------------------------|:----------------|:---------------------------------------|
| parameter_size_full        | 582,401,280     | 244,310,784                            |
| parameter_size_embedding   | 384,172,032     | 46,081,536                             |
| vocab_size                 | 250,112         | 30,001                                 |
| compression_rate_full      | 100.0           | 41.95                                  |
| compression_rate_embedding | 100.0           | 12.0                                   |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| ko       | vocabtrimmer/mc4_validation | text           | ko           | validation    | 30000             | 2             |
dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
39
null
# Vocabulary Trimmed [google/mt5-small](https://huggingface.co/google/mt5-small): `vocabtrimmer/mt5-small-trimmed-ko`

This model is a trimmed version of [google/mt5-small](https://huggingface.co/google/mt5-small) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-small | vocabtrimmer/mt5-small-trimmed-ko |
|:---------------------------|:-----------------|:----------------------------------|
| parameter_size_full        | 300,176,768      | 119,178,624                       |
| parameter_size_embedding   | 256,114,688      | 75,116,544                        |
| vocab_size                 | 250,112          | 73,356                            |
| compression_rate_full      | 100.0            | 39.7                              |
| compression_rate_embedding | 100.0            | 29.33                             |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| ko       | vocabtrimmer/mc4_validation | text           | ko           | validation    |                   | 2             |
dccuchile/bert-base-spanish-wwm-uncased-finetuned-ner
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopterV1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 114.80 +/- 36.91 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: Developer-Karthi/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
dccuchile/distilbert-base-spanish-uncased-finetuned-pawsx
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
# Vocabulary Trimmed [lmqg/mt5-small-esquad-qg](https://huggingface.co/lmqg/mt5-small-esquad-qg): `vocabtrimmer/mt5-small-esquad-qg-trimmed`

This model is a trimmed version of [lmqg/mt5-small-esquad-qg](https://huggingface.co/lmqg/mt5-small-esquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | lmqg/mt5-small-esquad-qg | vocabtrimmer/mt5-small-esquad-qg-trimmed |
|:---------------------------|:-------------------------|:-----------------------------------------|
| parameter_size_full        | 300,165,504              | 178,314,624                              |
| parameter_size_embedding   | 256,103,424              | 134,252,544                              |
| vocab_size                 | 250,101                  | 131,106                                  |
| compression_rate_full      | 100.0                    | 59.41                                    |
| compression_rate_embedding | 100.0                    | 52.42                                    |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| es       | vocabtrimmer/mc4_validation | text           | es           | validation    |                   | 2             |
dccuchile/distilbert-base-spanish-uncased-finetuned-pos
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-es-75000`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-es-75000 |
|:---------------------------|:----------------|:---------------------------------------|
| parameter_size_full        | 582,401,280     | 313,430,784                            |
| parameter_size_embedding   | 384,172,032     | 115,201,536                            |
| vocab_size                 | 250,112         | 75,001                                 |
| compression_rate_full      | 100.0           | 53.82                                  |
| compression_rate_embedding | 100.0           | 29.99                                  |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| es       | vocabtrimmer/mc4_validation | text           | es           | validation    | 75000             | 2             |
dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa
[ "pytorch", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ko`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ko |
|:---------------------------|:----------------|:---------------------------------|
| parameter_size_full        | 582,401,280     | 310,904,064                      |
| parameter_size_embedding   | 384,172,032     | 112,674,816                      |
| vocab_size                 | 250,112         | 73,356                           |
| compression_rate_full      | 100.0           | 53.38                            |
| compression_rate_embedding | 100.0           | 29.33                            |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| ko       | vocabtrimmer/mc4_validation | text           | ko           | validation    |                   | 2             |
dccuchile/distilbert-base-spanish-uncased
[ "pytorch", "distilbert", "fill-mask", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
670
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-es-90000`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-es-90000 |
|:---------------------------|:----------------|:---------------------------------------|
| parameter_size_full        | 582,401,280     | 336,470,784                            |
| parameter_size_embedding   | 384,172,032     | 138,241,536                            |
| vocab_size                 | 250,112         | 90,001                                 |
| compression_rate_full      | 100.0           | 57.77                                  |
| compression_rate_embedding | 100.0           | 35.98                                  |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| es       | vocabtrimmer/mc4_validation | text           | es           | validation    | 90000             | 2             |
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ko-45000`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ko-45000 |
|:---------------------------|:----------------|:---------------------------------------|
| parameter_size_full        | 582,401,280     | 267,350,784                            |
| parameter_size_embedding   | 384,172,032     | 69,121,536                             |
| vocab_size                 | 250,112         | 45,001                                 |
| compression_rate_full      | 100.0           | 45.9                                   |
| compression_rate_embedding | 100.0           | 17.99                                  |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| ko       | vocabtrimmer/mc4_validation | text           | ko           | validation    | 45000             | 2             |
CennetOguz/distilbert-base-uncased-finetuned-recipe
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - generated_from_trainer model-index: - name: LatinBERT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LatinBERT This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Certified-Zoomer/DialoGPT-small-rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.09 +/- 15.32 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
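Until the TODO above is filled in, here is a minimal loading sketch. The repo id and filename are placeholders (the card does not state them), and the classic Gym reset/step API is assumed:

```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id / filename -- replace with the actual values for this repository
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```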
Chaewon/mmnt_decoder_en
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
# Vocabulary Trimmed [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg): `vocabtrimmer/mt5-small-itquad-qg-trimmed`

This model is a trimmed version of [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | lmqg/mt5-small-itquad-qg | vocabtrimmer/mt5-small-itquad-qg-trimmed |
|:---------------------------|:-------------------------|:-----------------------------------------|
| parameter_size_full        | 300,165,504              | 157,784,448                              |
| parameter_size_embedding   | 256,103,424              | 113,722,368                              |
| vocab_size                 | 250,101                  | 111,057                                  |
| compression_rate_full      | 100.0                    | 52.57                                    |
| compression_rate_embedding | 100.0                    | 44.4                                     |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| it       | vocabtrimmer/mc4_validation | text           | it           | validation    |                   | 2             |
Chakita/gpt2_mwp
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ko-60000`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ko-60000 |
|:---------------------------|:----------------|:---------------------------------------|
| parameter_size_full        | 582,401,280     | 290,390,784                            |
| parameter_size_embedding   | 384,172,032     | 92,161,536                             |
| vocab_size                 | 250,112         | 60,001                                 |
| compression_rate_full      | 100.0           | 49.86                                  |
| compression_rate_embedding | 100.0           | 23.99                                  |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| ko       | vocabtrimmer/mc4_validation | text           | ko           | validation    | 60000             | 2             |
CharlieChen/feedback-bigbird
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ru`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ru |
|:---------------------------|:----------------|:---------------------------------|
| parameter_size_full        | 582,401,280     | 425,180,928                      |
| parameter_size_embedding   | 384,172,032     | 226,951,680                      |
| vocab_size                 | 250,112         | 147,755                          |
| compression_rate_full      | 100.0           | 73.0                             |
| compression_rate_embedding | 100.0           | 59.08                            |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| ru       | vocabtrimmer/mc4_validation | text           | ru           | validation    |                   | 2             |
Cheatham/xlm-roberta-base-finetuned
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: creativeml-openrail-m --- https://civitai.com/models/15699/keqing-or-genshin-impact-or-3in1-lora-and-locon
ChristopherA08/IndoELECTRA
[ "pytorch", "electra", "pretraining", "id", "dataset:oscar", "transformers" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy library_name: ml-agents --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Write your model_id: TalesLF/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Chuah/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1825.63 +/- 94.90 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
CodeDanCode/SP-KyleBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
2023-03-04T20:02:36Z
--- language: en thumbnail: http://www.huggingtweets.com/darthputinkgb/1677960243051/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/771597667403038720/Y57U3bvY_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Darth Putin</div> <div style="text-align: center; font-size: 14px;">@darthputinkgb</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Darth Putin. | Data | Darth Putin | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 401 | | Short tweets | 120 | | Tweets kept | 2725 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mvrqake9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @darthputinkgb's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wg9hh2ra) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wg9hh2ra/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/darthputinkgb') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
CoffeeAddict93/gpt2-call-of-the-wild
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
Not official! These are diffusers weights for https://civitai.com/models/8124/a-to-zovya-rpg-artists-tools-15-and-21, based on Stable Diffusion v1.5.
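Since these are diffusers weights for a Stable Diffusion v1.5 derivative, they should load with the standard `StableDiffusionPipeline`. The repo id below is a placeholder because the card does not name it:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- replace with the repository that hosts these weights
pipe = StableDiffusionPipeline.from_pretrained("<this-repo-id>", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate an image with a prompt suited to the RPG-art style of the checkpoint
image = pipe("a fantasy tavern interior, rpg concept art").images[0]
image.save("result.png")
```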
CoffeeAddict93/gpt2-modest-proposal
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-it`

This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.

|                            | google/mt5-base | vocabtrimmer/mt5-base-trimmed-it |
|:---------------------------|:----------------|:---------------------------------|
| parameter_size_full        | 582,401,280     | 368,811,264                      |
| parameter_size_embedding   | 384,172,032     | 170,582,016                      |
| vocab_size                 | 250,112         | 111,056                          |
| compression_rate_full      | 100.0           | 63.33                            |
| compression_rate_embedding | 100.0           | 44.4                             |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| it       | vocabtrimmer/mc4_validation | text           | it           | validation    |                   | 2             |
CrisLeaf/generador-de-historias-de-tolkien
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5417526808280421 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5338 - Matthews Correlation: 0.5418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5249 | 1.0 | 535 | 0.5300 | 0.4152 | | 0.3442 | 2.0 | 1070 | 0.5027 | 0.4868 | | 0.2261 | 3.0 | 1605 | 0.5338 | 0.5418 | | 0.1745 | 4.0 | 2140 | 0.7556 | 0.5360 | | 0.1258 | 5.0 | 2675 | 0.8514 | 0.5260 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.10.1 - Tokenizers 0.13.2
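For quick experimentation, a text-classification pipeline sketch is shown below. The repo id is a placeholder for wherever this checkpoint is hosted, and the label names depend on how `id2label` was saved (they may simply be `LABEL_0`/`LABEL_1` for unacceptable/acceptable):

```python
# Sketch: score sentences for grammatical acceptability (CoLA task).
# "your-username/distilbert-base-uncased-finetuned-cola" is a placeholder repo id.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book what I read was great."))
print(classifier("The book that I read was great."))
```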
Crystal/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer model-index: - name: abOCR results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # abOCR This model is a fine-tuned version of [microsoft/trocr-base-stage1](https://huggingface.co/microsoft/trocr-base-stage1) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
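A usage sketch for the fine-tuned TrOCR checkpoint follows. The repo id is a placeholder, and the processor is loaded from the base `microsoft/trocr-base-stage1` checkpoint in case the fine-tuned repo does not ship its own processor files:

```python
# Sketch: recognize text in a single line image with the fine-tuned TrOCR model.
# "your-username/abOCR" is a placeholder repo id.
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-stage1")
model = VisionEncoderDecoderModel.from_pretrained("your-username/abOCR")

image = Image.open("line_image.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```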
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-04T23:26:32Z
# Vocabulary Trimmed [google/mt5-base](https://huggingface.co/google/mt5-base): `vocabtrimmer/mt5-base-trimmed-ru-105000` This model is a trimmed version of [google/mt5-base](https://huggingface.co/google/mt5-base) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size. The following table shows a summary of the trimming process. | | google/mt5-base | vocabtrimmer/mt5-base-trimmed-ru-105000 | |:---------------------------|:------------------|:------------------------------------------| | parameter_size_full | 582,401,280 | 359,510,784 | | parameter_size_embedding | 384,172,032 | 161,281,536 | | vocab_size | 250,112 | 105,001 | | compression_rate_full | 100.0 | 61.73 | | compression_rate_embedding | 100.0 | 41.98 | The following table shows the parameters used to trim the vocabulary. | language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency | |:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:| | ru | vocabtrimmer/mc4_validation | text | ru | validation | 105000 | 2 |
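As with the other trimmed checkpoints, this model is not fine-tuned for any downstream task; it is meant as a smaller drop-in starting point for Russian-only fine-tuning. A minimal loading sketch, assuming the standard mT5 classes:

```python
# Sketch: the trimmed Russian checkpoint loads like google/mt5-base, just smaller;
# fine-tune it on Russian data as you would the full model.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("vocabtrimmer/mt5-base-trimmed-ru-105000")
model = MT5ForConditionalGeneration.from_pretrained("vocabtrimmer/mt5-base-trimmed-ru-105000")

# Roughly 0.36B parameters vs 0.58B for the full model (see the table above).
print(f"{model.num_parameters() / 1e9:.2f}B parameters, {len(tokenizer)} vocab entries")
```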
Culmenus/opus-mt-de-is-finetuned-de-to-is_ancc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-03-04T23:32:36Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: dumbassRepoNameLocal results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="robkayinto/dumbassRepoNameLocal", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
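The `load_from_hub` helper in the snippet above comes from the Deep RL course notebook and is not part of any library. Outside that notebook, a minimal stand-in (assuming the pickle stores a dict with keys such as `env_id` and `qtable`, as the course code produces) could look like this:

```python
# Sketch of a stand-in for the course's load_from_hub helper.
# Assumes the pickled object is a dict with keys like "env_id" and "qtable".
import pickle

import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="robkayinto/dumbassRepoNameLocal", filename="q-learning.pkl")
env = gym.make(model["env_id"])
print(model["qtable"].shape)
```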
Cyrell/Cyrell
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 33.30 +/- 41.81 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
DSI/human-directed-sentiment
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: openrail++ tags: - stable-diffusion - text-to-image inference: false --- # Stable Diffusion x4 upscaler model card This model card focuses on the model associated with the Stable Diffusion Upscaler, available [here](https://github.com/Stability-AI/stablediffusion). This model is trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). ![Image](https://github.com/Stability-AI/stablediffusion/raw/main/assets/stable-samples/upscaling/merged-dog.png) - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `x4-upscaler-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.ckpt). - Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. 
```bash pip install diffusers transformers accelerate scipy safetensors ``` ```python import requests from PIL import Image from io import BytesIO from diffusers import StableDiffusionUpscalePipeline import torch # load model and scheduler model_id = "stabilityai/stable-diffusion-x4-upscaler" pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipeline = pipeline.to("cuda") # let's download an image url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" response = requests.get(url) low_res_img = Image.open(BytesIO(response.content)).convert("RGB") low_res_img = low_res_img.resize((128, 128)) prompt = "a white cat" upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] upscaled_image.save("upsampled_cat.png") ``` **Notes**: - Despite not being a dependency, we highly recommend you install [xformers](https://github.com/facebookresearch/xformers) for memory efficient attention (better performance) - If you have low GPU RAM available, make sure to add `pipeline.enable_attention_slicing()` after sending it to `cuda` for less VRAM usage (at the cost of speed) # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v2 was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic. **Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints: - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. - `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized.
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama). - `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints: ![pareto](model-variants.jpg) Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 200000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq. ## Citation @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
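The quickstart example above uses the pipeline's default noise level. Since this card repeatedly highlights `noise_level` as an input, a hedged variant is sketched below; it assumes the `noise_level` keyword argument of `StableDiffusionUpscalePipeline.__call__` (default 20 in recent diffusers releases), so check your installed version if the call signature differs.

```python
# Sketch: pass an explicit noise_level; higher values add more noise to the
# low-resolution input before upscaling, trading input fidelity for robustness
# to artifacts in the low-resolution image.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res_img = Image.open("low_res_cat.png").convert("RGB").resize((128, 128))
upscaled = pipeline(prompt="a white cat", image=low_res_img, noise_level=40).images[0]
upscaled.save("upsampled_cat_noisy.png")
```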
alexandrainst/da-hatespeech-detection-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,719
null
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ### How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DaisyMak/bert-finetuned-squad-accelerate-10epoch_transformerfrozen
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,907
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: AdonaiHS/SnowballTarget1 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
DarkWolf/kn-electra-small
[ "pytorch", "electra", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-03-05T03:47:20Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: C3_A8b_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # C3_A8b_2 This model is a fine-tuned version of [Sjdan/C3_2_1](https://huggingface.co/Sjdan/C3_2_1) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Davlan/bert-base-multilingual-cased-finetuned-amharic
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
109
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1-Reinforce results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 478.90 +/- 48.44 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction