Column schema (observed ranges across the dataset):

| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-30 06:28:04 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (536 distinct values) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 distinct values) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-30 06:28:00 |
| card | string (length) | 11 | 1.01M |
sam030/bert-fine-tuned-cola
sam030
2023-09-23T06:16:49Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T06:16:35Z
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-cola
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-fine-tuned-cola

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4359
- Validation Loss: 0.4255
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4359     | 0.4255          | 0     |

### Framework versions

- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.11.0
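The card omits a usage example; a minimal sketch, assuming the standard `transformers` pipeline API and the TensorFlow weights implied by the repo's `tf` tag (the probe sentence below is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned CoLA classifier; the repo's "tf" tag suggests
# TensorFlow weights, so we request the TF framework explicitly.
clf = pipeline("text-classification",
               model="sam030/bert-fine-tuned-cola",
               framework="tf")

# CoLA is a grammatical-acceptability task, so a plausible probe:
print(clf("The book was read by the student."))
```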
tomdeore/nonymus-llm
tomdeore
2023-09-23T06:00:09Z
5
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "base_model:abhishek/llama-2-7b-hf-small-shards", "base_model:finetune:abhishek/llama-2-7b-hf-small-shards", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-23T03:27:27Z
---
base_model: abhishek/llama-2-7b-hf-small-shards
tags:
- generated_from_trainer
model-index:
- name: nonymus-llm
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# nonymus-llm

This model is a fine-tuned version of [abhishek/llama-2-7b-hf-small-shards](https://huggingface.co/abhishek/llama-2-7b-hf-small-shards) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1
- Tokenizers 0.13.3
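No usage snippet accompanies the card; a minimal sketch, assuming the standard `transformers` causal-LM API (`device_map="auto"` needs `accelerate`; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tomdeore/nonymus-llm"  # Llama-2-7B fine-tune per the card
tokenizer = AutoTokenizer.from_pretrained(repo)
# Drop device_map="auto" to load on CPU without accelerate.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```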
Sabari206/my_awesome_qa_model
Sabari206
2023-09-23T05:17:15Z
61
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-23T05:04:34Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Sabari206/my_awesome_qa_model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Sabari206/my_awesome_qa_model

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5484
- Validation Loss: 1.6683
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3795     | 2.0411          | 0     |
| 1.7773     | 1.6683          | 1     |
| 1.5484     | 1.6683          | 2     |

### Framework versions

- Transformers 4.33.2
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
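The card stops short of a usage example; a minimal sketch, assuming the standard `transformers` question-answering pipeline and the TF weights implied by the repo's `tf` tag (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="Sabari206/my_awesome_qa_model",
              framework="tf")

# Extractive QA: the answer span is pulled from the context string.
print(qa(question="What is the capital of France?",
         context="Paris is the capital and most populous city of France."))
```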
csocsci/mt5-base-binary-en-iiia-02c
csocsci
2023-09-23T05:12:11Z
113
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "multilingual", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-22T13:33:07Z
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: mit
language:
- multilingual
---

# Model Card for mt5-base-binary-en-iiia-02c

<!-- Provide a quick summary of what the model is/does. -->

This model is fine-tuned for binary text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents.

## Model Description

The model was fine-tuned on a dataset of English Instant Messenger dialogs of Adolescents. The classification is binary: the model outputs 'positive' or 'negative' depending on whether Supportive Interactions are present. The input is a target utterance together with its bi-directional context; the target label is that of the target utterance.

- **Developed by:** Anonymous
- **Language(s):** multilingual
- **Finetuned from:** mt5-base

## Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!

## Usage

Here is how to use this model to classify a context-window of a dialogue:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

# Target utterance
test_texts = ['Utterance2']
# Bi-directional context of the target utterance
test_text_pairs = ['Utterance1;Utterance2;Utterance3']

# Load the model and tokenizer
checkpoint_path = "chi2024/mt5-base-binary-en-iiia-02c"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\
    .to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)

# Define helper functions
def verbalize_input(text: str, text_pair: str) -> str:
    return "Utterance: %s\nContext: %s" % (text, text_pair)

def predict_one(text: str, pair: str) -> str:
    input_pair = verbalize_input(text, pair)
    inputs = tokenizer(input_pair, return_tensors="pt", padding=True,
                       truncation=True, max_length=256).to(model.device)
    outputs = model.generate(**inputs)
    decoded = [seq.strip() for seq in
               tokenizer.batch_decode(outputs, skip_special_tokens=True)]
    return decoded[0]  # a single input yields a single decoded label

# Run the prediction
preds_txt = [predict_one(t, p) for t, p in zip(test_texts, test_text_pairs)]
preds_lbl = [1 if x == 'positive' else 0 for x in preds_txt]
print(preds_lbl)
```
csocsci/mt5-base-binary-cs-iiia
csocsci
2023-09-23T05:02:47Z
113
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "cs", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-22T13:14:21Z
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: mit
language:
- cs
---

# Model Card for mt5-base-binary-cs-iiia

<!-- Provide a quick summary of what the model is/does. -->

This model is fine-tuned for binary text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech.

## Model Description

The model was fine-tuned on a dataset of Czech Instant Messenger dialogs of Adolescents. The classification is binary: the model outputs 'positive' or 'negative' depending on whether Supportive Interactions are present. The input is a target utterance together with its bi-directional context; the target label is that of the target utterance.

- **Developed by:** Anonymous
- **Language(s):** cs
- **Finetuned from:** mt5-base

## Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/chi2024submission
- **Paper:** Stay tuned!

## Usage

Here is how to use this model to classify a context-window of a dialogue:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

# Target utterance
test_texts = ['Utterance2']
# Bi-directional context of the target utterance
test_text_pairs = ['Utterance1;Utterance2;Utterance3']

# Load the model and tokenizer
checkpoint_path = "chi2024/mt5-base-binary-cs-iiia"
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint_path)\
    .to("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)

# Define helper functions
def verbalize_input(text: str, text_pair: str) -> str:
    return "Utterance: %s\nContext: %s" % (text, text_pair)

def predict_one(text: str, pair: str) -> str:
    input_pair = verbalize_input(text, pair)
    inputs = tokenizer(input_pair, return_tensors="pt", padding=True,
                       truncation=True, max_length=256).to(model.device)
    outputs = model.generate(**inputs)
    decoded = [seq.strip() for seq in
               tokenizer.batch_decode(outputs, skip_special_tokens=True)]
    return decoded[0]  # a single input yields a single decoded label

# Run the prediction
preds_txt = [predict_one(t, p) for t, p in zip(test_texts, test_text_pairs)]
preds_lbl = [1 if x == 'positive' else 0 for x in preds_txt]
print(preds_lbl)
```
ranajithore/stable-diffusion-v2-1-especially-trained-for-plant-cell-diagram
ranajithore
2023-09-23T04:57:45Z
39
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-23T04:53:08Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### Stable-Diffusion-V2.1-especially-trained-for-plant-cell-diagram Dreambooth model trained by ranajithore with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
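The card gives no loading code; a minimal sketch, assuming the standard `diffusers` API (the prompt is illustrative, since the card does not state the training prompt):

```python
import torch
from diffusers import StableDiffusionPipeline

repo = "ranajithore/stable-diffusion-v2-1-especially-trained-for-plant-cell-diagram"
pipe = StableDiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Illustrative prompt; the DreamBooth instance prompt is not documented.
image = pipe("a labeled diagram of a plant cell").images[0]
image.save("plant_cell.png")
```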
Mahmoud22/TheBlokeRL-llama-7B-chat-gptq
Mahmoud22
2023-09-23T04:42:26Z
0
0
peft
[ "peft", "llama", "4-bit", "gptq", "region:us" ]
null
2023-09-23T04:33:53Z
---
library_name: peft
---

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.01
- desc_act: False
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- disable_exllama: False

### Framework versions

- PEFT 0.5.0
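The card records only the quantization config, not how to load the adapter. A heavily hedged sketch, assuming this repo holds a PEFT (LoRA) adapter trained over a GPTQ-quantized Llama-2-7B-chat base; the base model id below is inferred from the repo name, not stated in the card, and GPTQ loading additionally requires `optimum` and `auto-gptq`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base: the card never names it; the repo name hints at a
# TheBloke-style GPTQ build of Llama-2-7B-chat.
base_id = "TheBloke/Llama-2-7B-Chat-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter weights stored in this repo on top of the quantized base.
model = PeftModel.from_pretrained(base, "Mahmoud22/TheBlokeRL-llama-7B-chat-gptq")
```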
LumosD/grass
LumosD
2023-09-23T04:37:49Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-09-23T04:23:47Z
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - LumosD/grass

These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

LoRA for the text encoder was enabled: False.
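A loading sketch, assuming a `diffusers` version that provides `load_lora_weights` (the card itself only shows sample images):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA adaptation weights from this repo on top of the base model.
pipe.load_lora_weights("LumosD/grass")

# The instance prompt from the card's metadata.
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
```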
890mari/practicaNLP
890mari
2023-09-23T04:24:28Z
106
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-23T03:06:57Z
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: practicaNLP
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: mrpc
      split: validation
      args: mrpc
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8455882352941176
    - name: F1
      type: f1
      value: 0.8930390492359932
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# practicaNLP

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6320
- Accuracy: 0.8456
- F1: 0.8930

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5255        | 1.09  | 500  | 0.4371          | 0.8431   | 0.8885 |
| 0.3452        | 2.18  | 1000 | 0.6320          | 0.8456   | 0.8930 |

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
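The card lacks a usage snippet; a minimal sketch, assuming the `transformers` pipeline API. MRPC is a sentence-pair paraphrase task, so both sentences are passed together (the example pair is illustrative):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="890mari/practicaNLP")

# Sentence-pair input for the MRPC paraphrase task.
print(clf({"text": "The company posted record profits.",
           "text_pair": "Profits at the company hit an all-time high."}))
```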
hemanth11/opus-mt-en-fr
hemanth11
2023-09-23T04:19:27Z
108
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:hind_encorp", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-09-22T13:57:43Z
---
tags:
- translation
- generated_from_trainer
datasets:
- hind_encorp
model-index:
- name: opus-mt-en-fr
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# opus-mt-en-fr

This model was trained from scratch on the hind_encorp dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
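A loading sketch, assuming the `transformers` translation pipeline. Note the mismatch worth verifying: the repo is named opus-mt-en-fr, but the card says it was trained on hind_encorp (an English-Hindi corpus), so the actual output language is uncertain:

```python
from transformers import pipeline

# Marian-architecture checkpoint per the repo's tags.
translator = pipeline("translation", model="hemanth11/opus-mt-en-fr")
print(translator("How are you today?"))
```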
OpenDILabCommunity/LunarLander-v2-SAC
OpenDILabCommunity
2023-09-23T03:50:28Z
0
0
pytorch
[ "pytorch", "deep-reinforcement-learning", "reinforcement-learning", "DI-engine", "LunarLander-v2", "en", "license:apache-2.0", "region:us" ]
reinforcement-learning
2023-04-14T05:02:58Z
---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- LunarLander-v2
benchmark_name: OpenAI/Gym/Box2d
task_name: LunarLander-v2
pipeline_tag: reinforcement-learning
model-index:
- name: SAC
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: OpenAI/Gym/Box2d-LunarLander-v2
      type: OpenAI/Gym/Box2d-LunarLander-v2
    metrics:
    - type: mean_reward
      value: 277.21 +/- 17.51
      name: mean_reward
---

# Play **LunarLander-v2** with **SAC** Policy

## Model Description

<!-- Provide a longer summary of what this model is. -->

This is a simple **SAC** implementation for OpenAI/Gym/Box2d **LunarLander-v2** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).

**DI-engine** is a Python library for solving general decision-intelligence problems. It is based on reinforcement learning framework implementations using PyTorch or JAX, and it aims to standardize the reinforcement learning framework across different algorithms, benchmarks, and environments, supporting both academic research and prototype applications. In addition, self-customized training pipelines and applications are supported by reusing different abstraction levels of the DI-engine reinforcement learning framework.

## Model Usage

### Install the Dependencies

<details close>
<summary>(Click for Details)</summary>

```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env]
```
</details>

### Git Clone from Huggingface and Run the Model

<details close>
<summary>(Click for Details)</summary>

```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import SACAgent
from ding.config import Config
from easydict import EasyDict
import torch

# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = SACAgent(
    env_id="LunarLanderContinuous-v2",
    exp_name="lunarlander-sac",
    cfg=cfg.exp_config,
    policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>

### Run Model by Using Huggingface_ding

<details close>
<summary>(Click for Details)</summary>

```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import SACAgent
from huggingface_ding import pull_model_from_hub

# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/LunarLander-v2-SAC")
# Instantiate the agent
agent = SACAgent(
    env_id="LunarLanderContinuous-v2",
    exp_name="LunarLander-v2-SAC",
    cfg=cfg.exp_config,
    policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>

## Model Training

### Train the Model and Push to Huggingface_hub

<details close>
<summary>(Click for Details)</summary>

```shell
# Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import SACAgent
from huggingface_ding import push_model_to_hub

# Instantiate the agent
agent = SACAgent(env_id="LunarLanderContinuous-v2", exp_name="LunarLander-v2-SAC")
# Train the agent
return_ = agent.train(step=int(4000000), collector_env_num=8, evaluator_env_num=8)
# Push model to huggingface hub
push_model_to_hub(
    agent=agent.best,
    env_name="OpenAI/Gym/Box2d",
    task_name="LunarLander-v2",
    algo_name="SAC",
    wandb_url=return_.wandb_url,
    github_repo_url="https://github.com/opendilab/DI-engine",
    github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html",
    github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html",
    installation_guide="pip3 install DI-engine[common_env]",
    usage_file_by_git_clone="./sac/lunarlander_sac_deploy.py",
    usage_file_by_huggingface_ding="./sac/lunarlander_sac_download.py",
    train_file="./sac/lunarlander_sac.py",
    repo_id="OpenDILabCommunity/LunarLander-v2-SAC",
    create_repo=False
)
```
</details>

**Configuration**
<details close>
<summary>(Click for Details)</summary>

```python
exp_config = {
    'env': {
        'manager': {
            'episode_num': float("inf"),
            'max_retry': 1,
            'retry_type': 'reset',
            'auto_reset': True,
            'step_timeout': None,
            'reset_timeout': None,
            'retry_waiting_time': 0.1,
            'cfg_type': 'BaseEnvManagerDict'
        },
        'stop_value': 260,
        'n_evaluator_episode': 8,
        'env_id': 'LunarLanderContinuous-v2',
        'collector_env_num': 4,
        'evaluator_env_num': 8,
        'act_scale': True
    },
    'policy': {
        'model': {
            'twin_critic': True,
            'action_space': 'reparameterization',
            'obs_shape': 8,
            'action_shape': 2
        },
        'learn': {
            'learner': {
                'train_iterations': 1000000000,
                'dataloader': {'num_workers': 0},
                'log_policy': True,
                'hook': {
                    'load_ckpt_before_run': '',
                    'log_show_after_iter': 100,
                    'save_ckpt_after_iter': 10000,
                    'save_ckpt_after_run': True
                },
                'cfg_type': 'BaseLearnerDict'
            },
            'update_per_collect': 256,
            'batch_size': 128,
            'learning_rate_q': 0.001,
            'learning_rate_policy': 0.0003,
            'learning_rate_alpha': 0.0003,
            'target_theta': 0.005,
            'discount_factor': 0.99,
            'alpha': 0.2,
            'auto_alpha': True,
            'log_space': True,
            'target_entropy': None,
            'ignore_done': False,
            'init_w': 0.003
        },
        'collect': {
            'collector': {},
            'n_sample': 256,
            'unroll_len': 1,
            'collector_logit': False
        },
        'eval': {
            'evaluator': {
                'eval_freq': 1000,
                'render': {'render_freq': -1, 'mode': 'train_iter'},
                'figure_path': None,
                'cfg_type': 'InteractionSerialEvaluatorDict',
                'stop_value': 260,
                'n_episode': 8
            }
        },
        'other': {'replay_buffer': {'replay_buffer_size': 100000}},
        'on_policy': False,
        'cuda': True,
        'multi_gpu': False,
        'bp_update_sync': True,
        'traj_len_inf': False,
        'type': 'sac',
        'priority': False,
        'priority_IS_weight': False,
        'random_collect_size': 10000,
        'transition_with_policy_data': True,
        'multi_agent': False,
        'cfg_type': 'SACPolicyDict'
    },
    'exp_name': 'LunarLander-v2-SAC',
    'seed': 0,
    'wandb_logger': {
        'gradient_logger': True,
        'video_logger': True,
        'plot_logger': True,
        'action_logger': True,
        'return_logger': False
    }
}
```
</details>

**Training Procedure**

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/LunarLander-v2-SAC)

## Model Information

<!-- Provide the basic links for the model. -->

- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-SAC/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-SAC/blob/main/replay.mp4)

<!-- Provide the size information for the model. -->

- **Parameters total size:** 116.05 KB
- **Last Update Date:** 2023-09-23

## Environments

<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->

- **Benchmark:** OpenAI/Gym/Box2d
- **Task:** LunarLander-v2
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.9
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html)
jingwora/inswapper_128
jingwora
2023-09-23T03:25:28Z
0
0
null
[ "onnx", "region:us" ]
null
2023-09-23T03:20:10Z
CLONED FROM - https://huggingface.co/deepinsight/inswapper

GITHUB - https://github.com/deepinsight
jmbilbao25/falcon-7b-instruct-sharded-finetuned
jmbilbao25
2023-09-23T02:33:12Z
39
1
transformers
[ "transformers", "pytorch", "falcon", "text-generation", "custom_code", "en", "dataset:daily_dialog", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-21T09:30:01Z
---
license: apache-2.0
datasets:
- daily_dialog
language:
- en
---
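The card carries only metadata; a minimal sketch, assuming the `transformers` pipeline API. Falcon checkpoints ship custom modeling code, hence `trust_remote_code=True`, and `device_map="auto"` needs `accelerate`:

```python
from transformers import pipeline

gen = pipeline(
    "text-generation",
    model="jmbilbao25/falcon-7b-instruct-sharded-finetuned",
    trust_remote_code=True,  # Falcon repos include custom modeling code
    device_map="auto",
)
# A dialogue-style prompt, given the daily_dialog fine-tuning data.
print(gen("Hi! How was your day?", max_new_tokens=40)[0]["generated_text"])
```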
goldenrooster/multilingual-e5-large
goldenrooster
2023-09-23T02:27:29Z
12
0
sentence-transformers
[ "sentence-transformers", "pytorch", "onnx", "safetensors", "xlm-roberta", "mteb", "Sentence Transformers", "sentence-similarity", "feature-extraction", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:2212.03533", "arxiv:2108.08787", "arxiv:2104.08663", "arxiv:2210.07316", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2023-09-23T02:26:33Z
---
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
model-index:
- name: multilingual-e5-large
  results: as listed below
---

MTEB results from the card's model-index (split: test, except MSMARCO: dev; config: default unless a language or pair code is given; retrieval metrics are reported at cutoffs ordered 1, 10, 100, 1000, 3, 5):

- **MTEB AmazonCounterfactualClassification** (Classification; dataset `mteb/amazon_counterfactual`; revision e8379541af4e31359cca9fbcf4b00f2671dba205); accuracy / ap / f1:
  - en: 79.05970149253731 / 43.486574390835635 / 73.32700092140148
  - de: 71.22055674518201 / 81.55756710830498 / 69.28271787752661
  - en-ext: 80.41979010494754 / 29.34879922376344 / 67.62475449011278
  - ja: 77.8372591006424 / 26.557560591210738 / 64.96619417368707
- **MTEB AmazonPolarityClassification** (Classification; dataset `mteb/amazon_polarity`; revision e2d317d38cd51312af73b3d32a06d1a08b442046): accuracy 93.489875, ap 90.98758636917603, f1 93.48554819717332
- **MTEB AmazonReviewsClassification** (Classification; dataset `mteb/amazon_reviews_multi`; revision 1399c76144fd37290681b995c656ef9b2e06e26d); accuracy / f1:
  - en: 47.564 / 46.75122173518047
  - de: 45.400000000000006 / 44.17195682400632
  - es: 43.068 / 42.38155696855596
  - fr: 41.89 / 40.84407321682663
  - ja: 40.120000000000005 / 39.522976223819114
  - zh: 38.832 / 38.0392533394713
- **MTEB ArguAna** (Retrieval; dataset `arguana`; revision None):
  - map: 30.725, 46.055, 46.900999999999996, 46.911, 41.548, 44.297
  - mrr: 31.152, 46.231, 47.07, 47.08, 41.738, 44.468999999999994
  - ndcg: 30.725, 54.379999999999995, 58.138, 58.389, 45.156, 50.123
  - precision: 30.725, 8.087, 0.9769999999999999, 0.1, 18.54, 13.542000000000002
  - recall: 30.725, 80.868, 97.653, 99.57300000000001, 55.619, 67.71000000000001
- **MTEB ArxivClusteringP2P** (Clustering; dataset `mteb/arxiv-clustering-p2p`; revision a122ad7f3f0291bf49cc6f4d32aa80929df69d5d): v_measure 44.30960650674069
- **MTEB ArxivClusteringS2S** (Clustering; dataset `mteb/arxiv-clustering-s2s`; revision f910caf1a6075f7329cdf8c1a6135696f37dbd53): v_measure 38.427074197498996
- **MTEB AskUbuntuDupQuestions** (Reranking; dataset `mteb/askubuntudupquestions-reranking`; revision 2000358ca161889fa9c082cb41daa8dcfb161a54): map 60.28270056031872, mrr 74.38332673789738
- **MTEB BIOSSES** (STS; dataset `mteb/biosses-sts`; revision d3fb88f8f02e40887cd149695127462bbcf29b4a): cos_sim_pearson 84.05942144105269, cos_sim_spearman 82.51212105850809, euclidean_pearson 81.95639829909122, euclidean_spearman 82.3717564144213, manhattan_pearson 81.79273425468256, manhattan_spearman 82.20066817871039
- **MTEB BUCC** (BitextMining; dataset `mteb/bucc-bitext-mining`; revision d51519689f32196a32af33b075a01d0e7c51e252); accuracy / f1 / precision / recall:
  - de-en: 99.46764091858039 / 99.37717466945023 / 99.33194154488518 / 99.46764091858039
  - fr-en: 98.29407880255337 / 98.11248073959938 / 98.02443319392472 / 98.29407880255337
  - ru-en: 97.79009352268791 / 97.5176076665512 / 97.38136473848286 / 97.79009352268791
  - zh-en: 99.26276987888363 / 99.20133403545726 / 99.17500438827453 / 99.26276987888363
- **MTEB Banking77Classification** (Classification; dataset `mteb/banking77`; revision 0fd18e25b25c072e09e0d92ab615fda904d66300): accuracy 84.72727272727273, f1 84.67672206031433
- **MTEB BiorxivClusteringP2P** (Clustering; dataset `mteb/biorxiv-clustering-p2p`; revision 65b79d1d13f80053f67aca9498d9402c2d9f1f40): v_measure 35.34220182511161
- **MTEB BiorxivClusteringS2S** (Clustering; dataset `mteb/biorxiv-clustering-s2s`; revision 258694dd0231531bc1fd9de6ceb52a0853c6d908): v_measure 33.4987096128766
- **MTEB CQADupstackRetrieval** (Retrieval; dataset `BeIR/cqadupstack`; revision None):
  - map: 25.558249999999997, 34.44425000000001, 35.59833333333333, 35.706916666666665, 31.691749999999995, 33.252916666666664
  - mrr: 30.252666666666666, 38.60675, 39.42666666666666, 39.48408333333334, 36.17441666666665, 37.56275
  - ndcg: 30.252666666666666, 39.683, 44.68541666666667, 46.94316666666668, 34.961749999999995, 37.215666666666664
  - precision: 30.252666666666666, 6.904166666666667, 1.0989999999999995, 0.14733333333333334, 16.037666666666667, 11.413583333333333
  - recall: 25.558249999999997, 51.13341666666666, 73.08366666666667, 88.79483333333334, 37.989083333333326, 43.787833333333325
- **MTEB ClimateFEVER** (Retrieval; dataset `climate-fever`; revision None):
  - map: 10.338, 18.360000000000003, 19.942, 20.134, 15.174000000000001, 16.830000000000002
  - mrr: 23.257, 33.768, 34.707, 34.766000000000005, 30.977, 32.528
  - ndcg: 23.257, 25.733, 32.288, 35.992000000000004, 20.866, 22.612
  - precision: 23.257, 8.124, 1.518, 0.219, 15.679000000000002, 12.117
  - recall: 10.338, 31.154, 54.161, 75.21900000000001, 19.427, 24.214
- **MTEB DBPedia** (Retrieval; dataset `dbpedia-entity`; revision None):
  - map: 8.498, 19.103, 27.375, 28.981, 13.764999999999999, 15.950000000000001
  - mrr: 65.5, 74.53800000000001, 74.71799999999999, 74.725, 72.792, 73.554
  - ndcg: 53.37499999999999, 41.286, 45.972, 53.123, 46.172999999999995, 43.033
  - precision: 65.5, 32.725, 10.683, 1.978, 50, 41.349999999999994
  - recall: 8.498, 25.070999999999998, 52.383, 74.91499999999999, 15.207999999999998, 18.563
- **MTEB EmotionClassification** (Classification; dataset `mteb/emotion`; revision 4f58c6b202a23cf9a4da393831edf4f9183cad37): accuracy 46.5, f1 41.93833713984145
- **MTEB FEVER** (Retrieval; dataset `fever`; revision None):
  - map: 67.914, 78.10000000000001, 78.333, 78.346, 76.626, 77.627
  - mrr: 72.74199999999999, 82.414, 82.511, 82.513, 81.231, 82.065
  - ndcg: 72.74199999999999, 82.806, 83.677, 83.917, 80.305, 81.843
  - precision: 72.74199999999999, 10.24, 1.089, 0.11299999999999999, 31.268, 19.706000000000003
  - recall: 67.914, 92.889, 96.42699999999999, 97.92, 86.21, 90.036
- **MTEB FiQA2018** (Retrieval; dataset `fiqa`; revision None):
  - map: 22.166, 35.57, 37.405, 37.564, 30.379, 33.324
  - mrr: 43.519000000000005, 51.556000000000004, 52.344, 52.373999999999995, 48.868, 50.319
  - ndcg: 43.519000000000005, 43.803, 50.468999999999994, 53.111, 38.893, 40.653
  - precision: 43.519000000000005, 12.253, 1.931, 0.242, 25.617, 19.383
  - recall: 22.166, 51.6, 76.574, 92.192, 34.477999999999994, 41.835
- **MTEB HotpotQA** (Retrieval; dataset `hotpotqa`; revision None):
  - map: 39.041, 62.961999999999996, 63.79899999999999, 63.854, 59.399, 61.669
  - mrr: 78.082, 84.321, 84.49600000000001, 84.502, 83.421, 83.977
  - ndcg: 78.082, 71.229, 74.10900000000001, 75.169, 66.28699999999999, 69.084
  - precision: 78.082, 14.993, 1.7239999999999998, 0.186, 42.737, 27.843
  - recall: 39.041, 74.96300000000001, 86.199, 93.228, 64.105, 69.608
- **MTEB ImdbClassification** (Classification; dataset `mteb/imdb`; revision 3d86128a09e091d6018b6d26cad27f2739fc2db7): accuracy 90.23160000000001, ap 85.5674856808308, f1 90.18033354786317
- **MTEB MSMARCO** (Retrieval; dataset `msmarco`; split dev; revision None):
  - map: 24.091, 36.753, 37.913000000000004, 37.958999999999996, 32.818999999999996, 35.171
  - mrr: 24.742, 37.285000000000004, 38.391999999999996, 38.431, 33.440999999999995, 35.75
  - ndcg: 24.742, 43.698, 49.145, 50.23800000000001, 35.769, 39.961999999999996
  - precision: 24.742, 6.7989999999999995, 0.95, 0.104, 15.096000000000002, 11.183
  - recall: 24.091, 65.068, 89.899, 98.16, 43.68, 53.754999999999995
- **MTEB MTOPDomainClassification** (Classification; dataset `mteb/mtop_domain`; revision d80d48c1eb48d3562165c59d59d0034df9fff0bf); accuracy / f1:
  - en: 93.66621067031465 / 93.49622853272142
  - de: 91.94702733164272 / 91.17043441745282
  - es: 92.20146764509674 / 91.98359080555608
  - fr: 88.99780770435328 / 89.19746342724068
  - hi: 89.78486912871998 / 89.24578823628642
  - th: 88.74502712477394 / 89.00297573881542
- **MTEB MTOPIntentClassification** (Classification; dataset `mteb/mtop_intent`; revision ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba); accuracy / f1:
  - en: 77.9046967624259 / 59.36787125785957
  - de: 74.5280360664976 / 57.17723440888718
  - es: 75.44029352901934 / 54.052855531072964
  - fr: 70.5606013153774 / 52.62215934386531
  - hi: 73.11581211903908 / 52.341291845645465
  - th: 74.28933092224233 / 57.07918745504911
- **MTEB MassiveIntentClassification** (Classification; dataset `mteb/amazon_massive_intent`; revision 31efe3c427b0bae9c22cbb560b8f15491cc6bed7); accuracy / f1:
  - af: 62.38063214525892 / 59.46463723443009
  - am: 56.06926698049766 / 52.49084283283562
  - ar: 60.74983187626093 / 56.960640620165904
  - az: 64.86550100874243 / 62.47370548140688
  - bn: 63.971082716879636 / 61.03812421957381
  - cy: 54.98318762609282 / 51.51207916008392
  - da: 69.45527908540686 / 66.16631905400318
  - de: 69.32750504371216 / 66.16755288646591
  - el: 69.09213180901143 / 66.95654394661507
  - en: 73.75588433086752 / 71.79973779656923
  - es: 70.49428379287154 / 68.37494379215734
  - fa: 69.90921318090115 / 66.79517376481645
  - fi: 70.12104909213181 / 67.29448842879584
  - fr: 69.34095494283793 / 67.01134288992947
  - he: 67.61264290517822 / 64.68730512660757
  - hi: 67.79757901815738 / 65.24938539425598
  - hu: 69.68728984532616 / 67.0487169762553
  - hy: 62.07464694014795 / 59.183532276789286
  - id: 70.04707464694015 / 67.66829629003848
  - is: 62.42434431741762 / 59.01617226544757
  - it: 70.53127101546738 / 68.10033760906255
  - ja: 72.50504371217215 / 69.74931103158923
  - jv: 57.91190316072628 / 54.05551136648796
  - ka: 51.78211163416275 / 49.874888544058535
  - km: 47.017484868863484 / 44.53364263352014
  - kn: 62.16207128446537 / 59.01185692320829
  - ko: 69.42501681237391 / 67.13169450166086
  - lv: 67.0780094149294 / 64.41720167850707
  - ml: 65.57162071284466 / 62.414138683804424
  - mn: 61.71149966375252 / 58.594805125087234
  - ms: 66.03900470746471 / 63.87937257883887
  - my: 60.8776059179556 / 57.48587618059131
  - nb: 69.87895090786819 / 66.8141299430347
  - nl: 70.45057162071285 / 67.46444039673516
  - pl: 71.546738399462 / 68.63640876702655
  - pt: 70.72965702757229 / 68.54119560379115
  - ro: 68.35574983187625 / 65.88844917691927
  - ru: 71.70477471418964 / 69.19665697061978
  - sl: 67.0880968392737 / 64.76962317666086
  - sq: 65.18493611297916 / 62.49984559035371
  - sv: 71.75857431069265 / 69.20053687623418
  - sw: 58.500336247478145 / 55.2972398687929
  - ta: 62.68997982515132 / 59.36848202755348
  - te: 63.01950235373235 / 60.09351954625423
  - th: 68.29186281102892 / 67.57860496703447
  - tl: 64.77471418964357 / 61.913983147713836
  - tr: 69.87222595830532 / 66.03679033708141
  - ur: 64.04505716207127 / 61.28569169817908
  - vi: 69.38466711499663 / 67.20532357036844
  - zh-CN: 71.12306657700067 / 68.91251226588182
  - zh-TW: 66.20040349697378 / 66.02657347714175
- **MTEB MassiveScenarioClassification** (Classification; dataset `mteb/amazon_massive_scenario`; revision 7d571f92784cd94a019292a1f45445077d0ef634); accuracy / f1:
  - af: 68.73907195696032 / 66.98484521791418
  - am: 60.58843308675185 / 58.95591723092005
  - ar: 66.22730329522528 / 66.0894499712115
  - az: 66.48285137861465 / 65.21963176785157
  - bn: 67.74714189643578 / 66.8212192745412
  - cy: 59.09213180901143 / 56.70735546356339
  - da: 75.05716207128448 / 74.8413712365364
  - de: 74.69737726967047 / 74.7664341963
  - el: 73.90383322125084 / 73.59201554448323
  - en: 77.51176866173503 / 77.46104434577758
  - es: 74.31069266980496 / 74.61048660675635
  - fa: 72.95225285810356 / 72.33160006574627
  - fi: 73.12373907195696 / 73.20921012557481
  - fr: 73.86684599865501 / 73.82348774610831
  - he: 71.40215198386012 / 71.11945183971858
  - hi: 72.12844653665098 / 71.34450495911766
revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.52252858103566 - type: f1 value: 73.98878711342999 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hy) config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.93611297915265 - type: f1 value: 63.723200467653385 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (id) config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.11903160726295 - type: f1 value: 73.82138439467096 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (is) config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.15198386012105 - type: f1 value: 66.02172193802167 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (it) config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.32414256893072 - type: f1 value: 74.30943421170574 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ja) config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.46805648957633 - type: f1 value: 77.62808409298209 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (jv) config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.318762609280434 - type: f1 value: 62.094284066075076 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ka) config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.34902488231338 - type: f1 value: 57.12893860987984 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (km) config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.88433086751849 - type: f1 value: 48.2272350802058 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (kn) config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.4425016812374 - type: f1 value: 64.61463095996173 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ko) config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.04707464694015 - type: f1 value: 75.05099199098998 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (lv) config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.50437121721586 - type: f1 value: 69.83397721096314 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ml) config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.94283792871553 - type: f1 value: 68.8704663703913 - task: type: Classification dataset: type: 
mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (mn) config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.79488903833222 - type: f1 value: 63.615424063345436 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ms) config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.88231338264963 - type: f1 value: 68.57892302593237 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (my) config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.248150638870214 - type: f1 value: 61.06680605338809 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (nb) config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.84196368527236 - type: f1 value: 74.52566464968763 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (nl) config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.8285137861466 - type: f1 value: 74.8853197608802 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (pl) config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.13248150638869 - type: f1 value: 74.3982040999179 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (pt) config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.49024882313383 - type: f1 value: 73.82153848368573 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ro) config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.72158708809684 - type: f1 value: 71.85049433180541 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ru) config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 75.137861466039 - type: f1 value: 75.37628348188467 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sl) config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.86953597848016 - type: f1 value: 71.87537624521661 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sq) config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.27572293207801 - type: f1 value: 68.80017302344231 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sv) config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.09952925353059 - type: f1 value: 76.07992707688408 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sw) config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 
63.140551445864155 - type: f1 value: 61.73855010331415 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ta) config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.27774041694687 - type: f1 value: 64.83664868894539 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (te) config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.69468728984533 - type: f1 value: 64.76239666920868 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (th) config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.44653665097512 - type: f1 value: 73.14646052013873 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (tl) config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.71351714862139 - type: f1 value: 66.67212180163382 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (tr) config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.9946200403497 - type: f1 value: 73.87348793725525 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ur) config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.15400134498992 - type: f1 value: 67.09433241421094 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (vi) config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.11365164761264 - type: f1 value: 73.59502539433753 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.82582380632145 - type: f1 value: 76.89992945316313 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-TW) config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 71.81237390719569 - type: f1 value: 72.36499770986265 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 31.480506569594695 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 29.71252128004552 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.421396787056548 - type: mrr value: 32.48155274872267 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.595 - type: map_at_10 value: 12.642000000000001 - type: map_at_100 value: 15.726 - type: map_at_1000 value: 
17.061999999999998 - type: map_at_3 value: 9.125 - type: map_at_5 value: 10.866000000000001 - type: mrr_at_1 value: 43.344 - type: mrr_at_10 value: 52.227999999999994 - type: mrr_at_100 value: 52.898999999999994 - type: mrr_at_1000 value: 52.944 - type: mrr_at_3 value: 49.845 - type: mrr_at_5 value: 51.115 - type: ndcg_at_1 value: 41.949999999999996 - type: ndcg_at_10 value: 33.995 - type: ndcg_at_100 value: 30.869999999999997 - type: ndcg_at_1000 value: 39.487 - type: ndcg_at_3 value: 38.903999999999996 - type: ndcg_at_5 value: 37.236999999999995 - type: precision_at_1 value: 43.344 - type: precision_at_10 value: 25.480000000000004 - type: precision_at_100 value: 7.672 - type: precision_at_1000 value: 2.028 - type: precision_at_3 value: 36.636 - type: precision_at_5 value: 32.632 - type: recall_at_1 value: 5.595 - type: recall_at_10 value: 16.466 - type: recall_at_100 value: 31.226 - type: recall_at_1000 value: 62.778999999999996 - type: recall_at_3 value: 9.931 - type: recall_at_5 value: 12.884 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 40.414 - type: map_at_10 value: 56.754000000000005 - type: map_at_100 value: 57.457 - type: map_at_1000 value: 57.477999999999994 - type: map_at_3 value: 52.873999999999995 - type: map_at_5 value: 55.175 - type: mrr_at_1 value: 45.278 - type: mrr_at_10 value: 59.192 - type: mrr_at_100 value: 59.650000000000006 - type: mrr_at_1000 value: 59.665 - type: mrr_at_3 value: 56.141 - type: mrr_at_5 value: 57.998000000000005 - type: ndcg_at_1 value: 45.278 - type: ndcg_at_10 value: 64.056 - type: ndcg_at_100 value: 66.89 - type: ndcg_at_1000 value: 67.364 - type: ndcg_at_3 value: 56.97 - type: ndcg_at_5 value: 60.719 - type: precision_at_1 value: 45.278 - type: precision_at_10 value: 9.994 - type: precision_at_100 value: 1.165 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 25.512 - type: precision_at_5 value: 17.509 - type: recall_at_1 value: 40.414 - type: recall_at_10 value: 83.596 - type: recall_at_100 value: 95.72 - type: recall_at_1000 value: 99.24 - type: recall_at_3 value: 65.472 - type: recall_at_5 value: 74.039 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.352 - type: map_at_10 value: 84.369 - type: map_at_100 value: 85.02499999999999 - type: map_at_1000 value: 85.04 - type: map_at_3 value: 81.42399999999999 - type: map_at_5 value: 83.279 - type: mrr_at_1 value: 81.05 - type: mrr_at_10 value: 87.401 - type: mrr_at_100 value: 87.504 - type: mrr_at_1000 value: 87.505 - type: mrr_at_3 value: 86.443 - type: mrr_at_5 value: 87.10799999999999 - type: ndcg_at_1 value: 81.04 - type: ndcg_at_10 value: 88.181 - type: ndcg_at_100 value: 89.411 - type: ndcg_at_1000 value: 89.507 - type: ndcg_at_3 value: 85.28099999999999 - type: ndcg_at_5 value: 86.888 - type: precision_at_1 value: 81.04 - type: precision_at_10 value: 13.406 - type: precision_at_100 value: 1.5350000000000001 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.31 - type: precision_at_5 value: 24.54 - type: recall_at_1 value: 70.352 - type: recall_at_10 value: 95.358 - type: recall_at_100 value: 99.541 - type: recall_at_1000 value: 99.984 - type: recall_at_3 value: 87.111 - type: recall_at_5 value: 91.643 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb 
metrics: - type: v_measure value: 46.54068723291946 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 63.216287629895994 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.023000000000001 - type: map_at_10 value: 10.071 - type: map_at_100 value: 11.892 - type: map_at_1000 value: 12.196 - type: map_at_3 value: 7.234 - type: map_at_5 value: 8.613999999999999 - type: mrr_at_1 value: 19.900000000000002 - type: mrr_at_10 value: 30.516 - type: mrr_at_100 value: 31.656000000000002 - type: mrr_at_1000 value: 31.723000000000003 - type: mrr_at_3 value: 27.400000000000002 - type: mrr_at_5 value: 29.270000000000003 - type: ndcg_at_1 value: 19.900000000000002 - type: ndcg_at_10 value: 17.474 - type: ndcg_at_100 value: 25.020999999999997 - type: ndcg_at_1000 value: 30.728 - type: ndcg_at_3 value: 16.588 - type: ndcg_at_5 value: 14.498 - type: precision_at_1 value: 19.900000000000002 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 2.011 - type: precision_at_1000 value: 0.33899999999999997 - type: precision_at_3 value: 15.667 - type: precision_at_5 value: 12.839999999999998 - type: recall_at_1 value: 4.023000000000001 - type: recall_at_10 value: 18.497 - type: recall_at_100 value: 40.8 - type: recall_at_1000 value: 68.812 - type: recall_at_3 value: 9.508 - type: recall_at_5 value: 12.983 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.967008785134 - type: cos_sim_spearman value: 80.23142141101837 - type: euclidean_pearson value: 81.20166064704539 - type: euclidean_spearman value: 80.18961335654585 - type: manhattan_pearson value: 81.13925443187625 - type: manhattan_spearman value: 80.07948723044424 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.94262461316023 - type: cos_sim_spearman value: 80.01596278563865 - type: euclidean_pearson value: 83.80799622922581 - type: euclidean_spearman value: 79.94984954947103 - type: manhattan_pearson value: 83.68473841756281 - type: manhattan_spearman value: 79.84990707951822 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 80.57346443146068 - type: cos_sim_spearman value: 81.54689837570866 - type: euclidean_pearson value: 81.10909881516007 - type: euclidean_spearman value: 81.56746243261762 - type: manhattan_pearson value: 80.87076036186582 - type: manhattan_spearman value: 81.33074987964402 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 79.54733787179849 - type: cos_sim_spearman value: 77.72202105610411 - type: euclidean_pearson value: 78.9043595478849 - type: euclidean_spearman value: 77.93422804309435 - type: manhattan_pearson value: 78.58115121621368 - type: manhattan_spearman value: 77.62508135122033 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 
metrics: - type: cos_sim_pearson value: 88.59880017237558 - type: cos_sim_spearman value: 89.31088630824758 - type: euclidean_pearson value: 88.47069261564656 - type: euclidean_spearman value: 89.33581971465233 - type: manhattan_pearson value: 88.40774264100956 - type: manhattan_spearman value: 89.28657485627835 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.08055117917084 - type: cos_sim_spearman value: 85.78491813080304 - type: euclidean_pearson value: 84.99329155500392 - type: euclidean_spearman value: 85.76728064677287 - type: manhattan_pearson value: 84.87947428989587 - type: manhattan_spearman value: 85.62429454917464 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (ko-ko) config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 82.14190939287384 - type: cos_sim_spearman value: 82.27331573306041 - type: euclidean_pearson value: 81.891896953716 - type: euclidean_spearman value: 82.37695542955998 - type: manhattan_pearson value: 81.73123869460504 - type: manhattan_spearman value: 82.19989168441421 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (ar-ar) config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.84695301843362 - type: cos_sim_spearman value: 77.87790986014461 - type: euclidean_pearson value: 76.91981583106315 - type: euclidean_spearman value: 77.88154772749589 - type: manhattan_pearson value: 76.94953277451093 - type: manhattan_spearman value: 77.80499230728604 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-ar) config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.44657840482016 - type: cos_sim_spearman value: 75.05531095119674 - type: euclidean_pearson value: 75.88161755829299 - type: euclidean_spearman value: 74.73176238219332 - type: manhattan_pearson value: 75.63984765635362 - type: manhattan_spearman value: 74.86476440770737 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-de) config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.64700140524133 - type: cos_sim_spearman value: 86.16014210425672 - type: euclidean_pearson value: 86.49086860843221 - type: euclidean_spearman value: 86.09729326815614 - type: manhattan_pearson value: 86.43406265125513 - type: manhattan_spearman value: 86.17740150939994 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.91170098764921 - type: cos_sim_spearman value: 88.12437004058931 - type: euclidean_pearson value: 88.81828254494437 - type: euclidean_spearman value: 88.14831794572122 - type: manhattan_pearson value: 88.93442183448961 - type: manhattan_spearman value: 88.15254630778304 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-tr) config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.91390577997292 - type: cos_sim_spearman value: 71.22979457536074 - type: euclidean_pearson value: 74.40314008106749 - type: euclidean_spearman value: 72.54972136083246 - type: manhattan_pearson 
value: 73.85687539530218 - type: manhattan_spearman value: 72.09500771742637 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (es-en) config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 80.9301067983089 - type: cos_sim_spearman value: 80.74989828346473 - type: euclidean_pearson value: 81.36781301814257 - type: euclidean_spearman value: 80.9448819964426 - type: manhattan_pearson value: 81.0351322685609 - type: manhattan_spearman value: 80.70192121844177 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (es-es) config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.13820465980005 - type: cos_sim_spearman value: 86.73532498758757 - type: euclidean_pearson value: 87.21329451846637 - type: euclidean_spearman value: 86.57863198601002 - type: manhattan_pearson value: 87.06973713818554 - type: manhattan_spearman value: 86.47534918791499 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (fr-en) config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.48720108904415 - type: cos_sim_spearman value: 85.62221757068387 - type: euclidean_pearson value: 86.1010129512749 - type: euclidean_spearman value: 85.86580966509942 - type: manhattan_pearson value: 86.26800938808971 - type: manhattan_spearman value: 85.88902721678429 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (it-en) config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 83.98021347333516 - type: cos_sim_spearman value: 84.53806553803501 - type: euclidean_pearson value: 84.61483347248364 - type: euclidean_spearman value: 85.14191408011702 - type: manhattan_pearson value: 84.75297588825967 - type: manhattan_spearman value: 85.33176753669242 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (nl-en) config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 84.51856644893233 - type: cos_sim_spearman value: 85.27510748506413 - type: euclidean_pearson value: 85.09886861540977 - type: euclidean_spearman value: 85.62579245860887 - type: manhattan_pearson value: 84.93017860464607 - type: manhattan_spearman value: 85.5063988898453 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.581573200584195 - type: cos_sim_spearman value: 63.05503590247928 - type: euclidean_pearson value: 63.652564812602094 - type: euclidean_spearman value: 62.64811520876156 - type: manhattan_pearson value: 63.506842893061076 - type: manhattan_spearman value: 62.51289573046917 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de) config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.2248801729127 - type: cos_sim_spearman value: 56.5936604678561 - type: euclidean_pearson value: 43.98149464089 - type: euclidean_spearman value: 56.108561882423615 - type: manhattan_pearson value: 43.86880305903564 - type: manhattan_spearman value: 56.04671150510166 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es) config: es split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 
metrics: - type: cos_sim_pearson value: 55.17564527009831 - type: cos_sim_spearman value: 64.57978560979488 - type: euclidean_pearson value: 58.8818330154583 - type: euclidean_spearman value: 64.99214839071281 - type: manhattan_pearson value: 58.72671436121381 - type: manhattan_spearman value: 65.10713416616109 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (pl) config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 26.772131864023297 - type: cos_sim_spearman value: 34.68200792408681 - type: euclidean_pearson value: 16.68082419005441 - type: euclidean_spearman value: 34.83099932652166 - type: manhattan_pearson value: 16.52605949659529 - type: manhattan_spearman value: 34.82075801399475 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (tr) config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.42415189043831 - type: cos_sim_spearman value: 63.54594264576758 - type: euclidean_pearson value: 57.36577498297745 - type: euclidean_spearman value: 63.111466379158074 - type: manhattan_pearson value: 57.584543715873885 - type: manhattan_spearman value: 63.22361054139183 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (ar) config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 47.55216762405518 - type: cos_sim_spearman value: 56.98670142896412 - type: euclidean_pearson value: 50.15318757562699 - type: euclidean_spearman value: 56.524941926541906 - type: manhattan_pearson value: 49.955618528674904 - type: manhattan_spearman value: 56.37102209240117 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (ru) config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 49.20540980338571 - type: cos_sim_spearman value: 59.9009453504406 - type: euclidean_pearson value: 49.557749853620535 - type: euclidean_spearman value: 59.76631621172456 - type: manhattan_pearson value: 49.62340591181147 - type: manhattan_spearman value: 59.94224880322436 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.508169956576985 - type: cos_sim_spearman value: 66.82461565306046 - type: euclidean_pearson value: 56.2274426480083 - type: euclidean_spearman value: 66.6775323848333 - type: manhattan_pearson value: 55.98277796300661 - type: manhattan_spearman value: 66.63669848497175 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (fr) config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.86478788045507 - type: cos_sim_spearman value: 76.7946552053193 - type: euclidean_pearson value: 75.01598530490269 - type: euclidean_spearman value: 76.83618917858281 - type: manhattan_pearson value: 74.68337628304332 - type: manhattan_spearman value: 76.57480204017773 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-en) config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.922619099401984 - type: cos_sim_spearman value: 56.599362477240774 - type: euclidean_pearson value: 56.68307052369783 - type: euclidean_spearman value: 54.28760436777401 - type: manhattan_pearson value: 
56.67763566500681 - type: manhattan_spearman value: 53.94619541711359 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es-en) config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 66.74357206710913 - type: cos_sim_spearman value: 72.5208244925311 - type: euclidean_pearson value: 67.49254562186032 - type: euclidean_spearman value: 72.02469076238683 - type: manhattan_pearson value: 67.45251772238085 - type: manhattan_spearman value: 72.05538819984538 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (it) config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 71.25734330033191 - type: cos_sim_spearman value: 76.98349083946823 - type: euclidean_pearson value: 73.71642838667736 - type: euclidean_spearman value: 77.01715504651384 - type: manhattan_pearson value: 73.61712711868105 - type: manhattan_spearman value: 77.01392571153896 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (pl-en) config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.18215462781212 - type: cos_sim_spearman value: 65.54373266117607 - type: euclidean_pearson value: 64.54126095439005 - type: euclidean_spearman value: 65.30410369102711 - type: manhattan_pearson value: 63.50332221148234 - type: manhattan_spearman value: 64.3455878104313 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh-en) config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 62.30509221440029 - type: cos_sim_spearman value: 65.99582704642478 - type: euclidean_pearson value: 63.43818859884195 - type: euclidean_spearman value: 66.83172582815764 - type: manhattan_pearson value: 63.055779168508764 - type: manhattan_spearman value: 65.49585020501449 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es-it) config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.587830825340404 - type: cos_sim_spearman value: 68.93467614588089 - type: euclidean_pearson value: 62.3073527367404 - type: euclidean_spearman value: 69.69758171553175 - type: manhattan_pearson value: 61.9074580815789 - type: manhattan_spearman value: 69.57696375597865 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-fr) config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.143220125577066 - type: cos_sim_spearman value: 67.78857859159226 - type: euclidean_pearson value: 55.58225107923733 - type: euclidean_spearman value: 67.80662907184563 - type: manhattan_pearson value: 56.24953502726514 - type: manhattan_spearman value: 67.98262125431616 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-pl) config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 21.826928900322066 - type: cos_sim_spearman value: 49.578506634400405 - type: euclidean_pearson value: 27.939890138843214 - type: euclidean_spearman value: 52.71950519136242 - type: manhattan_pearson value: 26.39878683847546 - type: manhattan_spearman value: 47.54609580342499 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (fr-pl) config: fr-pl split: test revision: 
6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 57.27603854632001 - type: cos_sim_spearman value: 50.709255283710995 - type: euclidean_pearson value: 59.5419024445929 - type: euclidean_spearman value: 50.709255283710995 - type: manhattan_pearson value: 59.03256832438492 - type: manhattan_spearman value: 61.97797868009122 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 85.00757054859712 - type: cos_sim_spearman value: 87.29283629622222 - type: euclidean_pearson value: 86.54824171775536 - type: euclidean_spearman value: 87.24364730491402 - type: manhattan_pearson value: 86.5062156915074 - type: manhattan_spearman value: 87.15052170378574 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 82.03549357197389 - type: mrr value: 95.05437645143527 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 57.260999999999996 - type: map_at_10 value: 66.259 - type: map_at_100 value: 66.884 - type: map_at_1000 value: 66.912 - type: map_at_3 value: 63.685 - type: map_at_5 value: 65.35499999999999 - type: mrr_at_1 value: 60.333000000000006 - type: mrr_at_10 value: 67.5 - type: mrr_at_100 value: 68.013 - type: mrr_at_1000 value: 68.038 - type: mrr_at_3 value: 65.61099999999999 - type: mrr_at_5 value: 66.861 - type: ndcg_at_1 value: 60.333000000000006 - type: ndcg_at_10 value: 70.41 - type: ndcg_at_100 value: 73.10600000000001 - type: ndcg_at_1000 value: 73.846 - type: ndcg_at_3 value: 66.133 - type: ndcg_at_5 value: 68.499 - type: precision_at_1 value: 60.333000000000006 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.0630000000000002 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.067 - type: recall_at_1 value: 57.260999999999996 - type: recall_at_10 value: 81.94399999999999 - type: recall_at_100 value: 93.867 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.339 - type: recall_at_5 value: 76.25 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.74356435643564 - type: cos_sim_ap value: 93.13411948212683 - type: cos_sim_f1 value: 86.80521991300147 - type: cos_sim_precision value: 84.00374181478017 - type: cos_sim_recall value: 89.8 - type: dot_accuracy value: 99.67920792079208 - type: dot_ap value: 89.27277565444479 - type: dot_f1 value: 83.9276990718124 - type: dot_precision value: 82.04393505253104 - type: dot_recall value: 85.9 - type: euclidean_accuracy value: 99.74257425742574 - type: euclidean_ap value: 93.17993008259062 - type: euclidean_f1 value: 86.69396110542476 - type: euclidean_precision value: 88.78406708595388 - type: euclidean_recall value: 84.7 - type: manhattan_accuracy value: 99.74257425742574 - type: manhattan_ap value: 93.14413755550099 - type: manhattan_f1 value: 86.82483594144371 - type: manhattan_precision value: 87.66564729867483 - type: manhattan_recall value: 86 - type: max_accuracy value: 99.74356435643564 - type: max_ap value: 93.17993008259062 - type: 
max_f1 value: 86.82483594144371 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 57.525863806168566 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.68850574423839 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.71580650644033 - type: mrr value: 50.50971903913081 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.152190498799484 - type: cos_sim_spearman value: 29.686180371952727 - type: dot_pearson value: 27.248664793816342 - type: dot_spearman value: 28.37748983721745 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.20400000000000001 - type: map_at_10 value: 1.6209999999999998 - type: map_at_100 value: 9.690999999999999 - type: map_at_1000 value: 23.733 - type: map_at_3 value: 0.575 - type: map_at_5 value: 0.885 - type: mrr_at_1 value: 78 - type: mrr_at_10 value: 86.56700000000001 - type: mrr_at_100 value: 86.56700000000001 - type: mrr_at_1000 value: 86.56700000000001 - type: mrr_at_3 value: 85.667 - type: mrr_at_5 value: 86.56700000000001 - type: ndcg_at_1 value: 76 - type: ndcg_at_10 value: 71.326 - type: ndcg_at_100 value: 54.208999999999996 - type: ndcg_at_1000 value: 49.252 - type: ndcg_at_3 value: 74.235 - type: ndcg_at_5 value: 73.833 - type: precision_at_1 value: 78 - type: precision_at_10 value: 74.8 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.836 - type: precision_at_3 value: 78 - type: precision_at_5 value: 78 - type: recall_at_1 value: 0.20400000000000001 - type: recall_at_10 value: 1.894 - type: recall_at_100 value: 13.245999999999999 - type: recall_at_1000 value: 46.373 - type: recall_at_3 value: 0.613 - type: recall_at_5 value: 0.991 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (sqi-eng) config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.89999999999999 - type: f1 value: 94.69999999999999 - type: precision value: 94.11666666666667 - type: recall value: 95.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fry-eng) config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.20809248554913 - type: f1 value: 63.431048720066066 - type: precision value: 61.69143958161298 - type: recall value: 68.20809248554913 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kur-eng) config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 71.21951219512195 - type: f1 value: 66.82926829268293 - type: precision value: 65.1260162601626 - type: recall value: 71.21951219512195 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tur-eng) config: tur-eng split: 
test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.2 - type: f1 value: 96.26666666666667 - type: precision value: 95.8 - type: recall value: 97.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (deu-eng) config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 99.3 - type: f1 value: 99.06666666666666 - type: precision value: 98.95 - type: recall value: 99.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nld-eng) config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.63333333333333 - type: precision value: 96.26666666666668 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ron-eng) config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.86666666666666 - type: precision value: 94.31666666666668 - type: recall value: 96 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ang-eng) config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.01492537313433 - type: f1 value: 40.178867566927266 - type: precision value: 38.179295828549556 - type: recall value: 47.01492537313433 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ido-eng) config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.5 - type: f1 value: 83.62537480063796 - type: precision value: 82.44555555555554 - type: recall value: 86.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (jav-eng) config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.48780487804879 - type: f1 value: 75.45644599303138 - type: precision value: 73.37398373983739 - type: recall value: 80.48780487804879 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (isl-eng) config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.95666666666666 - type: precision value: 91.125 - type: recall value: 93.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (slv-eng) config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.73754556500607 - type: f1 value: 89.65168084244632 - type: precision value: 88.73025516403402 - type: recall value: 91.73754556500607 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cym-eng) config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81.04347826086956 - type: f1 value: 76.2128364389234 - type: precision value: 74.2 - type: recall value: 81.04347826086956 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kaz-eng) config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.65217391304348 - type: f1 value: 79.4376811594203 - type: precision value: 77.65797101449274 - type: recall value: 83.65217391304348 - task: type: BitextMining dataset: type: 
mteb/tatoeba-bitext-mining name: MTEB Tatoeba (est-eng) config: est-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 85.02690476190476 - type: precision value: 83.96261904761904 - type: recall value: 87.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (heb-eng) config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.3 - type: f1 value: 86.52333333333333 - type: precision value: 85.22833333333332 - type: recall value: 89.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gla-eng) config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.01809408926418 - type: f1 value: 59.00594446432805 - type: precision value: 56.827215807915444 - type: recall value: 65.01809408926418 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mar-eng) config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.2 - type: f1 value: 88.58 - type: precision value: 87.33333333333334 - type: recall value: 91.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lat-eng) config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.199999999999996 - type: f1 value: 53.299166276284915 - type: precision value: 51.3383908045977 - type: recall value: 59.199999999999996 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bel-eng) config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.2 - type: precision value: 90.25 - type: recall value: 93.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pms-eng) config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 64.76190476190476 - type: f1 value: 59.867110667110666 - type: precision value: 58.07390192653351 - type: recall value: 64.76190476190476 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gle-eng) config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.2 - type: f1 value: 71.48147546897547 - type: precision value: 69.65409090909091 - type: recall value: 76.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pes-eng) config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.8 - type: f1 value: 92.14 - type: precision value: 91.35833333333333 - type: recall value: 93.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nob-eng) config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.89999999999999 - type: f1 value: 97.2 - type: precision value: 96.85000000000001 - type: recall value: 97.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bul-eng) config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 92.93333333333334 - type: precision value: 92.13333333333333 - type: recall value: 94.6 - task: type: 
BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cbk-eng) config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.1 - type: f1 value: 69.14817460317461 - type: precision value: 67.2515873015873 - type: recall value: 74.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hun-eng) config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.19999999999999 - type: f1 value: 94.01333333333335 - type: precision value: 93.46666666666667 - type: recall value: 95.19999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (uig-eng) config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.9 - type: f1 value: 72.07523809523809 - type: precision value: 70.19777777777779 - type: recall value: 76.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (rus-eng) config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.1 - type: f1 value: 92.31666666666666 - type: precision value: 91.43333333333332 - type: recall value: 94.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (spa-eng) config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.8 - type: f1 value: 97.1 - type: precision value: 96.76666666666668 - type: recall value: 97.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hye-eng) config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.85714285714286 - type: f1 value: 90.92093441150045 - type: precision value: 90.00449236298293 - type: recall value: 92.85714285714286 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tel-eng) config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.16239316239316 - type: f1 value: 91.33903133903132 - type: precision value: 90.56267806267806 - type: recall value: 93.16239316239316 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (afr-eng) config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.25666666666666 - type: precision value: 89.25833333333334 - type: recall value: 92.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mon-eng) config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.22727272727272 - type: f1 value: 87.53030303030303 - type: precision value: 86.37121212121211 - type: recall value: 90.22727272727272 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (arz-eng) config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.03563941299791 - type: f1 value: 74.7349505840072 - type: precision value: 72.9035639412998 - type: recall value: 79.03563941299791 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hrv-eng) config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97 - type: f1 value: 96.15 - type: 
precision value: 95.76666666666668 - type: recall value: 97 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nov-eng) config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.26459143968872 - type: f1 value: 71.55642023346303 - type: precision value: 69.7544932369835 - type: recall value: 76.26459143968872 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gsw-eng) config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.119658119658126 - type: f1 value: 51.65242165242165 - type: precision value: 49.41768108434775 - type: recall value: 58.119658119658126 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nds-eng) config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.52055555555555 - type: precision value: 67.7574938949939 - type: recall value: 74.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ukr-eng) config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.8 - type: f1 value: 93.31666666666666 - type: precision value: 92.60000000000001 - type: recall value: 94.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (uzb-eng) config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.63551401869158 - type: f1 value: 72.35202492211837 - type: precision value: 70.60358255451713 - type: recall value: 76.63551401869158 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lit-eng) config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 88.4811111111111 - type: precision value: 87.7452380952381 - type: recall value: 90.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ina-eng) config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95 - type: f1 value: 93.60666666666667 - type: precision value: 92.975 - type: recall value: 95 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lfn-eng) config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 63.01595782872099 - type: precision value: 61.596587301587306 - type: recall value: 67.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (zsm-eng) config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.7 - type: f1 value: 94.52999999999999 - type: precision value: 94 - type: recall value: 95.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ita-eng) config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.28999999999999 - type: precision value: 92.675 - type: recall value: 94.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cmn-eng) config: cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 
95.28333333333333 - type: precision value: 94.75 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lvs-eng) config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.9 - type: f1 value: 89.83 - type: precision value: 88.92 - type: recall value: 91.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (glg-eng) config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.34222222222223 - type: precision value: 92.75416666666668 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ceb-eng) config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.333333333333336 - type: f1 value: 55.31203703703703 - type: precision value: 53.39971108326371 - type: recall value: 60.333333333333336 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bre-eng) config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 12.9 - type: f1 value: 11.099861903031458 - type: precision value: 10.589187932631877 - type: recall value: 12.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ben-eng) config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.7 - type: f1 value: 83.0152380952381 - type: precision value: 81.37833333333333 - type: recall value: 86.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swg-eng) config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.39285714285714 - type: f1 value: 56.832482993197274 - type: precision value: 54.56845238095237 - type: recall value: 63.39285714285714 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (arq-eng) config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.73765093304062 - type: f1 value: 41.555736920720456 - type: precision value: 39.06874531737319 - type: recall value: 48.73765093304062 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kab-eng) config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 41.099999999999994 - type: f1 value: 36.540165945165946 - type: precision value: 35.05175685425686 - type: recall value: 41.099999999999994 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fra-eng) config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.42333333333333 - type: precision value: 92.75833333333333 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (por-eng) config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.63333333333334 - type: precision value: 93.01666666666665 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba 
(tat-eng) config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.9 - type: f1 value: 73.64833333333334 - type: precision value: 71.90282106782105 - type: recall value: 77.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (oci-eng) config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.4 - type: f1 value: 54.90521367521367 - type: precision value: 53.432840025471606 - type: recall value: 59.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pol-eng) config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.6 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (war-eng) config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 67.2 - type: f1 value: 62.25926129426129 - type: precision value: 60.408376623376626 - type: recall value: 67.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (aze-eng) config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.60666666666667 - type: precision value: 86.45277777777778 - type: recall value: 90.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (vie-eng) config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.7 - type: f1 value: 97 - type: precision value: 96.65 - type: recall value: 97.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nno-eng) config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.2 - type: f1 value: 91.39746031746031 - type: precision value: 90.6125 - type: recall value: 93.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cha-eng) config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.11678832116788 - type: f1 value: 27.210415386260234 - type: precision value: 26.20408990846947 - type: recall value: 32.11678832116788 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mhr-eng) config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.787319277832475 - type: precision value: 6.3452094433344435 - type: recall value: 8.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dan-eng) config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.1 - type: f1 value: 95.08 - type: precision value: 94.61666666666667 - type: recall value: 96.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ell-eng) config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.3 - type: f1 value: 93.88333333333333 - type: precision value: 93.18333333333332 - type: recall value: 95.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (amh-eng) config: amh-eng split: test 
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.11904761904762 - type: f1 value: 80.69444444444444 - type: precision value: 78.72023809523809 - type: recall value: 85.11904761904762 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pam-eng) config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 11.1 - type: f1 value: 9.276381801735853 - type: precision value: 8.798174603174601 - type: recall value: 11.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hsb-eng) config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.56107660455487 - type: f1 value: 58.70433569191332 - type: precision value: 56.896926581464015 - type: recall value: 63.56107660455487 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (srp-eng) config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.69999999999999 - type: f1 value: 93.10000000000001 - type: precision value: 92.35 - type: recall value: 94.69999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (epo-eng) config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.8 - type: f1 value: 96.01222222222222 - type: precision value: 95.67083333333332 - type: recall value: 96.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kzj-eng) config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.2 - type: f1 value: 7.911555250305249 - type: precision value: 7.631246556216846 - type: recall value: 9.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (awa-eng) config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.48917748917748 - type: f1 value: 72.27375798804371 - type: precision value: 70.14430014430013 - type: recall value: 77.48917748917748 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fao-eng) config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.09923664122137 - type: f1 value: 72.61541257724463 - type: precision value: 70.8998380754106 - type: recall value: 77.09923664122137 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mal-eng) config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 98.2532751091703 - type: f1 value: 97.69529354682193 - type: precision value: 97.42843279961184 - type: recall value: 98.2532751091703 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ile-eng) config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.8 - type: f1 value: 79.14672619047619 - type: precision value: 77.59489247311828 - type: recall value: 82.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bos-eng) config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.35028248587571 - type: f1 value: 92.86252354048965 - type: precision value: 92.2080979284369 - type: recall value: 
94.35028248587571 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cor-eng) config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.5 - type: f1 value: 6.282429263935621 - type: precision value: 5.783274240739785 - type: recall value: 8.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cat-eng) config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 91.025 - type: precision value: 90.30428571428571 - type: recall value: 92.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (eus-eng) config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 81 - type: f1 value: 77.8232380952381 - type: precision value: 76.60194444444444 - type: recall value: 81 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (yue-eng) config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91 - type: f1 value: 88.70857142857142 - type: precision value: 87.7 - type: recall value: 91 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swe-eng) config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.3 - type: precision value: 94.76666666666667 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dtp-eng) config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.1 - type: f1 value: 7.001008218834307 - type: precision value: 6.708329562594269 - type: recall value: 8.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kat-eng) config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.1313672922252 - type: f1 value: 84.09070598748882 - type: precision value: 82.79171454104429 - type: recall value: 87.1313672922252 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (jpn-eng) config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.28333333333333 - type: precision value: 94.73333333333332 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (csb-eng) config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 42.29249011857708 - type: f1 value: 36.981018542283365 - type: precision value: 35.415877813576024 - type: recall value: 42.29249011857708 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (xho-eng) config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.80281690140845 - type: f1 value: 80.86854460093896 - type: precision value: 79.60093896713614 - type: recall value: 83.80281690140845 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (orv-eng) config: orv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 45.26946107784431 - type: f1 value: 
39.80235464678088 - type: precision value: 38.14342660001342 - type: recall value: 45.26946107784431 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ind-eng) config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.3 - type: f1 value: 92.9 - type: precision value: 92.26666666666668 - type: recall value: 94.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tuk-eng) config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.93103448275862 - type: f1 value: 33.15192743764172 - type: precision value: 31.57456528146183 - type: recall value: 37.93103448275862 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (max-eng) config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.01408450704226 - type: f1 value: 63.41549295774648 - type: precision value: 61.342778895595806 - type: recall value: 69.01408450704226 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swh-eng) config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.66666666666667 - type: f1 value: 71.60705960705961 - type: precision value: 69.60683760683762 - type: recall value: 76.66666666666667 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hin-eng) config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 95.8 - type: f1 value: 94.48333333333333 - type: precision value: 93.83333333333333 - type: recall value: 95.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dsb-eng) config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.81837160751566 - type: f1 value: 48.435977731384824 - type: precision value: 47.11291973845539 - type: recall value: 52.81837160751566 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ber-eng) config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.9 - type: f1 value: 38.88962621607783 - type: precision value: 36.95936507936508 - type: recall value: 44.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tam-eng) config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.55374592833876 - type: f1 value: 88.22553125484721 - type: precision value: 87.26927252985884 - type: recall value: 90.55374592833876 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (slk-eng) config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.6 - type: f1 value: 93.13333333333333 - type: precision value: 92.45333333333333 - type: recall value: 94.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tgl-eng) config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.7 - type: f1 value: 91.99666666666667 - type: precision value: 91.26666666666668 - type: recall value: 93.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ast-eng) config: ast-eng split: 
test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.03937007874016 - type: f1 value: 81.75853018372703 - type: precision value: 80.34120734908137 - type: recall value: 85.03937007874016 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mkd-eng) config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.3 - type: f1 value: 85.5 - type: precision value: 84.25833333333334 - type: recall value: 88.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (khm-eng) config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.51246537396122 - type: f1 value: 60.02297410192148 - type: precision value: 58.133467727289236 - type: recall value: 65.51246537396122 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ces-eng) config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96 - type: f1 value: 94.89 - type: precision value: 94.39166666666667 - type: recall value: 96 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tzl-eng) config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.692307692307686 - type: f1 value: 53.162393162393165 - type: precision value: 51.70673076923077 - type: recall value: 57.692307692307686 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (urd-eng) config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.60000000000001 - type: f1 value: 89.21190476190475 - type: precision value: 88.08666666666667 - type: recall value: 91.60000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ara-eng) config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88 - type: f1 value: 85.47 - type: precision value: 84.43266233766234 - type: recall value: 88 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kor-eng) config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.7 - type: f1 value: 90.64999999999999 - type: precision value: 89.68333333333332 - type: recall value: 92.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (yid-eng) config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.30660377358491 - type: f1 value: 76.33044137466307 - type: precision value: 74.78970125786164 - type: recall value: 80.30660377358491 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fin-eng) config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.39999999999999 - type: f1 value: 95.44 - type: precision value: 94.99166666666666 - type: recall value: 96.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tha-eng) config: tha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.53284671532847 - type: f1 value: 95.37712895377129 - type: precision value: 94.7992700729927 - type: recall value: 96.53284671532847 - task: type: 
BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (wuu-eng) config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89 - type: f1 value: 86.23190476190476 - type: precision value: 85.035 - type: recall value: 89 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.585 - type: map_at_10 value: 9.012 - type: map_at_100 value: 14.027000000000001 - type: map_at_1000 value: 15.565000000000001 - type: map_at_3 value: 5.032 - type: map_at_5 value: 6.657 - type: mrr_at_1 value: 28.571 - type: mrr_at_10 value: 45.377 - type: mrr_at_100 value: 46.119 - type: mrr_at_1000 value: 46.127 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 42.585 - type: ndcg_at_1 value: 27.551 - type: ndcg_at_10 value: 23.395 - type: ndcg_at_100 value: 33.342 - type: ndcg_at_1000 value: 45.523 - type: ndcg_at_3 value: 25.158 - type: ndcg_at_5 value: 23.427 - type: precision_at_1 value: 28.571 - type: precision_at_10 value: 21.429000000000002 - type: precision_at_100 value: 6.714 - type: precision_at_1000 value: 1.473 - type: precision_at_3 value: 27.211000000000002 - type: precision_at_5 value: 24.490000000000002 - type: recall_at_1 value: 2.585 - type: recall_at_10 value: 15.418999999999999 - type: recall_at_100 value: 42.485 - type: recall_at_1000 value: 79.536 - type: recall_at_3 value: 6.239999999999999 - type: recall_at_5 value: 8.996 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.3234 - type: ap value: 14.361688653847423 - type: f1 value: 54.819068624319044 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.97792869269949 - type: f1 value: 62.28965628513728 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 38.90540145385218 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.53513739047506 - type: cos_sim_ap value: 75.27741586677557 - type: cos_sim_f1 value: 69.18792902473774 - type: cos_sim_precision value: 67.94708725515136 - type: cos_sim_recall value: 70.47493403693932 - type: dot_accuracy value: 84.7052512368123 - type: dot_ap value: 69.36075482849378 - type: dot_f1 value: 64.44688376631296 - type: dot_precision value: 59.92288500793831 - type: dot_recall value: 69.70976253298153 - type: euclidean_accuracy value: 86.60666388508076 - type: euclidean_ap value: 75.47512772621097 - type: euclidean_f1 value: 69.413872536473 - type: euclidean_precision value: 67.39562624254472 - type: euclidean_recall value: 71.55672823218997 - type: manhattan_accuracy value: 86.52917684925792 - type: manhattan_ap value: 75.34000110496703 - type: manhattan_f1 value: 69.28489190226429 - type: manhattan_precision value: 67.24608889992551 - type: manhattan_recall value: 71.45118733509234 - type: max_accuracy value: 
86.60666388508076 - type: max_ap value: 75.47512772621097 - type: max_f1 value: 69.413872536473 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.01695967710637 - type: cos_sim_ap value: 85.8298270742901 - type: cos_sim_f1 value: 78.46988128389272 - type: cos_sim_precision value: 74.86017897091722 - type: cos_sim_recall value: 82.44533415460425 - type: dot_accuracy value: 88.19420188613343 - type: dot_ap value: 83.82679165901324 - type: dot_f1 value: 76.55833777304208 - type: dot_precision value: 75.6884875846501 - type: dot_recall value: 77.44841392054204 - type: euclidean_accuracy value: 89.03054294252338 - type: euclidean_ap value: 85.89089555185325 - type: euclidean_f1 value: 78.62997658079624 - type: euclidean_precision value: 74.92329149232914 - type: euclidean_recall value: 82.72251308900523 - type: manhattan_accuracy value: 89.0266620095471 - type: manhattan_ap value: 85.86458997929147 - type: manhattan_f1 value: 78.50685331000291 - type: manhattan_precision value: 74.5499861534201 - type: manhattan_recall value: 82.90729904527257 - type: max_accuracy value: 89.03054294252338 - type: max_ap value: 85.89089555185325 - type: max_f1 value: 78.62997658079624 language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit --- ## Multilingual-E5-large [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022 This model has 24 layers and the embedding size is 1024. ## Usage Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset. ```python import torch.nn.functional as F from torch import Tensor from transformers import AutoTokenizer, AutoModel def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor: last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0) return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None] # Each input text should start with "query: " or "passage: ", even for non-English texts. # For tasks other than retrieval, you can simply use the "query: " prefix. input_texts = ['query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. 
Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"] tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large') model = AutoModel.from_pretrained('intfloat/multilingual-e5-large') # Tokenize the input texts batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt') outputs = model(**batch_dict) embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask']) # normalize embeddings embeddings = F.normalize(embeddings, p=2, dim=1) scores = (embeddings[:2] @ embeddings[2:].T) * 100 print(scores.tolist()) ``` ## Supported Languages This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation. ## Training Details **Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) **First stage**: contrastive pre-training with weak supervision | Dataset | Weak supervision | # of text pairs | |--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------| | Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B | | [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M | | [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B | | [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M | | Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M | | [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M | | [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M | | [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M | | [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M | **Second stage**: supervised fine-tuning | Dataset | Language | # of text pairs | |----------------------------------------------------------------------------------------|--------------|-----------------| | [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k | | [NQ](https://github.com/facebookresearch/DPR) | English | 70k | | [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k | | [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k | | [ELI5](https://huggingface.co/datasets/eli5) | English | 500k | | [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k | | [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k | | [SQuAD](https://huggingface.co/datasets/squad) | English | 87k | | [Quora](https://huggingface.co/datasets/quora) | English | 150k | | [Mr. 
TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k | | [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k | For all labeled datasets, we only use their training sets for fine-tuning. For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf). ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787) | Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th | |-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- | | BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 | | mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 | | BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 | | | | | multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 | | multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 | | multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 | ## MTEB Benchmark Evaluation Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316). ## Support for Sentence Transformers Below is an example for usage with sentence_transformers. ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('intfloat/multilingual-e5-large') input_texts = [ 'query: how much protein should a female eat', 'query: 南瓜的家常做法', "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.", "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅" ] embeddings = model.encode(input_texts, normalize_embeddings=True) ``` Package requirements: `pip install sentence_transformers~=2.2.2` Contributors: [michaelfeil](https://huggingface.co/michaelfeil) ## FAQ **1. Do I need to add the prefix "query: " and "passage: " to input texts?** Yes, this is how the model is trained, otherwise you will see performance degradation. Here are some rules of thumb: - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval. - Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval. - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering. **2. Why are my reproduced results slightly different from those reported in the model card?** Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences. **3. 
Why do the cosine similarity scores distribute around 0.7 to 1.0?** This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss. For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue. ## Citation If you find our paper or models helpful, please consider citing us as follows: ``` @article{wang2022text, title={Text Embeddings by Weakly-Supervised Contrastive Pre-training}, author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu}, journal={arXiv preprint arXiv:2212.03533}, year={2022} } ``` ## Limitations Long texts will be truncated to at most 512 tokens.
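To make this limitation concrete, here is a minimal sketch reusing the tokenizer call from the usage example above (the repeated-word string is only a stand-in for a long document); anything past the 512-token window never reaches the model:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large')

# A stand-in for a document far longer than the model's 512-token window.
long_text = 'passage: ' + 'word ' * 5000

batch = tokenizer([long_text], max_length=512, padding=True, truncation=True, return_tensors='pt')
print(batch['input_ids'].shape)  # torch.Size([1, 512]) -- tokens beyond position 512 are silently dropped
```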
alantaquito6/practicaNLP
alantaquito6
2023-09-23T02:14:29Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-23T02:10:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: practicaNLP results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: mrpc split: validation args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8235294117647058 - name: F1 type: f1 value: 0.8620689655172414 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # practicaNLP This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5822 - Accuracy: 0.8235 - F1: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5415 | 1.09 | 500 | 0.5463 | 0.8137 | 0.8681 | | 0.3854 | 2.18 | 1000 | 0.5822 | 0.8235 | 0.8621 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
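The card leaves intended usage undocumented. As a hedged sketch (not an official example from the authors), inference with the standard `transformers` text-classification pipeline on an MRPC-style sentence pair could look like the following; the example sentences are invented, and the `LABEL_0`/`LABEL_1` names assume the auto-generated label mapping was left untouched:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="alantaquito6/practicaNLP")

# MRPC is a paraphrase task over sentence pairs, so both sentences are passed together.
result = classifier({"text": "The company reported record profits this quarter.",
                     "text_pair": "Record quarterly profits were reported by the company."})
print(result)  # e.g. {'label': 'LABEL_1', 'score': 0.97} if LABEL_1 corresponds to "paraphrase"
```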
Juniplayground/Mist_LLaMA-2-7B-1024_V13_COT_Meta
Juniplayground
2023-09-23T02:13:01Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-23T02:13:00Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
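The card documents only the quantization config, so a loading sketch may help. Note that the base checkpoint is not stated anywhere on this card; `meta-llama/Llama-2-7b-hf` below is purely an assumption inferred from the adapter's name, and the quantization arguments simply mirror the config listed above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Assumed base checkpoint -- not named on the card, inferred from the adapter's name.
base_id = "meta-llama/Llama-2-7b-hf"

# Mirror the training-time config listed above: load_in_8bit=True, llm_int8_threshold=6.0.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "Juniplayground/Mist_LLaMA-2-7B-1024_V13_COT_Meta")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```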
Juniplayground/Mist_LLaMA-2-7B-1024_V12-COT_Alapaca
Juniplayground
2023-09-23T02:08:35Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-23T02:08:33Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
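Loading this adapter follows the same pattern as the sibling card above. One additional step worth sketching is merging the LoRA weights into the base model for deployment without the `peft` runtime; again, the base checkpoint below is an assumption, and merging requires loading the base in full or half precision rather than 8-bit:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumed base checkpoint -- not named on the card, inferred from the adapter's name.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "Juniplayground/Mist_LLaMA-2-7B-1024_V12-COT_Alapaca")

merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("merged-model")  # now loadable without the peft library
```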
CyberHarem/cure_butterfly_hirogaruskyprecure
CyberHarem
2023-09-23T02:02:53Z
0
1
null
[ "art", "text-to-image", "dataset:CyberHarem/cure_butterfly_hirogaruskyprecure", "license:mit", "region:us" ]
text-to-image
2023-09-04T14:10:46Z
--- license: mit datasets: - CyberHarem/cure_butterfly_hirogaruskyprecure pipeline_tag: text-to-image tags: - art --- # Lora of cure_butterfly_hirogaruskyprecure This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously: the pt file is used as an embedding, while the safetensors file is loaded as the Lora. For example, if you want to use the model from step 5720, you need to download `5720/cure_butterfly_hirogaruskyprecure.pt` as the embedding and `5720/cure_butterfly_hirogaruskyprecure.safetensors` for loading Lora. By using both files together, you can generate images of the desired character (a hedged loading sketch follows the list below). **The best step we recommend is 5720**, with a score of 0.982. The trigger words are: 1. `cure_butterfly_hirogaruskyprecure` 2. `blonde_hair, long_hair, purple_eyes, jewelry, earrings, magical_girl, smile, open_mouth` This model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios that demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
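As referenced above, here is a rough loading sketch using the `diffusers` API rather than a WebUI. Treat it as an illustration only: the local file paths are assumptions, and the embedding/LoRA files produced by HCP-Diffusion may need conversion before `diffusers` can read them.

```python
import torch
from diffusers import StableDiffusionPipeline

# The preview base model from the card; assumes the repo ships diffusers-format weights.
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16).to("cuda")

# Both files from the recommended step must be active at the same time, as the card explains:
# the pt file acts as a textual-inversion embedding, the safetensors file as the LoRA.
pipe.load_textual_inversion("5720/cure_butterfly_hirogaruskyprecure.pt",
                            token="cure_butterfly_hirogaruskyprecure")
pipe.load_lora_weights("5720", weight_name="cure_butterfly_hirogaruskyprecure.safetensors")

image = pipe("cure_butterfly_hirogaruskyprecure, blonde_hair, long_hair, smile").images[0]
image.save("preview.png")
```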
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-----------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 6600 | 0.948 | [Download](6600/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-6600](6600/previews/pattern_1.png) | ![pattern_2-6600](6600/previews/pattern_2.png) | ![pattern_3-6600](6600/previews/pattern_3.png) | ![pattern_4-6600](6600/previews/pattern_4.png) | ![pattern_5-6600](6600/previews/pattern_5.png) | ![pattern_6-6600](6600/previews/pattern_6.png) | ![pattern_7-6600](6600/previews/pattern_7.png) | ![pattern_8-6600](6600/previews/pattern_8.png) | ![pattern_9-6600](6600/previews/pattern_9.png) | ![pattern_10-6600](6600/previews/pattern_10.png) | ![pattern_11-6600](6600/previews/pattern_11.png) | ![bikini-6600](6600/previews/bikini.png) | [<NSFW, click to see>](6600/previews/bondage.png) | ![free-6600](6600/previews/free.png) | ![maid-6600](6600/previews/maid.png) | ![miko-6600](6600/previews/miko.png) | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) | ![suit-6600](6600/previews/suit.png) | ![yukata-6600](6600/previews/yukata.png) | | 6160 | 0.969 | [Download](6160/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-6160](6160/previews/pattern_1.png) | ![pattern_2-6160](6160/previews/pattern_2.png) | ![pattern_3-6160](6160/previews/pattern_3.png) | ![pattern_4-6160](6160/previews/pattern_4.png) | ![pattern_5-6160](6160/previews/pattern_5.png) | ![pattern_6-6160](6160/previews/pattern_6.png) | ![pattern_7-6160](6160/previews/pattern_7.png) | ![pattern_8-6160](6160/previews/pattern_8.png) | ![pattern_9-6160](6160/previews/pattern_9.png) | ![pattern_10-6160](6160/previews/pattern_10.png) | ![pattern_11-6160](6160/previews/pattern_11.png) | ![bikini-6160](6160/previews/bikini.png) | [<NSFW, click to see>](6160/previews/bondage.png) | ![free-6160](6160/previews/free.png) | ![maid-6160](6160/previews/maid.png) | ![miko-6160](6160/previews/miko.png) | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) | ![suit-6160](6160/previews/suit.png) | ![yukata-6160](6160/previews/yukata.png) | | **5720** | **0.982** | [**Download**](5720/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-5720](5720/previews/pattern_1.png) | ![pattern_2-5720](5720/previews/pattern_2.png) | ![pattern_3-5720](5720/previews/pattern_3.png) | 
![pattern_4-5720](5720/previews/pattern_4.png) | ![pattern_5-5720](5720/previews/pattern_5.png) | ![pattern_6-5720](5720/previews/pattern_6.png) | ![pattern_7-5720](5720/previews/pattern_7.png) | ![pattern_8-5720](5720/previews/pattern_8.png) | ![pattern_9-5720](5720/previews/pattern_9.png) | ![pattern_10-5720](5720/previews/pattern_10.png) | ![pattern_11-5720](5720/previews/pattern_11.png) | ![bikini-5720](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) | ![free-5720](5720/previews/free.png) | ![maid-5720](5720/previews/maid.png) | ![miko-5720](5720/previews/miko.png) | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) | ![suit-5720](5720/previews/suit.png) | ![yukata-5720](5720/previews/yukata.png) | | 5280 | 0.973 | [Download](5280/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-5280](5280/previews/pattern_1.png) | ![pattern_2-5280](5280/previews/pattern_2.png) | ![pattern_3-5280](5280/previews/pattern_3.png) | ![pattern_4-5280](5280/previews/pattern_4.png) | ![pattern_5-5280](5280/previews/pattern_5.png) | ![pattern_6-5280](5280/previews/pattern_6.png) | ![pattern_7-5280](5280/previews/pattern_7.png) | ![pattern_8-5280](5280/previews/pattern_8.png) | ![pattern_9-5280](5280/previews/pattern_9.png) | ![pattern_10-5280](5280/previews/pattern_10.png) | ![pattern_11-5280](5280/previews/pattern_11.png) | ![bikini-5280](5280/previews/bikini.png) | [<NSFW, click to see>](5280/previews/bondage.png) | ![free-5280](5280/previews/free.png) | ![maid-5280](5280/previews/maid.png) | ![miko-5280](5280/previews/miko.png) | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) | ![suit-5280](5280/previews/suit.png) | ![yukata-5280](5280/previews/yukata.png) | | 4840 | 0.951 | [Download](4840/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-4840](4840/previews/pattern_1.png) | ![pattern_2-4840](4840/previews/pattern_2.png) | ![pattern_3-4840](4840/previews/pattern_3.png) | ![pattern_4-4840](4840/previews/pattern_4.png) | ![pattern_5-4840](4840/previews/pattern_5.png) | ![pattern_6-4840](4840/previews/pattern_6.png) | ![pattern_7-4840](4840/previews/pattern_7.png) | ![pattern_8-4840](4840/previews/pattern_8.png) | ![pattern_9-4840](4840/previews/pattern_9.png) | ![pattern_10-4840](4840/previews/pattern_10.png) | ![pattern_11-4840](4840/previews/pattern_11.png) | ![bikini-4840](4840/previews/bikini.png) | [<NSFW, click to see>](4840/previews/bondage.png) | ![free-4840](4840/previews/free.png) | ![maid-4840](4840/previews/maid.png) | ![miko-4840](4840/previews/miko.png) | [<NSFW, click to see>](4840/previews/nude.png) | [<NSFW, click to see>](4840/previews/nude2.png) | ![suit-4840](4840/previews/suit.png) | ![yukata-4840](4840/previews/yukata.png) | | 4400 | 0.947 | [Download](4400/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-4400](4400/previews/pattern_1.png) | ![pattern_2-4400](4400/previews/pattern_2.png) | ![pattern_3-4400](4400/previews/pattern_3.png) | ![pattern_4-4400](4400/previews/pattern_4.png) | ![pattern_5-4400](4400/previews/pattern_5.png) | ![pattern_6-4400](4400/previews/pattern_6.png) | ![pattern_7-4400](4400/previews/pattern_7.png) | ![pattern_8-4400](4400/previews/pattern_8.png) | ![pattern_9-4400](4400/previews/pattern_9.png) | ![pattern_10-4400](4400/previews/pattern_10.png) | ![pattern_11-4400](4400/previews/pattern_11.png) | ![bikini-4400](4400/previews/bikini.png) | [<NSFW, click to see>](4400/previews/bondage.png) | 
![free-4400](4400/previews/free.png) | ![maid-4400](4400/previews/maid.png) | ![miko-4400](4400/previews/miko.png) | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) | ![suit-4400](4400/previews/suit.png) | ![yukata-4400](4400/previews/yukata.png) | | 3960 | 0.965 | [Download](3960/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-3960](3960/previews/pattern_1.png) | ![pattern_2-3960](3960/previews/pattern_2.png) | ![pattern_3-3960](3960/previews/pattern_3.png) | ![pattern_4-3960](3960/previews/pattern_4.png) | ![pattern_5-3960](3960/previews/pattern_5.png) | ![pattern_6-3960](3960/previews/pattern_6.png) | ![pattern_7-3960](3960/previews/pattern_7.png) | ![pattern_8-3960](3960/previews/pattern_8.png) | ![pattern_9-3960](3960/previews/pattern_9.png) | ![pattern_10-3960](3960/previews/pattern_10.png) | ![pattern_11-3960](3960/previews/pattern_11.png) | ![bikini-3960](3960/previews/bikini.png) | [<NSFW, click to see>](3960/previews/bondage.png) | ![free-3960](3960/previews/free.png) | ![maid-3960](3960/previews/maid.png) | ![miko-3960](3960/previews/miko.png) | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) | ![suit-3960](3960/previews/suit.png) | ![yukata-3960](3960/previews/yukata.png) | | 3520 | 0.928 | [Download](3520/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-3520](3520/previews/pattern_1.png) | ![pattern_2-3520](3520/previews/pattern_2.png) | ![pattern_3-3520](3520/previews/pattern_3.png) | ![pattern_4-3520](3520/previews/pattern_4.png) | ![pattern_5-3520](3520/previews/pattern_5.png) | ![pattern_6-3520](3520/previews/pattern_6.png) | ![pattern_7-3520](3520/previews/pattern_7.png) | ![pattern_8-3520](3520/previews/pattern_8.png) | ![pattern_9-3520](3520/previews/pattern_9.png) | ![pattern_10-3520](3520/previews/pattern_10.png) | ![pattern_11-3520](3520/previews/pattern_11.png) | ![bikini-3520](3520/previews/bikini.png) | [<NSFW, click to see>](3520/previews/bondage.png) | ![free-3520](3520/previews/free.png) | ![maid-3520](3520/previews/maid.png) | ![miko-3520](3520/previews/miko.png) | [<NSFW, click to see>](3520/previews/nude.png) | [<NSFW, click to see>](3520/previews/nude2.png) | ![suit-3520](3520/previews/suit.png) | ![yukata-3520](3520/previews/yukata.png) | | 3080 | 0.956 | [Download](3080/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-3080](3080/previews/pattern_1.png) | ![pattern_2-3080](3080/previews/pattern_2.png) | ![pattern_3-3080](3080/previews/pattern_3.png) | ![pattern_4-3080](3080/previews/pattern_4.png) | ![pattern_5-3080](3080/previews/pattern_5.png) | ![pattern_6-3080](3080/previews/pattern_6.png) | ![pattern_7-3080](3080/previews/pattern_7.png) | ![pattern_8-3080](3080/previews/pattern_8.png) | ![pattern_9-3080](3080/previews/pattern_9.png) | ![pattern_10-3080](3080/previews/pattern_10.png) | ![pattern_11-3080](3080/previews/pattern_11.png) | ![bikini-3080](3080/previews/bikini.png) | [<NSFW, click to see>](3080/previews/bondage.png) | ![free-3080](3080/previews/free.png) | ![maid-3080](3080/previews/maid.png) | ![miko-3080](3080/previews/miko.png) | [<NSFW, click to see>](3080/previews/nude.png) | [<NSFW, click to see>](3080/previews/nude2.png) | ![suit-3080](3080/previews/suit.png) | ![yukata-3080](3080/previews/yukata.png) | | 2640 | 0.932 | [Download](2640/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-2640](2640/previews/pattern_1.png) | ![pattern_2-2640](2640/previews/pattern_2.png) | ![pattern_3-2640](2640/previews/pattern_3.png) | 
![pattern_4-2640](2640/previews/pattern_4.png) | ![pattern_5-2640](2640/previews/pattern_5.png) | ![pattern_6-2640](2640/previews/pattern_6.png) | ![pattern_7-2640](2640/previews/pattern_7.png) | ![pattern_8-2640](2640/previews/pattern_8.png) | ![pattern_9-2640](2640/previews/pattern_9.png) | ![pattern_10-2640](2640/previews/pattern_10.png) | ![pattern_11-2640](2640/previews/pattern_11.png) | ![bikini-2640](2640/previews/bikini.png) | [<NSFW, click to see>](2640/previews/bondage.png) | ![free-2640](2640/previews/free.png) | ![maid-2640](2640/previews/maid.png) | ![miko-2640](2640/previews/miko.png) | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) | ![suit-2640](2640/previews/suit.png) | ![yukata-2640](2640/previews/yukata.png) | | 2200 | 0.838 | [Download](2200/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-2200](2200/previews/pattern_1.png) | ![pattern_2-2200](2200/previews/pattern_2.png) | ![pattern_3-2200](2200/previews/pattern_3.png) | ![pattern_4-2200](2200/previews/pattern_4.png) | ![pattern_5-2200](2200/previews/pattern_5.png) | ![pattern_6-2200](2200/previews/pattern_6.png) | ![pattern_7-2200](2200/previews/pattern_7.png) | ![pattern_8-2200](2200/previews/pattern_8.png) | ![pattern_9-2200](2200/previews/pattern_9.png) | ![pattern_10-2200](2200/previews/pattern_10.png) | ![pattern_11-2200](2200/previews/pattern_11.png) | ![bikini-2200](2200/previews/bikini.png) | [<NSFW, click to see>](2200/previews/bondage.png) | ![free-2200](2200/previews/free.png) | ![maid-2200](2200/previews/maid.png) | ![miko-2200](2200/previews/miko.png) | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) | ![suit-2200](2200/previews/suit.png) | ![yukata-2200](2200/previews/yukata.png) | | 1760 | 0.829 | [Download](1760/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-1760](1760/previews/pattern_1.png) | ![pattern_2-1760](1760/previews/pattern_2.png) | ![pattern_3-1760](1760/previews/pattern_3.png) | ![pattern_4-1760](1760/previews/pattern_4.png) | ![pattern_5-1760](1760/previews/pattern_5.png) | ![pattern_6-1760](1760/previews/pattern_6.png) | ![pattern_7-1760](1760/previews/pattern_7.png) | ![pattern_8-1760](1760/previews/pattern_8.png) | ![pattern_9-1760](1760/previews/pattern_9.png) | ![pattern_10-1760](1760/previews/pattern_10.png) | ![pattern_11-1760](1760/previews/pattern_11.png) | ![bikini-1760](1760/previews/bikini.png) | [<NSFW, click to see>](1760/previews/bondage.png) | ![free-1760](1760/previews/free.png) | ![maid-1760](1760/previews/maid.png) | ![miko-1760](1760/previews/miko.png) | [<NSFW, click to see>](1760/previews/nude.png) | [<NSFW, click to see>](1760/previews/nude2.png) | ![suit-1760](1760/previews/suit.png) | ![yukata-1760](1760/previews/yukata.png) | | 1320 | 0.792 | [Download](1320/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-1320](1320/previews/pattern_1.png) | ![pattern_2-1320](1320/previews/pattern_2.png) | ![pattern_3-1320](1320/previews/pattern_3.png) | ![pattern_4-1320](1320/previews/pattern_4.png) | ![pattern_5-1320](1320/previews/pattern_5.png) | ![pattern_6-1320](1320/previews/pattern_6.png) | ![pattern_7-1320](1320/previews/pattern_7.png) | ![pattern_8-1320](1320/previews/pattern_8.png) | ![pattern_9-1320](1320/previews/pattern_9.png) | ![pattern_10-1320](1320/previews/pattern_10.png) | ![pattern_11-1320](1320/previews/pattern_11.png) | ![bikini-1320](1320/previews/bikini.png) | [<NSFW, click to see>](1320/previews/bondage.png) | 
![free-1320](1320/previews/free.png) | ![maid-1320](1320/previews/maid.png) | ![miko-1320](1320/previews/miko.png) | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) | ![suit-1320](1320/previews/suit.png) | ![yukata-1320](1320/previews/yukata.png) | | 880 | 0.769 | [Download](880/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-880](880/previews/pattern_1.png) | ![pattern_2-880](880/previews/pattern_2.png) | ![pattern_3-880](880/previews/pattern_3.png) | ![pattern_4-880](880/previews/pattern_4.png) | ![pattern_5-880](880/previews/pattern_5.png) | ![pattern_6-880](880/previews/pattern_6.png) | ![pattern_7-880](880/previews/pattern_7.png) | ![pattern_8-880](880/previews/pattern_8.png) | ![pattern_9-880](880/previews/pattern_9.png) | ![pattern_10-880](880/previews/pattern_10.png) | ![pattern_11-880](880/previews/pattern_11.png) | ![bikini-880](880/previews/bikini.png) | [<NSFW, click to see>](880/previews/bondage.png) | ![free-880](880/previews/free.png) | ![maid-880](880/previews/maid.png) | ![miko-880](880/previews/miko.png) | [<NSFW, click to see>](880/previews/nude.png) | [<NSFW, click to see>](880/previews/nude2.png) | ![suit-880](880/previews/suit.png) | ![yukata-880](880/previews/yukata.png) | | 440 | 0.601 | [Download](440/cure_butterfly_hirogaruskyprecure.zip) | ![pattern_1-440](440/previews/pattern_1.png) | ![pattern_2-440](440/previews/pattern_2.png) | ![pattern_3-440](440/previews/pattern_3.png) | ![pattern_4-440](440/previews/pattern_4.png) | ![pattern_5-440](440/previews/pattern_5.png) | ![pattern_6-440](440/previews/pattern_6.png) | ![pattern_7-440](440/previews/pattern_7.png) | ![pattern_8-440](440/previews/pattern_8.png) | ![pattern_9-440](440/previews/pattern_9.png) | ![pattern_10-440](440/previews/pattern_10.png) | ![pattern_11-440](440/previews/pattern_11.png) | ![bikini-440](440/previews/bikini.png) | [<NSFW, click to see>](440/previews/bondage.png) | ![free-440](440/previews/free.png) | ![maid-440](440/previews/maid.png) | ![miko-440](440/previews/miko.png) | [<NSFW, click to see>](440/previews/nude.png) | [<NSFW, click to see>](440/previews/nude2.png) | ![suit-440](440/previews/suit.png) | ![yukata-440](440/previews/yukata.png) |
CyberHarem/cure_prism_hirogaruskyprecure
CyberHarem
2023-09-23T01:56:55Z
0
1
null
[ "art", "text-to-image", "dataset:CyberHarem/cure_prism_hirogaruskyprecure", "license:mit", "region:us" ]
text-to-image
2023-09-04T13:19:56Z
--- license: mit datasets: - CyberHarem/cure_prism_hirogaruskyprecure pipeline_tag: text-to-image tags: - art --- # Lora of cure_prism_hirogaruskyprecure This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 6440, download `6440/cure_prism_hirogaruskyprecure.pt` as the embedding and `6440/cure_prism_hirogaruskyprecure.safetensors` as the LoRA; using both files together, you can generate images of the desired character (a minimal download sketch follows the list below). **The best step we recommend is 6440**, with a score of 0.838. The trigger words are: 1. `cure_prism_hirogaruskyprecure` 2. `long_hair, green_eyes, pink_hair, bow, open_mouth, bangs, smile, white_bow` Use of this model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios that demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
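For reference, here is a minimal sketch of fetching both files for the recommended step with the `huggingface_hub` library; the step number and filenames follow this card, while the variable names are only illustrative:

```python
from huggingface_hub import hf_hub_download

repo_id = "CyberHarem/cure_prism_hirogaruskyprecure"
step = 6440  # recommended step from this card

# Both files must be used together: the .pt as an embedding,
# the .safetensors as a LoRA.
embedding_path = hf_hub_download(repo_id, f"{step}/cure_prism_hirogaruskyprecure.pt")
lora_path = hf_hub_download(repo_id, f"{step}/cure_prism_hirogaruskyprecure.safetensors")

print(embedding_path)
print(lora_path)
```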
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 6900 | 0.804 | [Download](6900/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-6900](6900/previews/pattern_1.png) | ![pattern_2-6900](6900/previews/pattern_2.png) | ![pattern_3-6900](6900/previews/pattern_3.png) | ![pattern_4-6900](6900/previews/pattern_4.png) | ![pattern_5-6900](6900/previews/pattern_5.png) | ![pattern_6-6900](6900/previews/pattern_6.png) | ![pattern_7-6900](6900/previews/pattern_7.png) | ![pattern_8-6900](6900/previews/pattern_8.png) | ![pattern_9-6900](6900/previews/pattern_9.png) | ![pattern_10-6900](6900/previews/pattern_10.png) | ![pattern_11-6900](6900/previews/pattern_11.png) | ![pattern_12-6900](6900/previews/pattern_12.png) | ![pattern_13-6900](6900/previews/pattern_13.png) | ![pattern_14-6900](6900/previews/pattern_14.png) | ![bikini-6900](6900/previews/bikini.png) | [<NSFW, click to see>](6900/previews/bondage.png) | ![free-6900](6900/previews/free.png) | ![maid-6900](6900/previews/maid.png) | ![miko-6900](6900/previews/miko.png) | [<NSFW, click to see>](6900/previews/nude.png) | [<NSFW, click to see>](6900/previews/nude2.png) | ![suit-6900](6900/previews/suit.png) | ![yukata-6900](6900/previews/yukata.png) | | **6440** | **0.838** | [**Download**](6440/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-6440](6440/previews/pattern_1.png) | ![pattern_2-6440](6440/previews/pattern_2.png) | ![pattern_3-6440](6440/previews/pattern_3.png) | ![pattern_4-6440](6440/previews/pattern_4.png) | ![pattern_5-6440](6440/previews/pattern_5.png) | ![pattern_6-6440](6440/previews/pattern_6.png) | ![pattern_7-6440](6440/previews/pattern_7.png) | ![pattern_8-6440](6440/previews/pattern_8.png) | ![pattern_9-6440](6440/previews/pattern_9.png) | ![pattern_10-6440](6440/previews/pattern_10.png) | ![pattern_11-6440](6440/previews/pattern_11.png) | ![pattern_12-6440](6440/previews/pattern_12.png) | ![pattern_13-6440](6440/previews/pattern_13.png) | ![pattern_14-6440](6440/previews/pattern_14.png) | ![bikini-6440](6440/previews/bikini.png) | [<NSFW, click to see>](6440/previews/bondage.png) | ![free-6440](6440/previews/free.png) | 
![maid-6440](6440/previews/maid.png) | ![miko-6440](6440/previews/miko.png) | [<NSFW, click to see>](6440/previews/nude.png) | [<NSFW, click to see>](6440/previews/nude2.png) | ![suit-6440](6440/previews/suit.png) | ![yukata-6440](6440/previews/yukata.png) | | 5980 | 0.806 | [Download](5980/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-5980](5980/previews/pattern_1.png) | ![pattern_2-5980](5980/previews/pattern_2.png) | ![pattern_3-5980](5980/previews/pattern_3.png) | ![pattern_4-5980](5980/previews/pattern_4.png) | ![pattern_5-5980](5980/previews/pattern_5.png) | ![pattern_6-5980](5980/previews/pattern_6.png) | ![pattern_7-5980](5980/previews/pattern_7.png) | ![pattern_8-5980](5980/previews/pattern_8.png) | ![pattern_9-5980](5980/previews/pattern_9.png) | ![pattern_10-5980](5980/previews/pattern_10.png) | ![pattern_11-5980](5980/previews/pattern_11.png) | ![pattern_12-5980](5980/previews/pattern_12.png) | ![pattern_13-5980](5980/previews/pattern_13.png) | ![pattern_14-5980](5980/previews/pattern_14.png) | ![bikini-5980](5980/previews/bikini.png) | [<NSFW, click to see>](5980/previews/bondage.png) | ![free-5980](5980/previews/free.png) | ![maid-5980](5980/previews/maid.png) | ![miko-5980](5980/previews/miko.png) | [<NSFW, click to see>](5980/previews/nude.png) | [<NSFW, click to see>](5980/previews/nude2.png) | ![suit-5980](5980/previews/suit.png) | ![yukata-5980](5980/previews/yukata.png) | | 5520 | 0.789 | [Download](5520/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-5520](5520/previews/pattern_1.png) | ![pattern_2-5520](5520/previews/pattern_2.png) | ![pattern_3-5520](5520/previews/pattern_3.png) | ![pattern_4-5520](5520/previews/pattern_4.png) | ![pattern_5-5520](5520/previews/pattern_5.png) | ![pattern_6-5520](5520/previews/pattern_6.png) | ![pattern_7-5520](5520/previews/pattern_7.png) | ![pattern_8-5520](5520/previews/pattern_8.png) | ![pattern_9-5520](5520/previews/pattern_9.png) | ![pattern_10-5520](5520/previews/pattern_10.png) | ![pattern_11-5520](5520/previews/pattern_11.png) | ![pattern_12-5520](5520/previews/pattern_12.png) | ![pattern_13-5520](5520/previews/pattern_13.png) | ![pattern_14-5520](5520/previews/pattern_14.png) | ![bikini-5520](5520/previews/bikini.png) | [<NSFW, click to see>](5520/previews/bondage.png) | ![free-5520](5520/previews/free.png) | ![maid-5520](5520/previews/maid.png) | ![miko-5520](5520/previews/miko.png) | [<NSFW, click to see>](5520/previews/nude.png) | [<NSFW, click to see>](5520/previews/nude2.png) | ![suit-5520](5520/previews/suit.png) | ![yukata-5520](5520/previews/yukata.png) | | 5060 | 0.835 | [Download](5060/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-5060](5060/previews/pattern_1.png) | ![pattern_2-5060](5060/previews/pattern_2.png) | ![pattern_3-5060](5060/previews/pattern_3.png) | ![pattern_4-5060](5060/previews/pattern_4.png) | ![pattern_5-5060](5060/previews/pattern_5.png) | ![pattern_6-5060](5060/previews/pattern_6.png) | ![pattern_7-5060](5060/previews/pattern_7.png) | ![pattern_8-5060](5060/previews/pattern_8.png) | ![pattern_9-5060](5060/previews/pattern_9.png) | ![pattern_10-5060](5060/previews/pattern_10.png) | ![pattern_11-5060](5060/previews/pattern_11.png) | ![pattern_12-5060](5060/previews/pattern_12.png) | ![pattern_13-5060](5060/previews/pattern_13.png) | ![pattern_14-5060](5060/previews/pattern_14.png) | ![bikini-5060](5060/previews/bikini.png) | [<NSFW, click to see>](5060/previews/bondage.png) | ![free-5060](5060/previews/free.png) | ![maid-5060](5060/previews/maid.png) | ![miko-5060](5060/previews/miko.png) | 
[<NSFW, click to see>](5060/previews/nude.png) | [<NSFW, click to see>](5060/previews/nude2.png) | ![suit-5060](5060/previews/suit.png) | ![yukata-5060](5060/previews/yukata.png) | | 4600 | 0.820 | [Download](4600/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-4600](4600/previews/pattern_1.png) | ![pattern_2-4600](4600/previews/pattern_2.png) | ![pattern_3-4600](4600/previews/pattern_3.png) | ![pattern_4-4600](4600/previews/pattern_4.png) | ![pattern_5-4600](4600/previews/pattern_5.png) | ![pattern_6-4600](4600/previews/pattern_6.png) | ![pattern_7-4600](4600/previews/pattern_7.png) | ![pattern_8-4600](4600/previews/pattern_8.png) | ![pattern_9-4600](4600/previews/pattern_9.png) | ![pattern_10-4600](4600/previews/pattern_10.png) | ![pattern_11-4600](4600/previews/pattern_11.png) | ![pattern_12-4600](4600/previews/pattern_12.png) | ![pattern_13-4600](4600/previews/pattern_13.png) | ![pattern_14-4600](4600/previews/pattern_14.png) | ![bikini-4600](4600/previews/bikini.png) | [<NSFW, click to see>](4600/previews/bondage.png) | ![free-4600](4600/previews/free.png) | ![maid-4600](4600/previews/maid.png) | ![miko-4600](4600/previews/miko.png) | [<NSFW, click to see>](4600/previews/nude.png) | [<NSFW, click to see>](4600/previews/nude2.png) | ![suit-4600](4600/previews/suit.png) | ![yukata-4600](4600/previews/yukata.png) | | 4140 | 0.813 | [Download](4140/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-4140](4140/previews/pattern_1.png) | ![pattern_2-4140](4140/previews/pattern_2.png) | ![pattern_3-4140](4140/previews/pattern_3.png) | ![pattern_4-4140](4140/previews/pattern_4.png) | ![pattern_5-4140](4140/previews/pattern_5.png) | ![pattern_6-4140](4140/previews/pattern_6.png) | ![pattern_7-4140](4140/previews/pattern_7.png) | ![pattern_8-4140](4140/previews/pattern_8.png) | ![pattern_9-4140](4140/previews/pattern_9.png) | ![pattern_10-4140](4140/previews/pattern_10.png) | ![pattern_11-4140](4140/previews/pattern_11.png) | ![pattern_12-4140](4140/previews/pattern_12.png) | ![pattern_13-4140](4140/previews/pattern_13.png) | ![pattern_14-4140](4140/previews/pattern_14.png) | ![bikini-4140](4140/previews/bikini.png) | [<NSFW, click to see>](4140/previews/bondage.png) | ![free-4140](4140/previews/free.png) | ![maid-4140](4140/previews/maid.png) | ![miko-4140](4140/previews/miko.png) | [<NSFW, click to see>](4140/previews/nude.png) | [<NSFW, click to see>](4140/previews/nude2.png) | ![suit-4140](4140/previews/suit.png) | ![yukata-4140](4140/previews/yukata.png) | | 3680 | 0.776 | [Download](3680/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-3680](3680/previews/pattern_1.png) | ![pattern_2-3680](3680/previews/pattern_2.png) | ![pattern_3-3680](3680/previews/pattern_3.png) | ![pattern_4-3680](3680/previews/pattern_4.png) | ![pattern_5-3680](3680/previews/pattern_5.png) | ![pattern_6-3680](3680/previews/pattern_6.png) | ![pattern_7-3680](3680/previews/pattern_7.png) | ![pattern_8-3680](3680/previews/pattern_8.png) | ![pattern_9-3680](3680/previews/pattern_9.png) | ![pattern_10-3680](3680/previews/pattern_10.png) | ![pattern_11-3680](3680/previews/pattern_11.png) | ![pattern_12-3680](3680/previews/pattern_12.png) | ![pattern_13-3680](3680/previews/pattern_13.png) | ![pattern_14-3680](3680/previews/pattern_14.png) | ![bikini-3680](3680/previews/bikini.png) | [<NSFW, click to see>](3680/previews/bondage.png) | ![free-3680](3680/previews/free.png) | ![maid-3680](3680/previews/maid.png) | ![miko-3680](3680/previews/miko.png) | [<NSFW, click to see>](3680/previews/nude.png) | [<NSFW, click to 
see>](3680/previews/nude2.png) | ![suit-3680](3680/previews/suit.png) | ![yukata-3680](3680/previews/yukata.png) | | 3220 | 0.689 | [Download](3220/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-3220](3220/previews/pattern_1.png) | ![pattern_2-3220](3220/previews/pattern_2.png) | ![pattern_3-3220](3220/previews/pattern_3.png) | ![pattern_4-3220](3220/previews/pattern_4.png) | ![pattern_5-3220](3220/previews/pattern_5.png) | ![pattern_6-3220](3220/previews/pattern_6.png) | ![pattern_7-3220](3220/previews/pattern_7.png) | ![pattern_8-3220](3220/previews/pattern_8.png) | ![pattern_9-3220](3220/previews/pattern_9.png) | ![pattern_10-3220](3220/previews/pattern_10.png) | ![pattern_11-3220](3220/previews/pattern_11.png) | ![pattern_12-3220](3220/previews/pattern_12.png) | ![pattern_13-3220](3220/previews/pattern_13.png) | ![pattern_14-3220](3220/previews/pattern_14.png) | ![bikini-3220](3220/previews/bikini.png) | [<NSFW, click to see>](3220/previews/bondage.png) | ![free-3220](3220/previews/free.png) | ![maid-3220](3220/previews/maid.png) | ![miko-3220](3220/previews/miko.png) | [<NSFW, click to see>](3220/previews/nude.png) | [<NSFW, click to see>](3220/previews/nude2.png) | ![suit-3220](3220/previews/suit.png) | ![yukata-3220](3220/previews/yukata.png) | | 2760 | 0.719 | [Download](2760/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-2760](2760/previews/pattern_1.png) | ![pattern_2-2760](2760/previews/pattern_2.png) | ![pattern_3-2760](2760/previews/pattern_3.png) | ![pattern_4-2760](2760/previews/pattern_4.png) | ![pattern_5-2760](2760/previews/pattern_5.png) | ![pattern_6-2760](2760/previews/pattern_6.png) | ![pattern_7-2760](2760/previews/pattern_7.png) | ![pattern_8-2760](2760/previews/pattern_8.png) | ![pattern_9-2760](2760/previews/pattern_9.png) | ![pattern_10-2760](2760/previews/pattern_10.png) | ![pattern_11-2760](2760/previews/pattern_11.png) | ![pattern_12-2760](2760/previews/pattern_12.png) | ![pattern_13-2760](2760/previews/pattern_13.png) | ![pattern_14-2760](2760/previews/pattern_14.png) | ![bikini-2760](2760/previews/bikini.png) | [<NSFW, click to see>](2760/previews/bondage.png) | ![free-2760](2760/previews/free.png) | ![maid-2760](2760/previews/maid.png) | ![miko-2760](2760/previews/miko.png) | [<NSFW, click to see>](2760/previews/nude.png) | [<NSFW, click to see>](2760/previews/nude2.png) | ![suit-2760](2760/previews/suit.png) | ![yukata-2760](2760/previews/yukata.png) | | 2300 | 0.745 | [Download](2300/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-2300](2300/previews/pattern_1.png) | ![pattern_2-2300](2300/previews/pattern_2.png) | ![pattern_3-2300](2300/previews/pattern_3.png) | ![pattern_4-2300](2300/previews/pattern_4.png) | ![pattern_5-2300](2300/previews/pattern_5.png) | ![pattern_6-2300](2300/previews/pattern_6.png) | ![pattern_7-2300](2300/previews/pattern_7.png) | ![pattern_8-2300](2300/previews/pattern_8.png) | ![pattern_9-2300](2300/previews/pattern_9.png) | ![pattern_10-2300](2300/previews/pattern_10.png) | ![pattern_11-2300](2300/previews/pattern_11.png) | ![pattern_12-2300](2300/previews/pattern_12.png) | ![pattern_13-2300](2300/previews/pattern_13.png) | ![pattern_14-2300](2300/previews/pattern_14.png) | ![bikini-2300](2300/previews/bikini.png) | [<NSFW, click to see>](2300/previews/bondage.png) | ![free-2300](2300/previews/free.png) | ![maid-2300](2300/previews/maid.png) | ![miko-2300](2300/previews/miko.png) | [<NSFW, click to see>](2300/previews/nude.png) | [<NSFW, click to see>](2300/previews/nude2.png) | ![suit-2300](2300/previews/suit.png) | 
![yukata-2300](2300/previews/yukata.png) | | 1840 | 0.745 | [Download](1840/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-1840](1840/previews/pattern_1.png) | ![pattern_2-1840](1840/previews/pattern_2.png) | ![pattern_3-1840](1840/previews/pattern_3.png) | ![pattern_4-1840](1840/previews/pattern_4.png) | ![pattern_5-1840](1840/previews/pattern_5.png) | ![pattern_6-1840](1840/previews/pattern_6.png) | ![pattern_7-1840](1840/previews/pattern_7.png) | ![pattern_8-1840](1840/previews/pattern_8.png) | ![pattern_9-1840](1840/previews/pattern_9.png) | ![pattern_10-1840](1840/previews/pattern_10.png) | ![pattern_11-1840](1840/previews/pattern_11.png) | ![pattern_12-1840](1840/previews/pattern_12.png) | ![pattern_13-1840](1840/previews/pattern_13.png) | ![pattern_14-1840](1840/previews/pattern_14.png) | ![bikini-1840](1840/previews/bikini.png) | [<NSFW, click to see>](1840/previews/bondage.png) | ![free-1840](1840/previews/free.png) | ![maid-1840](1840/previews/maid.png) | ![miko-1840](1840/previews/miko.png) | [<NSFW, click to see>](1840/previews/nude.png) | [<NSFW, click to see>](1840/previews/nude2.png) | ![suit-1840](1840/previews/suit.png) | ![yukata-1840](1840/previews/yukata.png) | | 1380 | 0.751 | [Download](1380/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-1380](1380/previews/pattern_1.png) | ![pattern_2-1380](1380/previews/pattern_2.png) | ![pattern_3-1380](1380/previews/pattern_3.png) | ![pattern_4-1380](1380/previews/pattern_4.png) | ![pattern_5-1380](1380/previews/pattern_5.png) | ![pattern_6-1380](1380/previews/pattern_6.png) | ![pattern_7-1380](1380/previews/pattern_7.png) | ![pattern_8-1380](1380/previews/pattern_8.png) | ![pattern_9-1380](1380/previews/pattern_9.png) | ![pattern_10-1380](1380/previews/pattern_10.png) | ![pattern_11-1380](1380/previews/pattern_11.png) | ![pattern_12-1380](1380/previews/pattern_12.png) | ![pattern_13-1380](1380/previews/pattern_13.png) | ![pattern_14-1380](1380/previews/pattern_14.png) | ![bikini-1380](1380/previews/bikini.png) | [<NSFW, click to see>](1380/previews/bondage.png) | ![free-1380](1380/previews/free.png) | ![maid-1380](1380/previews/maid.png) | ![miko-1380](1380/previews/miko.png) | [<NSFW, click to see>](1380/previews/nude.png) | [<NSFW, click to see>](1380/previews/nude2.png) | ![suit-1380](1380/previews/suit.png) | ![yukata-1380](1380/previews/yukata.png) | | 920 | 0.679 | [Download](920/cure_prism_hirogaruskyprecure.zip) | ![pattern_1-920](920/previews/pattern_1.png) | ![pattern_2-920](920/previews/pattern_2.png) | ![pattern_3-920](920/previews/pattern_3.png) | ![pattern_4-920](920/previews/pattern_4.png) | ![pattern_5-920](920/previews/pattern_5.png) | ![pattern_6-920](920/previews/pattern_6.png) | ![pattern_7-920](920/previews/pattern_7.png) | ![pattern_8-920](920/previews/pattern_8.png) | ![pattern_9-920](920/previews/pattern_9.png) | ![pattern_10-920](920/previews/pattern_10.png) | ![pattern_11-920](920/previews/pattern_11.png) | ![pattern_12-920](920/previews/pattern_12.png) | ![pattern_13-920](920/previews/pattern_13.png) | ![pattern_14-920](920/previews/pattern_14.png) | ![bikini-920](920/previews/bikini.png) | [<NSFW, click to see>](920/previews/bondage.png) | ![free-920](920/previews/free.png) | ![maid-920](920/previews/maid.png) | ![miko-920](920/previews/miko.png) | [<NSFW, click to see>](920/previews/nude.png) | [<NSFW, click to see>](920/previews/nude2.png) | ![suit-920](920/previews/suit.png) | ![yukata-920](920/previews/yukata.png) | | 460 | 0.735 | [Download](460/cure_prism_hirogaruskyprecure.zip) | 
![pattern_1-460](460/previews/pattern_1.png) | ![pattern_2-460](460/previews/pattern_2.png) | ![pattern_3-460](460/previews/pattern_3.png) | ![pattern_4-460](460/previews/pattern_4.png) | ![pattern_5-460](460/previews/pattern_5.png) | ![pattern_6-460](460/previews/pattern_6.png) | ![pattern_7-460](460/previews/pattern_7.png) | ![pattern_8-460](460/previews/pattern_8.png) | ![pattern_9-460](460/previews/pattern_9.png) | ![pattern_10-460](460/previews/pattern_10.png) | ![pattern_11-460](460/previews/pattern_11.png) | ![pattern_12-460](460/previews/pattern_12.png) | ![pattern_13-460](460/previews/pattern_13.png) | ![pattern_14-460](460/previews/pattern_14.png) | ![bikini-460](460/previews/bikini.png) | [<NSFW, click to see>](460/previews/bondage.png) | ![free-460](460/previews/free.png) | ![maid-460](460/previews/maid.png) | ![miko-460](460/previews/miko.png) | [<NSFW, click to see>](460/previews/nude.png) | [<NSFW, click to see>](460/previews/nude2.png) | ![suit-460](460/previews/suit.png) | ![yukata-460](460/previews/yukata.png) |
CyberHarem/shinohara_rei_idolmastercinderellagirls
CyberHarem
2023-09-23T01:43:05Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/shinohara_rei_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-23T01:32:58Z
--- license: mit datasets: - CyberHarem/shinohara_rei_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of shinohara_rei_idolmastercinderellagirls This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 4080, download `4080/shinohara_rei_idolmastercinderellagirls.pt` as the embedding and `4080/shinohara_rei_idolmastercinderellagirls.safetensors` as the LoRA; using both files together, you can generate images of the desired character (see the sketch after the list below). **The best step we recommend is 4080**, with a score of 1.000. The trigger words are: 1. `shinohara_rei_idolmastercinderellagirls` 2. `brown_hair, green_eyes, short_hair, jewelry, earrings, breasts, cleavage, large_breasts, necklace, smile, blush` Use of this model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios that demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
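The per-step zip in the table below can be fetched the same way; here is a minimal sketch with `huggingface_hub` (what exactly the zip bundles is an assumption here, so the code only lists its contents):

```python
import zipfile

from huggingface_hub import hf_hub_download

# Path follows the table below (recommended step 4080).
zip_path = hf_hub_download(
    "CyberHarem/shinohara_rei_idolmastercinderellagirls",
    "4080/shinohara_rei_idolmastercinderellagirls.zip",
)
with zipfile.ZipFile(zip_path) as zf:
    print(zf.namelist()[:10])  # peek at the bundled files
```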
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.999 | [Download](5100/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | 4760 | 0.999 | [Download](4760/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.999 | [Download](4420/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | **4080** | **1.000** | [**Download**](4080/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.991 | [Download](3740/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | 
![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.971 | [Download](3400/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.983 | [Download](3060/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.980 | [Download](2720/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.952 | [Download](2380/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.880 | [Download](2040/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.951 | [Download](1700/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.912 | 
[Download](1360/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.955 | [Download](1020/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.916 | [Download](680/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.658 | [Download](340/shinohara_rei_idolmastercinderellagirls.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
CyberHarem/chihaya_anon_bangdreamitsmygo
CyberHarem
2023-09-23T01:40:16Z
0
1
null
[ "art", "text-to-image", "dataset:CyberHarem/chihaya_anon_bangdreamitsmygo", "license:mit", "region:us" ]
text-to-image
2023-09-02T19:39:44Z
--- license: mit datasets: - CyberHarem/chihaya_anon_bangdreamitsmygo pipeline_tag: text-to-image tags: - art --- # Lora of chihaya_anon_bangdreamitsmygo This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 7200, download `7200/chihaya_anon_bangdreamitsmygo.pt` as the embedding and `7200/chihaya_anon_bangdreamitsmygo.safetensors` as the LoRA; using both files together, you can generate images of the desired character (a file-inspection sketch follows the list below). **The best step we recommend is 7200**, with a score of 0.977. The trigger words are: 1. `chihaya_anon_bangdreamitsmygo` 2. `long_hair, pink_hair, bangs, blush, necktie, grey_eyes, blazer, open_mouth, upper_body, indoors` Use of this model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals facing application scenarios that demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are uncomfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
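To sanity-check a downloaded LoRA file, here is a minimal sketch with the `safetensors` library; the local path assumes you downloaded step 7200's files as described above:

```python
from safetensors.torch import load_file

# Hypothetical local path after downloading step 7200's LoRA file.
state = load_file("7200/chihaya_anon_bangdreamitsmygo.safetensors")
print(len(state), "tensors")
for name in list(state)[:5]:
    print(name, tuple(state[name].shape))  # inspect LoRA weight names/shapes
```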
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | **7200** | **0.977** | [**Download**](7200/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-7200](7200/previews/pattern_1.png) | ![pattern_2-7200](7200/previews/pattern_2.png) | ![pattern_3-7200](7200/previews/pattern_3.png) | ![pattern_4-7200](7200/previews/pattern_4.png) | ![pattern_5-7200](7200/previews/pattern_5.png) | ![pattern_6-7200](7200/previews/pattern_6.png) | ![pattern_7-7200](7200/previews/pattern_7.png) | ![pattern_8-7200](7200/previews/pattern_8.png) | ![pattern_9-7200](7200/previews/pattern_9.png) | ![pattern_10-7200](7200/previews/pattern_10.png) | ![pattern_11-7200](7200/previews/pattern_11.png) | ![bikini-7200](7200/previews/bikini.png) | [<NSFW, click to see>](7200/previews/bondage.png) | ![free-7200](7200/previews/free.png) | ![maid-7200](7200/previews/maid.png) | ![miko-7200](7200/previews/miko.png) | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) | ![suit-7200](7200/previews/suit.png) | ![yukata-7200](7200/previews/yukata.png) | | 6720 | 0.973 | [Download](6720/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-6720](6720/previews/pattern_1.png) | ![pattern_2-6720](6720/previews/pattern_2.png) | ![pattern_3-6720](6720/previews/pattern_3.png) | ![pattern_4-6720](6720/previews/pattern_4.png) | ![pattern_5-6720](6720/previews/pattern_5.png) | ![pattern_6-6720](6720/previews/pattern_6.png) | ![pattern_7-6720](6720/previews/pattern_7.png) | ![pattern_8-6720](6720/previews/pattern_8.png) | ![pattern_9-6720](6720/previews/pattern_9.png) | ![pattern_10-6720](6720/previews/pattern_10.png) | ![pattern_11-6720](6720/previews/pattern_11.png) | ![bikini-6720](6720/previews/bikini.png) | [<NSFW, click to see>](6720/previews/bondage.png) | ![free-6720](6720/previews/free.png) | ![maid-6720](6720/previews/maid.png) | ![miko-6720](6720/previews/miko.png) | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) | ![suit-6720](6720/previews/suit.png) | ![yukata-6720](6720/previews/yukata.png) | | 6240 | 0.976 | [Download](6240/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-6240](6240/previews/pattern_1.png) | ![pattern_2-6240](6240/previews/pattern_2.png) | ![pattern_3-6240](6240/previews/pattern_3.png) | 
![pattern_4-6240](6240/previews/pattern_4.png) | ![pattern_5-6240](6240/previews/pattern_5.png) | ![pattern_6-6240](6240/previews/pattern_6.png) | ![pattern_7-6240](6240/previews/pattern_7.png) | ![pattern_8-6240](6240/previews/pattern_8.png) | ![pattern_9-6240](6240/previews/pattern_9.png) | ![pattern_10-6240](6240/previews/pattern_10.png) | ![pattern_11-6240](6240/previews/pattern_11.png) | ![bikini-6240](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) | ![free-6240](6240/previews/free.png) | ![maid-6240](6240/previews/maid.png) | ![miko-6240](6240/previews/miko.png) | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) | ![suit-6240](6240/previews/suit.png) | ![yukata-6240](6240/previews/yukata.png) | | 5760 | 0.974 | [Download](5760/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-5760](5760/previews/pattern_1.png) | ![pattern_2-5760](5760/previews/pattern_2.png) | ![pattern_3-5760](5760/previews/pattern_3.png) | ![pattern_4-5760](5760/previews/pattern_4.png) | ![pattern_5-5760](5760/previews/pattern_5.png) | ![pattern_6-5760](5760/previews/pattern_6.png) | ![pattern_7-5760](5760/previews/pattern_7.png) | ![pattern_8-5760](5760/previews/pattern_8.png) | ![pattern_9-5760](5760/previews/pattern_9.png) | ![pattern_10-5760](5760/previews/pattern_10.png) | ![pattern_11-5760](5760/previews/pattern_11.png) | ![bikini-5760](5760/previews/bikini.png) | [<NSFW, click to see>](5760/previews/bondage.png) | ![free-5760](5760/previews/free.png) | ![maid-5760](5760/previews/maid.png) | ![miko-5760](5760/previews/miko.png) | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) | ![suit-5760](5760/previews/suit.png) | ![yukata-5760](5760/previews/yukata.png) | | 5280 | 0.928 | [Download](5280/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-5280](5280/previews/pattern_1.png) | ![pattern_2-5280](5280/previews/pattern_2.png) | ![pattern_3-5280](5280/previews/pattern_3.png) | ![pattern_4-5280](5280/previews/pattern_4.png) | ![pattern_5-5280](5280/previews/pattern_5.png) | ![pattern_6-5280](5280/previews/pattern_6.png) | ![pattern_7-5280](5280/previews/pattern_7.png) | ![pattern_8-5280](5280/previews/pattern_8.png) | ![pattern_9-5280](5280/previews/pattern_9.png) | ![pattern_10-5280](5280/previews/pattern_10.png) | ![pattern_11-5280](5280/previews/pattern_11.png) | ![bikini-5280](5280/previews/bikini.png) | [<NSFW, click to see>](5280/previews/bondage.png) | ![free-5280](5280/previews/free.png) | ![maid-5280](5280/previews/maid.png) | ![miko-5280](5280/previews/miko.png) | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) | ![suit-5280](5280/previews/suit.png) | ![yukata-5280](5280/previews/yukata.png) | | 4800 | 0.975 | [Download](4800/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-4800](4800/previews/pattern_1.png) | ![pattern_2-4800](4800/previews/pattern_2.png) | ![pattern_3-4800](4800/previews/pattern_3.png) | ![pattern_4-4800](4800/previews/pattern_4.png) | ![pattern_5-4800](4800/previews/pattern_5.png) | ![pattern_6-4800](4800/previews/pattern_6.png) | ![pattern_7-4800](4800/previews/pattern_7.png) | ![pattern_8-4800](4800/previews/pattern_8.png) | ![pattern_9-4800](4800/previews/pattern_9.png) | ![pattern_10-4800](4800/previews/pattern_10.png) | ![pattern_11-4800](4800/previews/pattern_11.png) | ![bikini-4800](4800/previews/bikini.png) | [<NSFW, click to see>](4800/previews/bondage.png) | ![free-4800](4800/previews/free.png) | 
![maid-4800](4800/previews/maid.png) | ![miko-4800](4800/previews/miko.png) | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) | ![suit-4800](4800/previews/suit.png) | ![yukata-4800](4800/previews/yukata.png) | | 4320 | 0.927 | [Download](4320/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-4320](4320/previews/pattern_1.png) | ![pattern_2-4320](4320/previews/pattern_2.png) | ![pattern_3-4320](4320/previews/pattern_3.png) | ![pattern_4-4320](4320/previews/pattern_4.png) | ![pattern_5-4320](4320/previews/pattern_5.png) | ![pattern_6-4320](4320/previews/pattern_6.png) | ![pattern_7-4320](4320/previews/pattern_7.png) | ![pattern_8-4320](4320/previews/pattern_8.png) | ![pattern_9-4320](4320/previews/pattern_9.png) | ![pattern_10-4320](4320/previews/pattern_10.png) | ![pattern_11-4320](4320/previews/pattern_11.png) | ![bikini-4320](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) | ![free-4320](4320/previews/free.png) | ![maid-4320](4320/previews/maid.png) | ![miko-4320](4320/previews/miko.png) | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) | ![suit-4320](4320/previews/suit.png) | ![yukata-4320](4320/previews/yukata.png) | | 3840 | 0.973 | [Download](3840/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-3840](3840/previews/pattern_1.png) | ![pattern_2-3840](3840/previews/pattern_2.png) | ![pattern_3-3840](3840/previews/pattern_3.png) | ![pattern_4-3840](3840/previews/pattern_4.png) | ![pattern_5-3840](3840/previews/pattern_5.png) | ![pattern_6-3840](3840/previews/pattern_6.png) | ![pattern_7-3840](3840/previews/pattern_7.png) | ![pattern_8-3840](3840/previews/pattern_8.png) | ![pattern_9-3840](3840/previews/pattern_9.png) | ![pattern_10-3840](3840/previews/pattern_10.png) | ![pattern_11-3840](3840/previews/pattern_11.png) | ![bikini-3840](3840/previews/bikini.png) | [<NSFW, click to see>](3840/previews/bondage.png) | ![free-3840](3840/previews/free.png) | ![maid-3840](3840/previews/maid.png) | ![miko-3840](3840/previews/miko.png) | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) | ![suit-3840](3840/previews/suit.png) | ![yukata-3840](3840/previews/yukata.png) | | 3360 | 0.975 | [Download](3360/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-3360](3360/previews/pattern_1.png) | ![pattern_2-3360](3360/previews/pattern_2.png) | ![pattern_3-3360](3360/previews/pattern_3.png) | ![pattern_4-3360](3360/previews/pattern_4.png) | ![pattern_5-3360](3360/previews/pattern_5.png) | ![pattern_6-3360](3360/previews/pattern_6.png) | ![pattern_7-3360](3360/previews/pattern_7.png) | ![pattern_8-3360](3360/previews/pattern_8.png) | ![pattern_9-3360](3360/previews/pattern_9.png) | ![pattern_10-3360](3360/previews/pattern_10.png) | ![pattern_11-3360](3360/previews/pattern_11.png) | ![bikini-3360](3360/previews/bikini.png) | [<NSFW, click to see>](3360/previews/bondage.png) | ![free-3360](3360/previews/free.png) | ![maid-3360](3360/previews/maid.png) | ![miko-3360](3360/previews/miko.png) | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) | ![suit-3360](3360/previews/suit.png) | ![yukata-3360](3360/previews/yukata.png) | | 2880 | 0.879 | [Download](2880/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-2880](2880/previews/pattern_1.png) | ![pattern_2-2880](2880/previews/pattern_2.png) | ![pattern_3-2880](2880/previews/pattern_3.png) | ![pattern_4-2880](2880/previews/pattern_4.png) | 
![pattern_5-2880](2880/previews/pattern_5.png) | ![pattern_6-2880](2880/previews/pattern_6.png) | ![pattern_7-2880](2880/previews/pattern_7.png) | ![pattern_8-2880](2880/previews/pattern_8.png) | ![pattern_9-2880](2880/previews/pattern_9.png) | ![pattern_10-2880](2880/previews/pattern_10.png) | ![pattern_11-2880](2880/previews/pattern_11.png) | ![bikini-2880](2880/previews/bikini.png) | [<NSFW, click to see>](2880/previews/bondage.png) | ![free-2880](2880/previews/free.png) | ![maid-2880](2880/previews/maid.png) | ![miko-2880](2880/previews/miko.png) | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) | ![suit-2880](2880/previews/suit.png) | ![yukata-2880](2880/previews/yukata.png) | | 2400 | 0.972 | [Download](2400/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-2400](2400/previews/pattern_1.png) | ![pattern_2-2400](2400/previews/pattern_2.png) | ![pattern_3-2400](2400/previews/pattern_3.png) | ![pattern_4-2400](2400/previews/pattern_4.png) | ![pattern_5-2400](2400/previews/pattern_5.png) | ![pattern_6-2400](2400/previews/pattern_6.png) | ![pattern_7-2400](2400/previews/pattern_7.png) | ![pattern_8-2400](2400/previews/pattern_8.png) | ![pattern_9-2400](2400/previews/pattern_9.png) | ![pattern_10-2400](2400/previews/pattern_10.png) | ![pattern_11-2400](2400/previews/pattern_11.png) | ![bikini-2400](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) | ![free-2400](2400/previews/free.png) | ![maid-2400](2400/previews/maid.png) | ![miko-2400](2400/previews/miko.png) | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) | ![suit-2400](2400/previews/suit.png) | ![yukata-2400](2400/previews/yukata.png) | | 1920 | 0.968 | [Download](1920/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-1920](1920/previews/pattern_1.png) | ![pattern_2-1920](1920/previews/pattern_2.png) | ![pattern_3-1920](1920/previews/pattern_3.png) | ![pattern_4-1920](1920/previews/pattern_4.png) | ![pattern_5-1920](1920/previews/pattern_5.png) | ![pattern_6-1920](1920/previews/pattern_6.png) | ![pattern_7-1920](1920/previews/pattern_7.png) | ![pattern_8-1920](1920/previews/pattern_8.png) | ![pattern_9-1920](1920/previews/pattern_9.png) | ![pattern_10-1920](1920/previews/pattern_10.png) | ![pattern_11-1920](1920/previews/pattern_11.png) | ![bikini-1920](1920/previews/bikini.png) | [<NSFW, click to see>](1920/previews/bondage.png) | ![free-1920](1920/previews/free.png) | ![maid-1920](1920/previews/maid.png) | ![miko-1920](1920/previews/miko.png) | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) | ![suit-1920](1920/previews/suit.png) | ![yukata-1920](1920/previews/yukata.png) | | 1440 | 0.966 | [Download](1440/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-1440](1440/previews/pattern_1.png) | ![pattern_2-1440](1440/previews/pattern_2.png) | ![pattern_3-1440](1440/previews/pattern_3.png) | ![pattern_4-1440](1440/previews/pattern_4.png) | ![pattern_5-1440](1440/previews/pattern_5.png) | ![pattern_6-1440](1440/previews/pattern_6.png) | ![pattern_7-1440](1440/previews/pattern_7.png) | ![pattern_8-1440](1440/previews/pattern_8.png) | ![pattern_9-1440](1440/previews/pattern_9.png) | ![pattern_10-1440](1440/previews/pattern_10.png) | ![pattern_11-1440](1440/previews/pattern_11.png) | ![bikini-1440](1440/previews/bikini.png) | [<NSFW, click to see>](1440/previews/bondage.png) | ![free-1440](1440/previews/free.png) | ![maid-1440](1440/previews/maid.png) | 
![miko-1440](1440/previews/miko.png) | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) | ![suit-1440](1440/previews/suit.png) | ![yukata-1440](1440/previews/yukata.png) | | 960 | 0.963 | [Download](960/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-960](960/previews/pattern_1.png) | ![pattern_2-960](960/previews/pattern_2.png) | ![pattern_3-960](960/previews/pattern_3.png) | ![pattern_4-960](960/previews/pattern_4.png) | ![pattern_5-960](960/previews/pattern_5.png) | ![pattern_6-960](960/previews/pattern_6.png) | ![pattern_7-960](960/previews/pattern_7.png) | ![pattern_8-960](960/previews/pattern_8.png) | ![pattern_9-960](960/previews/pattern_9.png) | ![pattern_10-960](960/previews/pattern_10.png) | ![pattern_11-960](960/previews/pattern_11.png) | ![bikini-960](960/previews/bikini.png) | [<NSFW, click to see>](960/previews/bondage.png) | ![free-960](960/previews/free.png) | ![maid-960](960/previews/maid.png) | ![miko-960](960/previews/miko.png) | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) | ![suit-960](960/previews/suit.png) | ![yukata-960](960/previews/yukata.png) | | 480 | 0.944 | [Download](480/chihaya_anon_bangdreamitsmygo.zip) | ![pattern_1-480](480/previews/pattern_1.png) | ![pattern_2-480](480/previews/pattern_2.png) | ![pattern_3-480](480/previews/pattern_3.png) | ![pattern_4-480](480/previews/pattern_4.png) | ![pattern_5-480](480/previews/pattern_5.png) | ![pattern_6-480](480/previews/pattern_6.png) | ![pattern_7-480](480/previews/pattern_7.png) | ![pattern_8-480](480/previews/pattern_8.png) | ![pattern_9-480](480/previews/pattern_9.png) | ![pattern_10-480](480/previews/pattern_10.png) | ![pattern_11-480](480/previews/pattern_11.png) | ![bikini-480](480/previews/bikini.png) | [<NSFW, click to see>](480/previews/bondage.png) | ![free-480](480/previews/free.png) | ![maid-480](480/previews/maid.png) | ![miko-480](480/previews/miko.png) | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) | ![suit-480](480/previews/suit.png) | ![yukata-480](480/previews/yukata.png) |
OpenDILabCommunity/Walker2d-v3-SAC
OpenDILabCommunity
2023-09-23T01:32:43Z
0
0
pytorch
[ "pytorch", "deep-reinforcement-learning", "reinforcement-learning", "DI-engine", "Walker2d-v3", "en", "license:apache-2.0", "region:us" ]
reinforcement-learning
2023-04-18T15:41:43Z
--- language: en license: apache-2.0 library_name: pytorch tags: - deep-reinforcement-learning - reinforcement-learning - DI-engine - Walker2d-v3 benchmark_name: OpenAI/Gym/MuJoCo task_name: Walker2d-v3 pipeline_tag: reinforcement-learning model-index: - name: SAC results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: OpenAI/Gym/MuJoCo-Walker2d-v3 type: OpenAI/Gym/MuJoCo-Walker2d-v3 metrics: - type: mean_reward value: 5296.43 +/- 19.68 name: mean_reward --- # Play **Walker2d-v3** with **SAC** Policy ## Model Description <!-- Provide a longer summary of what this model is. --> This is a simple **SAC** implementation for OpenAI/Gym/MuJoCo **Walker2d-v3** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo). **DI-engine** is a Python library for solving general decision intelligence problems, based on reinforcement learning framework implementations in PyTorch or JAX. The library aims to standardize the reinforcement learning framework across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, self-customized training pipelines and applications are supported by reusing the different abstraction levels of the DI-engine reinforcement learning framework. ## Model Usage ### Install the Dependencies <details close> <summary>(Click for Details)</summary> ```shell # install huggingface_ding git clone https://github.com/opendilab/huggingface_ding.git pip3 install -e ./huggingface_ding/ # install environment dependencies if needed sudo apt update -y && sudo apt install -y build-essential libgl1-mesa-dev libgl1-mesa-glx libglew-dev libosmesa6-dev libglfw3 libglfw3-dev libsdl2-dev libsdl2-image-dev libglm-dev libfreetype6-dev patchelf mkdir -p ~/.mujoco wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz tar -xf mujoco.tar.gz -C ~/.mujoco echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin pip3 install "cython<3" pip3 install DI-engine[common_env] ``` </details> ### Git Clone from Huggingface and Run the Model <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import SACAgent from ding.config import Config from easydict import EasyDict import torch # Pull model from files that were git cloned from huggingface policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu")) cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict) # Instantiate the agent agent = SACAgent(env_id="Walker2d-v3", exp_name="Walker2d-v3-SAC", cfg=cfg.exp_config, policy_state_dict=policy_state_dict) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ### Run Model by Using Huggingface_ding <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import SACAgent from huggingface_ding import pull_model_from_hub # Pull model from Huggingface hub policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/Walker2d-v3-SAC") # Instantiate the agent agent = SACAgent(env_id="Walker2d-v3", exp_name="Walker2d-v3-SAC", cfg=cfg.exp_config, 
policy_state_dict=policy_state_dict) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ## Model Training ### Train the Model and Push to Huggingface_hub <details close> <summary>(Click for Details)</summary> ```shell #Training Your Own Agent python3 -u train.py ``` **train.py** ```python from ding.bonus import SACAgent from huggingface_ding import push_model_to_hub # Instantiate the agent agent = SACAgent(env_id="Walker2d-v3", exp_name="Walker2d-v3-SAC") # Train the agent return_ = agent.train(step=int(5000000)) # Push model to huggingface hub push_model_to_hub( agent=agent.best, env_name="OpenAI/Gym/MuJoCo", task_name="Walker2d-v3", algo_name="SAC", wandb_url=return_.wandb_url, github_repo_url="https://github.com/opendilab/DI-engine", github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html", github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html", installation_guide=''' sudo apt update -y \ && sudo apt install -y \ build-essential \ libgl1-mesa-dev \ libgl1-mesa-glx \ libglew-dev \ libosmesa6-dev \ libglfw3 \ libglfw3-dev \ libsdl2-dev \ libsdl2-image-dev \ libglm-dev \ libfreetype6-dev \ patchelf mkdir -p ~/.mujoco wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco.tar.gz tar -xf mujoco.tar.gz -C ~/.mujoco echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin" >> ~/.bashrc export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mjpro210/bin:~/.mujoco/mujoco210/bin pip3 install "cython<3" pip3 install DI-engine[common_env] ''', usage_file_by_git_clone="./sac/walker2d_sac_deploy.py", usage_file_by_huggingface_ding="./sac/walker2d_sac_download.py", train_file="./sac/walker2d_sac.py", repo_id="OpenDILabCommunity/Walker2d-v3-SAC", create_repo=False ) ``` </details> **Configuration** <details close> <summary>(Click for Details)</summary> ```python exp_config = { 'env': { 'manager': { 'episode_num': float("inf"), 'max_retry': 1, 'retry_type': 'reset', 'auto_reset': True, 'step_timeout': None, 'reset_timeout': None, 'retry_waiting_time': 0.1, 'cfg_type': 'BaseEnvManagerDict' }, 'stop_value': 6000, 'n_evaluator_episode': 8, 'env_id': 'Walker2d-v3', 'collector_env_num': 1, 'evaluator_env_num': 8, 'env_wrapper': 'mujoco_default' }, 'policy': { 'model': { 'twin_critic': True, 'action_space': 'reparameterization', 'obs_shape': 17, 'action_shape': 6, 'actor_head_hidden_size': 256, 'critic_head_hidden_size': 256 }, 'learn': { 'learner': { 'train_iterations': 1000000000, 'dataloader': { 'num_workers': 0 }, 'log_policy': True, 'hook': { 'load_ckpt_before_run': '', 'log_show_after_iter': 100, 'save_ckpt_after_iter': 10000, 'save_ckpt_after_run': True }, 'cfg_type': 'BaseLearnerDict' }, 'update_per_collect': 1, 'batch_size': 256, 'learning_rate_q': 0.001, 'learning_rate_policy': 0.001, 'learning_rate_alpha': 0.0003, 'target_theta': 0.005, 'discount_factor': 0.99, 'alpha': 0.2, 'auto_alpha': False, 'log_space': True, 'target_entropy': None, 'ignore_done': False, 'init_w': 0.003, 'reparameterization': True }, 'collect': { 'collector': {}, 'n_sample': 1, 'unroll_len': 1, 'collector_logit': False }, 'eval': { 'evaluator': { 'eval_freq': 1000, 'render': { 'render_freq': -1, 'mode': 'train_iter' }, 'figure_path': None, 'cfg_type': 'InteractionSerialEvaluatorDict', 'stop_value': 6000, 'n_episode': 8 } }, 'other': { 'replay_buffer': { 'replay_buffer_size': 1000000 } }, 'on_policy': False, 'cuda': True, 
'multi_gpu': False, 'bp_update_sync': True, 'traj_len_inf': False, 'type': 'sac', 'priority': False, 'priority_IS_weight': False, 'random_collect_size': 10000, 'transition_with_policy_data': True, 'multi_agent': False, 'cfg_type': 'SACPolicyDict', 'command': {} }, 'exp_name': 'Walker2d-v3-SAC', 'seed': 0, 'wandb_logger': { 'gradient_logger': True, 'video_logger': True, 'plot_logger': True, 'action_logger': True, 'return_logger': False } } ``` </details> **Training Procedure** <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/Walker2d-v3-SAC) ## Model Information <!-- Provide the basic links for the model. --> - **Github Repository:** [repo link](https://github.com/opendilab/DI-engine) - **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/sac.html) - **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/Walker2d-v3-SAC/blob/main/policy_config.py) - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/Walker2d-v3-SAC/blob/main/replay.mp4) <!-- Provide the size information for the model. --> - **Parameters total size:** 1702.11 KB - **Last Update Date:** 2023-09-23 ## Environments <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. --> - **Benchmark:** OpenAI/Gym/MuJoCo - **Task:** Walker2d-v3 - **Gym version:** 0.25.1 - **DI-engine version:** v0.4.9 - **PyTorch version:** 2.0.1+cu117 - **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/mujoco.html)
CyberHarem/togawa_sakiko_bangdreamitsmygo
CyberHarem
2023-09-23T01:14:37Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/togawa_sakiko_bangdreamitsmygo", "license:mit", "region:us" ]
text-to-image
2023-09-02T15:31:21Z
--- license: mit datasets: - CyberHarem/togawa_sakiko_bangdreamitsmygo pipeline_tag: text-to-image tags: - art --- # Lora of togawa_sakiko_bangdreamitsmygo This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is used as an embedding, while the safetensors file is loaded as the LoRA weights. For example, if you want to use the model from step 4760, you need to download `4760/togawa_sakiko_bangdreamitsmygo.pt` as the embedding and `4760/togawa_sakiko_bangdreamitsmygo.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character (a short loading sketch follows the list below). **The best step we recommend is 4760**, with a score of 0.990. The trigger words are: 1. `togawa_sakiko_bangdreamitsmygo` 2. `long_hair, ribbon, hair_ribbon, bangs, blush, black_ribbon, yellow_eyes, grey_hair, sidelocks, portrait, closed_mouth, smile, indoors, blurry, blue_hair` For the following groups, use of this model is not recommended, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
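For readers who use `diffusers` rather than HCP-Diffusion, the two-file loading described above can be sketched as follows. This is a hedged illustration only: it assumes the HCP-Diffusion `pt`/`safetensors` artifacts are compatible with the standard `diffusers` loaders (they may require conversion in practice); the base model and file paths are taken from this card.

```python
# Hedged sketch, not the official HCP-Diffusion workflow: assumes the pt file
# loads as a textual-inversion embedding and the safetensors file as LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The pt file is used as the embedding, bound to the trigger word...
pipe.load_textual_inversion(
    "4760/togawa_sakiko_bangdreamitsmygo.pt",
    token="togawa_sakiko_bangdreamitsmygo",
)
# ...and the safetensors file is loaded as the LoRA weights.
pipe.load_lora_weights("4760/togawa_sakiko_bangdreamitsmygo.safetensors")

image = pipe("togawa_sakiko_bangdreamitsmygo, long_hair, smile, portrait").images[0]
image.save("preview.png")
```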
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:--------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.989 | [Download](5100/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | ![pattern_5-5100](5100/previews/pattern_5.png) | ![pattern_6-5100](5100/previews/pattern_6.png) | ![pattern_7-5100](5100/previews/pattern_7.png) | ![pattern_8-5100](5100/previews/pattern_8.png) | ![pattern_9-5100](5100/previews/pattern_9.png) | ![pattern_10-5100](5100/previews/pattern_10.png) | ![pattern_11-5100](5100/previews/pattern_11.png) | ![pattern_12-5100](5100/previews/pattern_12.png) | ![pattern_13-5100](5100/previews/pattern_13.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | **4760** | **0.990** | [**Download**](4760/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![pattern_4-4760](4760/previews/pattern_4.png) | ![pattern_5-4760](4760/previews/pattern_5.png) | ![pattern_6-4760](4760/previews/pattern_6.png) | ![pattern_7-4760](4760/previews/pattern_7.png) | ![pattern_8-4760](4760/previews/pattern_8.png) | ![pattern_9-4760](4760/previews/pattern_9.png) | ![pattern_10-4760](4760/previews/pattern_10.png) | ![pattern_11-4760](4760/previews/pattern_11.png) | ![pattern_12-4760](4760/previews/pattern_12.png) | ![pattern_13-4760](4760/previews/pattern_13.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) 
| ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.984 | [Download](4420/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![pattern_4-4420](4420/previews/pattern_4.png) | ![pattern_5-4420](4420/previews/pattern_5.png) | ![pattern_6-4420](4420/previews/pattern_6.png) | ![pattern_7-4420](4420/previews/pattern_7.png) | ![pattern_8-4420](4420/previews/pattern_8.png) | ![pattern_9-4420](4420/previews/pattern_9.png) | ![pattern_10-4420](4420/previews/pattern_10.png) | ![pattern_11-4420](4420/previews/pattern_11.png) | ![pattern_12-4420](4420/previews/pattern_12.png) | ![pattern_13-4420](4420/previews/pattern_13.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.989 | [Download](4080/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![pattern_4-4080](4080/previews/pattern_4.png) | ![pattern_5-4080](4080/previews/pattern_5.png) | ![pattern_6-4080](4080/previews/pattern_6.png) | ![pattern_7-4080](4080/previews/pattern_7.png) | ![pattern_8-4080](4080/previews/pattern_8.png) | ![pattern_9-4080](4080/previews/pattern_9.png) | ![pattern_10-4080](4080/previews/pattern_10.png) | ![pattern_11-4080](4080/previews/pattern_11.png) | ![pattern_12-4080](4080/previews/pattern_12.png) | ![pattern_13-4080](4080/previews/pattern_13.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.986 | [Download](3740/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![pattern_4-3740](3740/previews/pattern_4.png) | ![pattern_5-3740](3740/previews/pattern_5.png) | ![pattern_6-3740](3740/previews/pattern_6.png) | ![pattern_7-3740](3740/previews/pattern_7.png) | ![pattern_8-3740](3740/previews/pattern_8.png) | ![pattern_9-3740](3740/previews/pattern_9.png) | ![pattern_10-3740](3740/previews/pattern_10.png) | ![pattern_11-3740](3740/previews/pattern_11.png) | ![pattern_12-3740](3740/previews/pattern_12.png) | ![pattern_13-3740](3740/previews/pattern_13.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.933 | [Download](3400/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | 
![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![pattern_4-3400](3400/previews/pattern_4.png) | ![pattern_5-3400](3400/previews/pattern_5.png) | ![pattern_6-3400](3400/previews/pattern_6.png) | ![pattern_7-3400](3400/previews/pattern_7.png) | ![pattern_8-3400](3400/previews/pattern_8.png) | ![pattern_9-3400](3400/previews/pattern_9.png) | ![pattern_10-3400](3400/previews/pattern_10.png) | ![pattern_11-3400](3400/previews/pattern_11.png) | ![pattern_12-3400](3400/previews/pattern_12.png) | ![pattern_13-3400](3400/previews/pattern_13.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.979 | [Download](3060/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | ![pattern_5-3060](3060/previews/pattern_5.png) | ![pattern_6-3060](3060/previews/pattern_6.png) | ![pattern_7-3060](3060/previews/pattern_7.png) | ![pattern_8-3060](3060/previews/pattern_8.png) | ![pattern_9-3060](3060/previews/pattern_9.png) | ![pattern_10-3060](3060/previews/pattern_10.png) | ![pattern_11-3060](3060/previews/pattern_11.png) | ![pattern_12-3060](3060/previews/pattern_12.png) | ![pattern_13-3060](3060/previews/pattern_13.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.978 | [Download](2720/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![pattern_4-2720](2720/previews/pattern_4.png) | ![pattern_5-2720](2720/previews/pattern_5.png) | ![pattern_6-2720](2720/previews/pattern_6.png) | ![pattern_7-2720](2720/previews/pattern_7.png) | ![pattern_8-2720](2720/previews/pattern_8.png) | ![pattern_9-2720](2720/previews/pattern_9.png) | ![pattern_10-2720](2720/previews/pattern_10.png) | ![pattern_11-2720](2720/previews/pattern_11.png) | ![pattern_12-2720](2720/previews/pattern_12.png) | ![pattern_13-2720](2720/previews/pattern_13.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.964 | [Download](2380/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![pattern_4-2380](2380/previews/pattern_4.png) | ![pattern_5-2380](2380/previews/pattern_5.png) | 
![pattern_6-2380](2380/previews/pattern_6.png) | ![pattern_7-2380](2380/previews/pattern_7.png) | ![pattern_8-2380](2380/previews/pattern_8.png) | ![pattern_9-2380](2380/previews/pattern_9.png) | ![pattern_10-2380](2380/previews/pattern_10.png) | ![pattern_11-2380](2380/previews/pattern_11.png) | ![pattern_12-2380](2380/previews/pattern_12.png) | ![pattern_13-2380](2380/previews/pattern_13.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.981 | [Download](2040/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | ![pattern_5-2040](2040/previews/pattern_5.png) | ![pattern_6-2040](2040/previews/pattern_6.png) | ![pattern_7-2040](2040/previews/pattern_7.png) | ![pattern_8-2040](2040/previews/pattern_8.png) | ![pattern_9-2040](2040/previews/pattern_9.png) | ![pattern_10-2040](2040/previews/pattern_10.png) | ![pattern_11-2040](2040/previews/pattern_11.png) | ![pattern_12-2040](2040/previews/pattern_12.png) | ![pattern_13-2040](2040/previews/pattern_13.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.982 | [Download](1700/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![pattern_4-1700](1700/previews/pattern_4.png) | ![pattern_5-1700](1700/previews/pattern_5.png) | ![pattern_6-1700](1700/previews/pattern_6.png) | ![pattern_7-1700](1700/previews/pattern_7.png) | ![pattern_8-1700](1700/previews/pattern_8.png) | ![pattern_9-1700](1700/previews/pattern_9.png) | ![pattern_10-1700](1700/previews/pattern_10.png) | ![pattern_11-1700](1700/previews/pattern_11.png) | ![pattern_12-1700](1700/previews/pattern_12.png) | ![pattern_13-1700](1700/previews/pattern_13.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.975 | [Download](1360/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![pattern_4-1360](1360/previews/pattern_4.png) | ![pattern_5-1360](1360/previews/pattern_5.png) | ![pattern_6-1360](1360/previews/pattern_6.png) | ![pattern_7-1360](1360/previews/pattern_7.png) | ![pattern_8-1360](1360/previews/pattern_8.png) | ![pattern_9-1360](1360/previews/pattern_9.png) | 
![pattern_10-1360](1360/previews/pattern_10.png) | ![pattern_11-1360](1360/previews/pattern_11.png) | ![pattern_12-1360](1360/previews/pattern_12.png) | ![pattern_13-1360](1360/previews/pattern_13.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.958 | [Download](1020/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | ![pattern_5-1020](1020/previews/pattern_5.png) | ![pattern_6-1020](1020/previews/pattern_6.png) | ![pattern_7-1020](1020/previews/pattern_7.png) | ![pattern_8-1020](1020/previews/pattern_8.png) | ![pattern_9-1020](1020/previews/pattern_9.png) | ![pattern_10-1020](1020/previews/pattern_10.png) | ![pattern_11-1020](1020/previews/pattern_11.png) | ![pattern_12-1020](1020/previews/pattern_12.png) | ![pattern_13-1020](1020/previews/pattern_13.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.961 | [Download](680/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![pattern_4-680](680/previews/pattern_4.png) | ![pattern_5-680](680/previews/pattern_5.png) | ![pattern_6-680](680/previews/pattern_6.png) | ![pattern_7-680](680/previews/pattern_7.png) | ![pattern_8-680](680/previews/pattern_8.png) | ![pattern_9-680](680/previews/pattern_9.png) | ![pattern_10-680](680/previews/pattern_10.png) | ![pattern_11-680](680/previews/pattern_11.png) | ![pattern_12-680](680/previews/pattern_12.png) | ![pattern_13-680](680/previews/pattern_13.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.959 | [Download](340/togawa_sakiko_bangdreamitsmygo.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![pattern_4-340](340/previews/pattern_4.png) | ![pattern_5-340](340/previews/pattern_5.png) | ![pattern_6-340](340/previews/pattern_6.png) | ![pattern_7-340](340/previews/pattern_7.png) | ![pattern_8-340](340/previews/pattern_8.png) | ![pattern_9-340](340/previews/pattern_9.png) | ![pattern_10-340](340/previews/pattern_10.png) | ![pattern_11-340](340/previews/pattern_11.png) | ![pattern_12-340](340/previews/pattern_12.png) | ![pattern_13-340](340/previews/pattern_13.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | 
![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
UrbanJoe/llama2-qlora-finetunined-french
UrbanJoe
2023-09-23T00:39:37Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-23T00:39:32Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
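As a reading aid, the quantization settings listed above map onto a `transformers.BitsAndBytesConfig` roughly as sketched below. This is a reconstruction for illustration, not the original training script; the base model is not named in this card, so loading it is omitted.

```python
# Illustrative reconstruction of the bitsandbytes settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)
```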
CyberHarem/shina_kuroko_ahogirl
CyberHarem
2023-09-23T00:18:04Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/shina_kuroko_ahogirl", "license:mit", "region:us" ]
text-to-image
2023-08-31T09:35:22Z
--- license: mit datasets: - CyberHarem/shina_kuroko_ahogirl pipeline_tag: text-to-image tags: - art --- # Lora of shina_kuroko_ahogirl This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is used as an embedding, while the safetensors file is loaded as the LoRA weights. For example, if you want to use the model from step 2600, you need to download `2600/shina_kuroko_ahogirl.pt` as the embedding and `2600/shina_kuroko_ahogirl.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character. **The best step we recommend is 2600**, with a score of 0.991. The trigger words are: 1. `shina_kuroko_ahogirl` 2. `long_hair, black_hair, brown_eyes, sweatdrop` For the following groups, use of this model is not recommended, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 3000 | 0.983 | [Download](3000/shina_kuroko_ahogirl.zip) | ![pattern_1-3000](3000/previews/pattern_1.png) | ![pattern_2-3000](3000/previews/pattern_2.png) | ![pattern_3-3000](3000/previews/pattern_3.png) | ![pattern_4-3000](3000/previews/pattern_4.png) | ![bikini-3000](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) | ![free-3000](3000/previews/free.png) | ![maid-3000](3000/previews/maid.png) | ![miko-3000](3000/previews/miko.png) | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) | ![suit-3000](3000/previews/suit.png) | ![yukata-3000](3000/previews/yukata.png) | | 2800 | 0.979 | [Download](2800/shina_kuroko_ahogirl.zip) | ![pattern_1-2800](2800/previews/pattern_1.png) | ![pattern_2-2800](2800/previews/pattern_2.png) | ![pattern_3-2800](2800/previews/pattern_3.png) | ![pattern_4-2800](2800/previews/pattern_4.png) | ![bikini-2800](2800/previews/bikini.png) | [<NSFW, click to see>](2800/previews/bondage.png) | ![free-2800](2800/previews/free.png) | ![maid-2800](2800/previews/maid.png) | ![miko-2800](2800/previews/miko.png) | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) | ![suit-2800](2800/previews/suit.png) | ![yukata-2800](2800/previews/yukata.png) | | **2600** | **0.991** | [**Download**](2600/shina_kuroko_ahogirl.zip) | ![pattern_1-2600](2600/previews/pattern_1.png) | ![pattern_2-2600](2600/previews/pattern_2.png) | ![pattern_3-2600](2600/previews/pattern_3.png) | ![pattern_4-2600](2600/previews/pattern_4.png) | ![bikini-2600](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) | ![free-2600](2600/previews/free.png) | ![maid-2600](2600/previews/maid.png) | ![miko-2600](2600/previews/miko.png) | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) | ![suit-2600](2600/previews/suit.png) | ![yukata-2600](2600/previews/yukata.png) | | 2400 | 0.981 | [Download](2400/shina_kuroko_ahogirl.zip) | ![pattern_1-2400](2400/previews/pattern_1.png) | ![pattern_2-2400](2400/previews/pattern_2.png) | ![pattern_3-2400](2400/previews/pattern_3.png) | ![pattern_4-2400](2400/previews/pattern_4.png) | ![bikini-2400](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) | ![free-2400](2400/previews/free.png) | ![maid-2400](2400/previews/maid.png) | ![miko-2400](2400/previews/miko.png) | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) | ![suit-2400](2400/previews/suit.png) | ![yukata-2400](2400/previews/yukata.png) | | 2200 | 0.986 | [Download](2200/shina_kuroko_ahogirl.zip) | 
![pattern_1-2200](2200/previews/pattern_1.png) | ![pattern_2-2200](2200/previews/pattern_2.png) | ![pattern_3-2200](2200/previews/pattern_3.png) | ![pattern_4-2200](2200/previews/pattern_4.png) | ![bikini-2200](2200/previews/bikini.png) | [<NSFW, click to see>](2200/previews/bondage.png) | ![free-2200](2200/previews/free.png) | ![maid-2200](2200/previews/maid.png) | ![miko-2200](2200/previews/miko.png) | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) | ![suit-2200](2200/previews/suit.png) | ![yukata-2200](2200/previews/yukata.png) | | 2000 | 0.977 | [Download](2000/shina_kuroko_ahogirl.zip) | ![pattern_1-2000](2000/previews/pattern_1.png) | ![pattern_2-2000](2000/previews/pattern_2.png) | ![pattern_3-2000](2000/previews/pattern_3.png) | ![pattern_4-2000](2000/previews/pattern_4.png) | ![bikini-2000](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) | ![free-2000](2000/previews/free.png) | ![maid-2000](2000/previews/maid.png) | ![miko-2000](2000/previews/miko.png) | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) | ![suit-2000](2000/previews/suit.png) | ![yukata-2000](2000/previews/yukata.png) | | 1800 | 0.984 | [Download](1800/shina_kuroko_ahogirl.zip) | ![pattern_1-1800](1800/previews/pattern_1.png) | ![pattern_2-1800](1800/previews/pattern_2.png) | ![pattern_3-1800](1800/previews/pattern_3.png) | ![pattern_4-1800](1800/previews/pattern_4.png) | ![bikini-1800](1800/previews/bikini.png) | [<NSFW, click to see>](1800/previews/bondage.png) | ![free-1800](1800/previews/free.png) | ![maid-1800](1800/previews/maid.png) | ![miko-1800](1800/previews/miko.png) | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) | ![suit-1800](1800/previews/suit.png) | ![yukata-1800](1800/previews/yukata.png) | | 1600 | 0.975 | [Download](1600/shina_kuroko_ahogirl.zip) | ![pattern_1-1600](1600/previews/pattern_1.png) | ![pattern_2-1600](1600/previews/pattern_2.png) | ![pattern_3-1600](1600/previews/pattern_3.png) | ![pattern_4-1600](1600/previews/pattern_4.png) | ![bikini-1600](1600/previews/bikini.png) | [<NSFW, click to see>](1600/previews/bondage.png) | ![free-1600](1600/previews/free.png) | ![maid-1600](1600/previews/maid.png) | ![miko-1600](1600/previews/miko.png) | [<NSFW, click to see>](1600/previews/nude.png) | [<NSFW, click to see>](1600/previews/nude2.png) | ![suit-1600](1600/previews/suit.png) | ![yukata-1600](1600/previews/yukata.png) | | 1400 | 0.979 | [Download](1400/shina_kuroko_ahogirl.zip) | ![pattern_1-1400](1400/previews/pattern_1.png) | ![pattern_2-1400](1400/previews/pattern_2.png) | ![pattern_3-1400](1400/previews/pattern_3.png) | ![pattern_4-1400](1400/previews/pattern_4.png) | ![bikini-1400](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/bondage.png) | ![free-1400](1400/previews/free.png) | ![maid-1400](1400/previews/maid.png) | ![miko-1400](1400/previews/miko.png) | [<NSFW, click to see>](1400/previews/nude.png) | [<NSFW, click to see>](1400/previews/nude2.png) | ![suit-1400](1400/previews/suit.png) | ![yukata-1400](1400/previews/yukata.png) | | 1200 | 0.984 | [Download](1200/shina_kuroko_ahogirl.zip) | ![pattern_1-1200](1200/previews/pattern_1.png) | ![pattern_2-1200](1200/previews/pattern_2.png) | ![pattern_3-1200](1200/previews/pattern_3.png) | ![pattern_4-1200](1200/previews/pattern_4.png) | ![bikini-1200](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/bondage.png) | 
![free-1200](1200/previews/free.png) | ![maid-1200](1200/previews/maid.png) | ![miko-1200](1200/previews/miko.png) | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) | ![suit-1200](1200/previews/suit.png) | ![yukata-1200](1200/previews/yukata.png) | | 1000 | 0.912 | [Download](1000/shina_kuroko_ahogirl.zip) | ![pattern_1-1000](1000/previews/pattern_1.png) | ![pattern_2-1000](1000/previews/pattern_2.png) | ![pattern_3-1000](1000/previews/pattern_3.png) | ![pattern_4-1000](1000/previews/pattern_4.png) | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) | ![free-1000](1000/previews/free.png) | ![maid-1000](1000/previews/maid.png) | ![miko-1000](1000/previews/miko.png) | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) | ![suit-1000](1000/previews/suit.png) | ![yukata-1000](1000/previews/yukata.png) | | 800 | 0.962 | [Download](800/shina_kuroko_ahogirl.zip) | ![pattern_1-800](800/previews/pattern_1.png) | ![pattern_2-800](800/previews/pattern_2.png) | ![pattern_3-800](800/previews/pattern_3.png) | ![pattern_4-800](800/previews/pattern_4.png) | ![bikini-800](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/bondage.png) | ![free-800](800/previews/free.png) | ![maid-800](800/previews/maid.png) | ![miko-800](800/previews/miko.png) | [<NSFW, click to see>](800/previews/nude.png) | [<NSFW, click to see>](800/previews/nude2.png) | ![suit-800](800/previews/suit.png) | ![yukata-800](800/previews/yukata.png) | | 600 | 0.957 | [Download](600/shina_kuroko_ahogirl.zip) | ![pattern_1-600](600/previews/pattern_1.png) | ![pattern_2-600](600/previews/pattern_2.png) | ![pattern_3-600](600/previews/pattern_3.png) | ![pattern_4-600](600/previews/pattern_4.png) | ![bikini-600](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/bondage.png) | ![free-600](600/previews/free.png) | ![maid-600](600/previews/maid.png) | ![miko-600](600/previews/miko.png) | [<NSFW, click to see>](600/previews/nude.png) | [<NSFW, click to see>](600/previews/nude2.png) | ![suit-600](600/previews/suit.png) | ![yukata-600](600/previews/yukata.png) | | 400 | 0.367 | [Download](400/shina_kuroko_ahogirl.zip) | ![pattern_1-400](400/previews/pattern_1.png) | ![pattern_2-400](400/previews/pattern_2.png) | ![pattern_3-400](400/previews/pattern_3.png) | ![pattern_4-400](400/previews/pattern_4.png) | ![bikini-400](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/bondage.png) | ![free-400](400/previews/free.png) | ![maid-400](400/previews/maid.png) | ![miko-400](400/previews/miko.png) | [<NSFW, click to see>](400/previews/nude.png) | [<NSFW, click to see>](400/previews/nude2.png) | ![suit-400](400/previews/suit.png) | ![yukata-400](400/previews/yukata.png) | | 200 | 0.161 | [Download](200/shina_kuroko_ahogirl.zip) | ![pattern_1-200](200/previews/pattern_1.png) | ![pattern_2-200](200/previews/pattern_2.png) | ![pattern_3-200](200/previews/pattern_3.png) | ![pattern_4-200](200/previews/pattern_4.png) | ![bikini-200](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/bondage.png) | ![free-200](200/previews/free.png) | ![maid-200](200/previews/maid.png) | ![miko-200](200/previews/miko.png) | [<NSFW, click to see>](200/previews/nude.png) | [<NSFW, click to see>](200/previews/nude2.png) | ![suit-200](200/previews/suit.png) | ![yukata-200](200/previews/yukata.png) |
CyberHarem/hiragi_kii_ahogirl
CyberHarem
2023-09-23T00:14:10Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/hiragi_kii_ahogirl", "license:mit", "region:us" ]
text-to-image
2023-08-31T08:58:56Z
--- license: mit datasets: - CyberHarem/hiragi_kii_ahogirl pipeline_tag: text-to-image tags: - art --- # Lora of hiragi_kii_ahogirl This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is used as an embedding, while the safetensors file is loaded as the LoRA weights. For example, if you want to use the model from step 2400, you need to download `2400/hiragi_kii_ahogirl.pt` as the embedding and `2400/hiragi_kii_ahogirl.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character. **The best step we recommend is 2400**, with a score of 0.978. The trigger words are: 1. `hiragi_kii_ahogirl` 2. `blonde_hair, short_hair, green_eyes, multiple_girls, 2girls, open_mouth, sweatdrop` For the following groups, use of this model is not recommended, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:--------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 3000 | 0.972 | [Download](3000/hiragi_kii_ahogirl.zip) | ![pattern_1-3000](3000/previews/pattern_1.png) | ![pattern_2-3000](3000/previews/pattern_2.png) | ![bikini-3000](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) | ![free-3000](3000/previews/free.png) | ![maid-3000](3000/previews/maid.png) | ![miko-3000](3000/previews/miko.png) | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) | ![suit-3000](3000/previews/suit.png) | ![yukata-3000](3000/previews/yukata.png) | | 2800 | 0.894 | [Download](2800/hiragi_kii_ahogirl.zip) | ![pattern_1-2800](2800/previews/pattern_1.png) | ![pattern_2-2800](2800/previews/pattern_2.png) | ![bikini-2800](2800/previews/bikini.png) | [<NSFW, click to see>](2800/previews/bondage.png) | ![free-2800](2800/previews/free.png) | ![maid-2800](2800/previews/maid.png) | ![miko-2800](2800/previews/miko.png) | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) | ![suit-2800](2800/previews/suit.png) | ![yukata-2800](2800/previews/yukata.png) | | 2600 | 0.885 | [Download](2600/hiragi_kii_ahogirl.zip) | ![pattern_1-2600](2600/previews/pattern_1.png) | ![pattern_2-2600](2600/previews/pattern_2.png) | ![bikini-2600](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) | ![free-2600](2600/previews/free.png) | ![maid-2600](2600/previews/maid.png) | ![miko-2600](2600/previews/miko.png) | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) | ![suit-2600](2600/previews/suit.png) | ![yukata-2600](2600/previews/yukata.png) | | **2400** | **0.978** | [**Download**](2400/hiragi_kii_ahogirl.zip) | ![pattern_1-2400](2400/previews/pattern_1.png) | ![pattern_2-2400](2400/previews/pattern_2.png) | ![bikini-2400](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) | ![free-2400](2400/previews/free.png) | ![maid-2400](2400/previews/maid.png) | ![miko-2400](2400/previews/miko.png) | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) | ![suit-2400](2400/previews/suit.png) | ![yukata-2400](2400/previews/yukata.png) | | 2200 | 0.975 | [Download](2200/hiragi_kii_ahogirl.zip) | ![pattern_1-2200](2200/previews/pattern_1.png) | ![pattern_2-2200](2200/previews/pattern_2.png) | ![bikini-2200](2200/previews/bikini.png) | [<NSFW, click to see>](2200/previews/bondage.png) | ![free-2200](2200/previews/free.png) | ![maid-2200](2200/previews/maid.png) | ![miko-2200](2200/previews/miko.png) | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) | ![suit-2200](2200/previews/suit.png) | ![yukata-2200](2200/previews/yukata.png) | | 2000 | 0.847 | [Download](2000/hiragi_kii_ahogirl.zip) | 
![pattern_1-2000](2000/previews/pattern_1.png) | ![pattern_2-2000](2000/previews/pattern_2.png) | ![bikini-2000](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) | ![free-2000](2000/previews/free.png) | ![maid-2000](2000/previews/maid.png) | ![miko-2000](2000/previews/miko.png) | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) | ![suit-2000](2000/previews/suit.png) | ![yukata-2000](2000/previews/yukata.png) | | 1800 | 0.951 | [Download](1800/hiragi_kii_ahogirl.zip) | ![pattern_1-1800](1800/previews/pattern_1.png) | ![pattern_2-1800](1800/previews/pattern_2.png) | ![bikini-1800](1800/previews/bikini.png) | [<NSFW, click to see>](1800/previews/bondage.png) | ![free-1800](1800/previews/free.png) | ![maid-1800](1800/previews/maid.png) | ![miko-1800](1800/previews/miko.png) | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) | ![suit-1800](1800/previews/suit.png) | ![yukata-1800](1800/previews/yukata.png) | | 1600 | 0.908 | [Download](1600/hiragi_kii_ahogirl.zip) | ![pattern_1-1600](1600/previews/pattern_1.png) | ![pattern_2-1600](1600/previews/pattern_2.png) | ![bikini-1600](1600/previews/bikini.png) | [<NSFW, click to see>](1600/previews/bondage.png) | ![free-1600](1600/previews/free.png) | ![maid-1600](1600/previews/maid.png) | ![miko-1600](1600/previews/miko.png) | [<NSFW, click to see>](1600/previews/nude.png) | [<NSFW, click to see>](1600/previews/nude2.png) | ![suit-1600](1600/previews/suit.png) | ![yukata-1600](1600/previews/yukata.png) | | 1400 | 0.918 | [Download](1400/hiragi_kii_ahogirl.zip) | ![pattern_1-1400](1400/previews/pattern_1.png) | ![pattern_2-1400](1400/previews/pattern_2.png) | ![bikini-1400](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/bondage.png) | ![free-1400](1400/previews/free.png) | ![maid-1400](1400/previews/maid.png) | ![miko-1400](1400/previews/miko.png) | [<NSFW, click to see>](1400/previews/nude.png) | [<NSFW, click to see>](1400/previews/nude2.png) | ![suit-1400](1400/previews/suit.png) | ![yukata-1400](1400/previews/yukata.png) | | 1200 | 0.889 | [Download](1200/hiragi_kii_ahogirl.zip) | ![pattern_1-1200](1200/previews/pattern_1.png) | ![pattern_2-1200](1200/previews/pattern_2.png) | ![bikini-1200](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/bondage.png) | ![free-1200](1200/previews/free.png) | ![maid-1200](1200/previews/maid.png) | ![miko-1200](1200/previews/miko.png) | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) | ![suit-1200](1200/previews/suit.png) | ![yukata-1200](1200/previews/yukata.png) | | 1000 | 0.846 | [Download](1000/hiragi_kii_ahogirl.zip) | ![pattern_1-1000](1000/previews/pattern_1.png) | ![pattern_2-1000](1000/previews/pattern_2.png) | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) | ![free-1000](1000/previews/free.png) | ![maid-1000](1000/previews/maid.png) | ![miko-1000](1000/previews/miko.png) | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) | ![suit-1000](1000/previews/suit.png) | ![yukata-1000](1000/previews/yukata.png) | | 800 | 0.764 | [Download](800/hiragi_kii_ahogirl.zip) | ![pattern_1-800](800/previews/pattern_1.png) | ![pattern_2-800](800/previews/pattern_2.png) | ![bikini-800](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/bondage.png) | ![free-800](800/previews/free.png) | 
![maid-800](800/previews/maid.png) | ![miko-800](800/previews/miko.png) | [<NSFW, click to see>](800/previews/nude.png) | [<NSFW, click to see>](800/previews/nude2.png) | ![suit-800](800/previews/suit.png) | ![yukata-800](800/previews/yukata.png) | | 600 | 0.772 | [Download](600/hiragi_kii_ahogirl.zip) | ![pattern_1-600](600/previews/pattern_1.png) | ![pattern_2-600](600/previews/pattern_2.png) | ![bikini-600](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/bondage.png) | ![free-600](600/previews/free.png) | ![maid-600](600/previews/maid.png) | ![miko-600](600/previews/miko.png) | [<NSFW, click to see>](600/previews/nude.png) | [<NSFW, click to see>](600/previews/nude2.png) | ![suit-600](600/previews/suit.png) | ![yukata-600](600/previews/yukata.png) | | 400 | 0.744 | [Download](400/hiragi_kii_ahogirl.zip) | ![pattern_1-400](400/previews/pattern_1.png) | ![pattern_2-400](400/previews/pattern_2.png) | ![bikini-400](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/bondage.png) | ![free-400](400/previews/free.png) | ![maid-400](400/previews/maid.png) | ![miko-400](400/previews/miko.png) | [<NSFW, click to see>](400/previews/nude.png) | [<NSFW, click to see>](400/previews/nude2.png) | ![suit-400](400/previews/suit.png) | ![yukata-400](400/previews/yukata.png) | | 200 | 0.189 | [Download](200/hiragi_kii_ahogirl.zip) | ![pattern_1-200](200/previews/pattern_1.png) | ![pattern_2-200](200/previews/pattern_2.png) | ![bikini-200](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/bondage.png) | ![free-200](200/previews/free.png) | ![maid-200](200/previews/maid.png) | ![miko-200](200/previews/miko.png) | [<NSFW, click to see>](200/previews/nude.png) | [<NSFW, click to see>](200/previews/nude2.png) | ![suit-200](200/previews/suit.png) | ![yukata-200](200/previews/yukata.png) |
umm-maybe/Skip-NoClip-StarCoder-1B
umm-maybe
2023-09-23T00:05:44Z
127
0
transformers
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-22T14:08:53Z
--- license: bigcode-openrail-m ---
simlamkr1/Llama2-simgpt2test
simlamkr1
2023-09-23T00:01:15Z
49
0
transformers
[ "transformers", "pytorch", "tensorboard", "mixformer-sequential", "text-generation", "generated_from_trainer", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2023-09-07T14:47:56Z
--- tags: - generated_from_trainer model-index: - name: Llama2-simgpt2test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama2-simgpt2test This model is a fine-tuned version of [simlamkr1/llama2_finetuned_chatbot](https://huggingface.co/simlamkr1/llama2_finetuned_chatbot) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
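For readers reconstructing this setup, the hyperparameters listed above correspond roughly to the following `transformers.TrainingArguments`; the output path is a placeholder, and the data and model preparation are not part of this card.

```python
# Illustrative mapping of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Llama2-simgpt2test",    # placeholder output directory
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # gives a total train batch size of 32
    lr_scheduler_type="linear",
    max_steps=10,                       # training_steps: 10
    seed=42,
)
```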
Undi95/MXLewdMini-L2-13B-GGUF
Undi95
2023-09-22T23:56:31Z
100
4
null
[ "gguf", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2023-09-22T23:30:21Z
--- license: cc-by-nc-4.0 --- Merge: ```shell [Xwin (0.66) + ReMM (0.33)] x [Xwin (0.33) + MLewd (0.66)] ``` The goal was to recreate https://huggingface.co/Undi95/MXLewd-L2-20B-GGUF in 13B without using merge interlacing (the result will probably be slightly worse). <!-- description start --> ## Models used - Undi95/MLewd-L2-13B-v2-3 - Undi95/ReMM-v2.1-L2-13B - Xwin-LM/Xwin-LM-13B-V0.1 <!-- description end --> One part is ReMM (0.33) and Xwin (0.66); the other part is Xwin (0.33) and MLewd (0.66). A hedged sketch of this kind of weighted averaging is given at the end of this card. ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ```
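The recipe above is weighted parameter averaging between checkpoints. A minimal sketch follows, with several stated assumptions: the file names are placeholders, all models are assumed to share the Llama-2-13B architecture, and how the two intermediate parts are combined into the final model is not specified on this card, so that step is left out.

```python
# Hypothetical sketch of weighted state-dict averaging; not the exact tool
# or file names used to produce this model.
import torch

def blend(path_a: str, path_b: str, w_a: float, w_b: float) -> dict:
    """Per-tensor weighted average of two checkpoints with identical keys/shapes."""
    sd_a = torch.load(path_a, map_location="cpu")
    sd_b = torch.load(path_b, map_location="cpu")
    return {k: w_a * sd_a[k] + w_b * sd_b[k] for k in sd_a}

# Part 1: Xwin (0.66) + ReMM (0.33)
part1 = blend("xwin-13b.bin", "remm-13b.bin", 0.66, 0.33)
# Part 2: Xwin (0.33) + MLewd (0.66)
part2 = blend("xwin-13b.bin", "mlewd-13b.bin", 0.33, 0.66)
torch.save(part1, "part1.bin")
torch.save(part2, "part2.bin")
```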
trieudemo11/llama_7b_attrb_cate_4m_10
trieudemo11
2023-09-22T23:49:23Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-22T23:49:06Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
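This card documents only the PEFT version, so as a hedged illustration of how such an adapter is typically loaded: the base checkpoint below is an assumption inferred from the adapter's name, not stated anywhere in this card, and must be replaced with whatever model the adapter was actually trained on.

```python
# Hedged sketch: the base model name is a guess from "llama_7b_..." in the
# repo id; it is not stated anywhere in this card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "trieudemo11/llama_7b_attrb_cate_4m_10")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```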
CyberHarem/otosuna_mihari_mangakasantoassistantsanto
CyberHarem
2023-09-22T23:31:10Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/otosuna_mihari_mangakasantoassistantsanto", "license:mit", "region:us" ]
text-to-image
2023-08-30T21:25:22Z
--- license: mit datasets: - CyberHarem/otosuna_mihari_mangakasantoassistantsanto pipeline_tag: text-to-image tags: - art --- # Lora of otosuna_mihari_mangakasantoassistantsanto This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is used as an embedding, while the safetensors file is loaded as the LoRA weights. For example, if you want to use the model from step 5060, you need to download `5060/otosuna_mihari_mangakasantoassistantsanto.pt` as the embedding and `5060/otosuna_mihari_mangakasantoassistantsanto.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character. **The best step we recommend is 5060**, with a score of 0.803. The trigger words are: 1. `otosuna_mihari_mangakasantoassistantsanto` 2. `blonde_hair, short_hair, open_mouth, brown_eyes, red_eyes` For the following groups, use of this model is not recommended, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 6900 | 0.748 | [Download](6900/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-6900](6900/previews/pattern_1.png) | ![pattern_2-6900](6900/previews/pattern_2.png) | ![pattern_3-6900](6900/previews/pattern_3.png) | ![pattern_4-6900](6900/previews/pattern_4.png) | ![pattern_5-6900](6900/previews/pattern_5.png) | ![pattern_6-6900](6900/previews/pattern_6.png) | ![pattern_7-6900](6900/previews/pattern_7.png) | ![pattern_8-6900](6900/previews/pattern_8.png) | ![pattern_9-6900](6900/previews/pattern_9.png) | ![pattern_10-6900](6900/previews/pattern_10.png) | ![pattern_11-6900](6900/previews/pattern_11.png) | ![pattern_12-6900](6900/previews/pattern_12.png) | [<NSFW, click to see>](6900/previews/pattern_13.png) | [<NSFW, click to see>](6900/previews/bikini.png) | [<NSFW, click to see>](6900/previews/bondage.png) | ![free-6900](6900/previews/free.png) | ![maid-6900](6900/previews/maid.png) | ![miko-6900](6900/previews/miko.png) | [<NSFW, click to see>](6900/previews/nude.png) | [<NSFW, click to see>](6900/previews/nude2.png) | ![suit-6900](6900/previews/suit.png) | ![yukata-6900](6900/previews/yukata.png) | | 6440 | 0.714 | [Download](6440/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-6440](6440/previews/pattern_1.png) | ![pattern_2-6440](6440/previews/pattern_2.png) | ![pattern_3-6440](6440/previews/pattern_3.png) | ![pattern_4-6440](6440/previews/pattern_4.png) | ![pattern_5-6440](6440/previews/pattern_5.png) | ![pattern_6-6440](6440/previews/pattern_6.png) | ![pattern_7-6440](6440/previews/pattern_7.png) | ![pattern_8-6440](6440/previews/pattern_8.png) | ![pattern_9-6440](6440/previews/pattern_9.png) | ![pattern_10-6440](6440/previews/pattern_10.png) | ![pattern_11-6440](6440/previews/pattern_11.png) | ![pattern_12-6440](6440/previews/pattern_12.png) | [<NSFW, click to see>](6440/previews/pattern_13.png) | [<NSFW, click to see>](6440/previews/bikini.png) | [<NSFW, click to see>](6440/previews/bondage.png) | ![free-6440](6440/previews/free.png) | ![maid-6440](6440/previews/maid.png) | ![miko-6440](6440/previews/miko.png) | [<NSFW, click to 
see>](6440/previews/nude.png) | [<NSFW, click to see>](6440/previews/nude2.png) | ![suit-6440](6440/previews/suit.png) | ![yukata-6440](6440/previews/yukata.png) | | 5980 | 0.718 | [Download](5980/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-5980](5980/previews/pattern_1.png) | ![pattern_2-5980](5980/previews/pattern_2.png) | ![pattern_3-5980](5980/previews/pattern_3.png) | ![pattern_4-5980](5980/previews/pattern_4.png) | ![pattern_5-5980](5980/previews/pattern_5.png) | ![pattern_6-5980](5980/previews/pattern_6.png) | ![pattern_7-5980](5980/previews/pattern_7.png) | ![pattern_8-5980](5980/previews/pattern_8.png) | ![pattern_9-5980](5980/previews/pattern_9.png) | ![pattern_10-5980](5980/previews/pattern_10.png) | ![pattern_11-5980](5980/previews/pattern_11.png) | ![pattern_12-5980](5980/previews/pattern_12.png) | [<NSFW, click to see>](5980/previews/pattern_13.png) | [<NSFW, click to see>](5980/previews/bikini.png) | [<NSFW, click to see>](5980/previews/bondage.png) | ![free-5980](5980/previews/free.png) | ![maid-5980](5980/previews/maid.png) | ![miko-5980](5980/previews/miko.png) | [<NSFW, click to see>](5980/previews/nude.png) | [<NSFW, click to see>](5980/previews/nude2.png) | ![suit-5980](5980/previews/suit.png) | ![yukata-5980](5980/previews/yukata.png) | | 5520 | 0.749 | [Download](5520/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-5520](5520/previews/pattern_1.png) | ![pattern_2-5520](5520/previews/pattern_2.png) | ![pattern_3-5520](5520/previews/pattern_3.png) | ![pattern_4-5520](5520/previews/pattern_4.png) | ![pattern_5-5520](5520/previews/pattern_5.png) | ![pattern_6-5520](5520/previews/pattern_6.png) | ![pattern_7-5520](5520/previews/pattern_7.png) | ![pattern_8-5520](5520/previews/pattern_8.png) | ![pattern_9-5520](5520/previews/pattern_9.png) | ![pattern_10-5520](5520/previews/pattern_10.png) | ![pattern_11-5520](5520/previews/pattern_11.png) | ![pattern_12-5520](5520/previews/pattern_12.png) | [<NSFW, click to see>](5520/previews/pattern_13.png) | [<NSFW, click to see>](5520/previews/bikini.png) | [<NSFW, click to see>](5520/previews/bondage.png) | ![free-5520](5520/previews/free.png) | ![maid-5520](5520/previews/maid.png) | ![miko-5520](5520/previews/miko.png) | [<NSFW, click to see>](5520/previews/nude.png) | [<NSFW, click to see>](5520/previews/nude2.png) | ![suit-5520](5520/previews/suit.png) | ![yukata-5520](5520/previews/yukata.png) | | **5060** | **0.803** | [**Download**](5060/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-5060](5060/previews/pattern_1.png) | ![pattern_2-5060](5060/previews/pattern_2.png) | ![pattern_3-5060](5060/previews/pattern_3.png) | ![pattern_4-5060](5060/previews/pattern_4.png) | ![pattern_5-5060](5060/previews/pattern_5.png) | ![pattern_6-5060](5060/previews/pattern_6.png) | ![pattern_7-5060](5060/previews/pattern_7.png) | ![pattern_8-5060](5060/previews/pattern_8.png) | ![pattern_9-5060](5060/previews/pattern_9.png) | ![pattern_10-5060](5060/previews/pattern_10.png) | ![pattern_11-5060](5060/previews/pattern_11.png) | ![pattern_12-5060](5060/previews/pattern_12.png) | [<NSFW, click to see>](5060/previews/pattern_13.png) | [<NSFW, click to see>](5060/previews/bikini.png) | [<NSFW, click to see>](5060/previews/bondage.png) | ![free-5060](5060/previews/free.png) | ![maid-5060](5060/previews/maid.png) | ![miko-5060](5060/previews/miko.png) | [<NSFW, click to see>](5060/previews/nude.png) | [<NSFW, click to see>](5060/previews/nude2.png) | ![suit-5060](5060/previews/suit.png) | 
![yukata-5060](5060/previews/yukata.png) | | 4600 | 0.790 | [Download](4600/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-4600](4600/previews/pattern_1.png) | ![pattern_2-4600](4600/previews/pattern_2.png) | ![pattern_3-4600](4600/previews/pattern_3.png) | ![pattern_4-4600](4600/previews/pattern_4.png) | ![pattern_5-4600](4600/previews/pattern_5.png) | ![pattern_6-4600](4600/previews/pattern_6.png) | ![pattern_7-4600](4600/previews/pattern_7.png) | ![pattern_8-4600](4600/previews/pattern_8.png) | ![pattern_9-4600](4600/previews/pattern_9.png) | ![pattern_10-4600](4600/previews/pattern_10.png) | ![pattern_11-4600](4600/previews/pattern_11.png) | ![pattern_12-4600](4600/previews/pattern_12.png) | [<NSFW, click to see>](4600/previews/pattern_13.png) | [<NSFW, click to see>](4600/previews/bikini.png) | [<NSFW, click to see>](4600/previews/bondage.png) | ![free-4600](4600/previews/free.png) | ![maid-4600](4600/previews/maid.png) | ![miko-4600](4600/previews/miko.png) | [<NSFW, click to see>](4600/previews/nude.png) | [<NSFW, click to see>](4600/previews/nude2.png) | ![suit-4600](4600/previews/suit.png) | ![yukata-4600](4600/previews/yukata.png) | | 4140 | 0.724 | [Download](4140/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-4140](4140/previews/pattern_1.png) | ![pattern_2-4140](4140/previews/pattern_2.png) | ![pattern_3-4140](4140/previews/pattern_3.png) | ![pattern_4-4140](4140/previews/pattern_4.png) | ![pattern_5-4140](4140/previews/pattern_5.png) | ![pattern_6-4140](4140/previews/pattern_6.png) | ![pattern_7-4140](4140/previews/pattern_7.png) | ![pattern_8-4140](4140/previews/pattern_8.png) | ![pattern_9-4140](4140/previews/pattern_9.png) | ![pattern_10-4140](4140/previews/pattern_10.png) | ![pattern_11-4140](4140/previews/pattern_11.png) | ![pattern_12-4140](4140/previews/pattern_12.png) | [<NSFW, click to see>](4140/previews/pattern_13.png) | [<NSFW, click to see>](4140/previews/bikini.png) | [<NSFW, click to see>](4140/previews/bondage.png) | ![free-4140](4140/previews/free.png) | ![maid-4140](4140/previews/maid.png) | ![miko-4140](4140/previews/miko.png) | [<NSFW, click to see>](4140/previews/nude.png) | [<NSFW, click to see>](4140/previews/nude2.png) | ![suit-4140](4140/previews/suit.png) | ![yukata-4140](4140/previews/yukata.png) | | 3680 | 0.735 | [Download](3680/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-3680](3680/previews/pattern_1.png) | ![pattern_2-3680](3680/previews/pattern_2.png) | ![pattern_3-3680](3680/previews/pattern_3.png) | ![pattern_4-3680](3680/previews/pattern_4.png) | ![pattern_5-3680](3680/previews/pattern_5.png) | ![pattern_6-3680](3680/previews/pattern_6.png) | ![pattern_7-3680](3680/previews/pattern_7.png) | ![pattern_8-3680](3680/previews/pattern_8.png) | ![pattern_9-3680](3680/previews/pattern_9.png) | ![pattern_10-3680](3680/previews/pattern_10.png) | ![pattern_11-3680](3680/previews/pattern_11.png) | ![pattern_12-3680](3680/previews/pattern_12.png) | [<NSFW, click to see>](3680/previews/pattern_13.png) | [<NSFW, click to see>](3680/previews/bikini.png) | [<NSFW, click to see>](3680/previews/bondage.png) | ![free-3680](3680/previews/free.png) | ![maid-3680](3680/previews/maid.png) | ![miko-3680](3680/previews/miko.png) | [<NSFW, click to see>](3680/previews/nude.png) | [<NSFW, click to see>](3680/previews/nude2.png) | ![suit-3680](3680/previews/suit.png) | ![yukata-3680](3680/previews/yukata.png) | | 3220 | 0.697 | [Download](3220/otosuna_mihari_mangakasantoassistantsanto.zip) | 
![pattern_1-3220](3220/previews/pattern_1.png) | ![pattern_2-3220](3220/previews/pattern_2.png) | ![pattern_3-3220](3220/previews/pattern_3.png) | ![pattern_4-3220](3220/previews/pattern_4.png) | ![pattern_5-3220](3220/previews/pattern_5.png) | ![pattern_6-3220](3220/previews/pattern_6.png) | ![pattern_7-3220](3220/previews/pattern_7.png) | ![pattern_8-3220](3220/previews/pattern_8.png) | ![pattern_9-3220](3220/previews/pattern_9.png) | ![pattern_10-3220](3220/previews/pattern_10.png) | ![pattern_11-3220](3220/previews/pattern_11.png) | ![pattern_12-3220](3220/previews/pattern_12.png) | [<NSFW, click to see>](3220/previews/pattern_13.png) | [<NSFW, click to see>](3220/previews/bikini.png) | [<NSFW, click to see>](3220/previews/bondage.png) | ![free-3220](3220/previews/free.png) | ![maid-3220](3220/previews/maid.png) | ![miko-3220](3220/previews/miko.png) | [<NSFW, click to see>](3220/previews/nude.png) | [<NSFW, click to see>](3220/previews/nude2.png) | ![suit-3220](3220/previews/suit.png) | ![yukata-3220](3220/previews/yukata.png) | | 2760 | 0.673 | [Download](2760/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-2760](2760/previews/pattern_1.png) | ![pattern_2-2760](2760/previews/pattern_2.png) | ![pattern_3-2760](2760/previews/pattern_3.png) | ![pattern_4-2760](2760/previews/pattern_4.png) | ![pattern_5-2760](2760/previews/pattern_5.png) | ![pattern_6-2760](2760/previews/pattern_6.png) | ![pattern_7-2760](2760/previews/pattern_7.png) | ![pattern_8-2760](2760/previews/pattern_8.png) | ![pattern_9-2760](2760/previews/pattern_9.png) | ![pattern_10-2760](2760/previews/pattern_10.png) | ![pattern_11-2760](2760/previews/pattern_11.png) | ![pattern_12-2760](2760/previews/pattern_12.png) | [<NSFW, click to see>](2760/previews/pattern_13.png) | [<NSFW, click to see>](2760/previews/bikini.png) | [<NSFW, click to see>](2760/previews/bondage.png) | ![free-2760](2760/previews/free.png) | ![maid-2760](2760/previews/maid.png) | ![miko-2760](2760/previews/miko.png) | [<NSFW, click to see>](2760/previews/nude.png) | [<NSFW, click to see>](2760/previews/nude2.png) | ![suit-2760](2760/previews/suit.png) | ![yukata-2760](2760/previews/yukata.png) | | 2300 | 0.556 | [Download](2300/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-2300](2300/previews/pattern_1.png) | ![pattern_2-2300](2300/previews/pattern_2.png) | ![pattern_3-2300](2300/previews/pattern_3.png) | ![pattern_4-2300](2300/previews/pattern_4.png) | ![pattern_5-2300](2300/previews/pattern_5.png) | ![pattern_6-2300](2300/previews/pattern_6.png) | ![pattern_7-2300](2300/previews/pattern_7.png) | ![pattern_8-2300](2300/previews/pattern_8.png) | ![pattern_9-2300](2300/previews/pattern_9.png) | ![pattern_10-2300](2300/previews/pattern_10.png) | ![pattern_11-2300](2300/previews/pattern_11.png) | ![pattern_12-2300](2300/previews/pattern_12.png) | [<NSFW, click to see>](2300/previews/pattern_13.png) | [<NSFW, click to see>](2300/previews/bikini.png) | [<NSFW, click to see>](2300/previews/bondage.png) | ![free-2300](2300/previews/free.png) | ![maid-2300](2300/previews/maid.png) | ![miko-2300](2300/previews/miko.png) | [<NSFW, click to see>](2300/previews/nude.png) | [<NSFW, click to see>](2300/previews/nude2.png) | ![suit-2300](2300/previews/suit.png) | ![yukata-2300](2300/previews/yukata.png) | | 1840 | 0.493 | [Download](1840/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-1840](1840/previews/pattern_1.png) | ![pattern_2-1840](1840/previews/pattern_2.png) | ![pattern_3-1840](1840/previews/pattern_3.png) | 
![pattern_4-1840](1840/previews/pattern_4.png) | ![pattern_5-1840](1840/previews/pattern_5.png) | ![pattern_6-1840](1840/previews/pattern_6.png) | ![pattern_7-1840](1840/previews/pattern_7.png) | ![pattern_8-1840](1840/previews/pattern_8.png) | ![pattern_9-1840](1840/previews/pattern_9.png) | ![pattern_10-1840](1840/previews/pattern_10.png) | ![pattern_11-1840](1840/previews/pattern_11.png) | ![pattern_12-1840](1840/previews/pattern_12.png) | [<NSFW, click to see>](1840/previews/pattern_13.png) | [<NSFW, click to see>](1840/previews/bikini.png) | [<NSFW, click to see>](1840/previews/bondage.png) | ![free-1840](1840/previews/free.png) | ![maid-1840](1840/previews/maid.png) | ![miko-1840](1840/previews/miko.png) | [<NSFW, click to see>](1840/previews/nude.png) | [<NSFW, click to see>](1840/previews/nude2.png) | ![suit-1840](1840/previews/suit.png) | ![yukata-1840](1840/previews/yukata.png) | | 1380 | 0.475 | [Download](1380/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-1380](1380/previews/pattern_1.png) | ![pattern_2-1380](1380/previews/pattern_2.png) | ![pattern_3-1380](1380/previews/pattern_3.png) | ![pattern_4-1380](1380/previews/pattern_4.png) | ![pattern_5-1380](1380/previews/pattern_5.png) | ![pattern_6-1380](1380/previews/pattern_6.png) | ![pattern_7-1380](1380/previews/pattern_7.png) | ![pattern_8-1380](1380/previews/pattern_8.png) | ![pattern_9-1380](1380/previews/pattern_9.png) | ![pattern_10-1380](1380/previews/pattern_10.png) | ![pattern_11-1380](1380/previews/pattern_11.png) | ![pattern_12-1380](1380/previews/pattern_12.png) | [<NSFW, click to see>](1380/previews/pattern_13.png) | [<NSFW, click to see>](1380/previews/bikini.png) | [<NSFW, click to see>](1380/previews/bondage.png) | ![free-1380](1380/previews/free.png) | ![maid-1380](1380/previews/maid.png) | ![miko-1380](1380/previews/miko.png) | [<NSFW, click to see>](1380/previews/nude.png) | [<NSFW, click to see>](1380/previews/nude2.png) | ![suit-1380](1380/previews/suit.png) | ![yukata-1380](1380/previews/yukata.png) | | 920 | 0.498 | [Download](920/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-920](920/previews/pattern_1.png) | ![pattern_2-920](920/previews/pattern_2.png) | ![pattern_3-920](920/previews/pattern_3.png) | ![pattern_4-920](920/previews/pattern_4.png) | ![pattern_5-920](920/previews/pattern_5.png) | ![pattern_6-920](920/previews/pattern_6.png) | ![pattern_7-920](920/previews/pattern_7.png) | ![pattern_8-920](920/previews/pattern_8.png) | ![pattern_9-920](920/previews/pattern_9.png) | ![pattern_10-920](920/previews/pattern_10.png) | ![pattern_11-920](920/previews/pattern_11.png) | ![pattern_12-920](920/previews/pattern_12.png) | [<NSFW, click to see>](920/previews/pattern_13.png) | [<NSFW, click to see>](920/previews/bikini.png) | [<NSFW, click to see>](920/previews/bondage.png) | ![free-920](920/previews/free.png) | ![maid-920](920/previews/maid.png) | ![miko-920](920/previews/miko.png) | [<NSFW, click to see>](920/previews/nude.png) | [<NSFW, click to see>](920/previews/nude2.png) | ![suit-920](920/previews/suit.png) | ![yukata-920](920/previews/yukata.png) | | 460 | 0.241 | [Download](460/otosuna_mihari_mangakasantoassistantsanto.zip) | ![pattern_1-460](460/previews/pattern_1.png) | ![pattern_2-460](460/previews/pattern_2.png) | ![pattern_3-460](460/previews/pattern_3.png) | ![pattern_4-460](460/previews/pattern_4.png) | ![pattern_5-460](460/previews/pattern_5.png) | ![pattern_6-460](460/previews/pattern_6.png) | ![pattern_7-460](460/previews/pattern_7.png) | 
![pattern_8-460](460/previews/pattern_8.png) | ![pattern_9-460](460/previews/pattern_9.png) | ![pattern_10-460](460/previews/pattern_10.png) | ![pattern_11-460](460/previews/pattern_11.png) | ![pattern_12-460](460/previews/pattern_12.png) | [<NSFW, click to see>](460/previews/pattern_13.png) | [<NSFW, click to see>](460/previews/bikini.png) | [<NSFW, click to see>](460/previews/bondage.png) | ![free-460](460/previews/free.png) | ![maid-460](460/previews/maid.png) | ![miko-460](460/previews/miko.png) | [<NSFW, click to see>](460/previews/nude.png) | [<NSFW, click to see>](460/previews/nude2.png) | ![suit-460](460/previews/suit.png) | ![yukata-460](460/previews/yukata.png) |
MarcosMunoz95/LunarLander-v2
MarcosMunoz95
2023-09-22T23:27:51Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-09-22T23:25:29Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -201.31 +/- 134.63
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters
CyberHarem/ashisu_sahoto_mangakasantoassistantsanto
CyberHarem
2023-09-22T23:24:41Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/ashisu_sahoto_mangakasantoassistantsanto", "license:mit", "region:us" ]
text-to-image
2023-08-30T20:35:17Z
---
license: mit
datasets:
- CyberHarem/ashisu_sahoto_mangakasantoassistantsanto
pipeline_tag: text-to-image
tags:
- art
---

# Lora of ashisu_sahoto_mangakasantoassistantsanto

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file is used as an embedding, while the safetensors file is loaded as a Lora. For example, if you want to use the model from step 5880, download `5880/ashisu_sahoto_mangakasantoassistantsanto.pt` as the embedding and `5880/ashisu_sahoto_mangakasantoassistantsanto.safetensors` for loading the Lora. By using both files together, you can generate images of the desired character. (A minimal loading sketch is given after the list below.)

**The best step we recommend is 5880**, with a score of 0.853.

The trigger words are:

1. `ashisu_sahoto_mangakasantoassistantsanto`
2. `long_hair, black_hair, brown_eyes`

This model is not recommended for the following groups, to whom we express our regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
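As a rough illustration (not an official workflow for this repository), the two files might be combined in a [diffusers](https://github.com/huggingface/diffusers) pipeline as sketched below. The local file paths are assumptions, and HCP-Diffusion's LoRA weights may need conversion before `load_lora_weights` can read them.

```python
# Hedged sketch: pairing the step-5880 embedding (.pt) with its LoRA (.safetensors)
# in a diffusers pipeline. Assumes both files were downloaded to ./5880/ and that
# the LoRA weights are in (or have been converted to) a diffusers-readable format.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The pt file acts as a textual-inversion embedding bound to the trigger word.
pipe.load_textual_inversion(
    "5880/ashisu_sahoto_mangakasantoassistantsanto.pt",
    token="ashisu_sahoto_mangakasantoassistantsanto",
)

# The safetensors file carries the LoRA weights.
pipe.load_lora_weights("5880/ashisu_sahoto_mangakasantoassistantsanto.safetensors")

prompt = "ashisu_sahoto_mangakasantoassistantsanto, long_hair, black_hair, brown_eyes"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("preview.png")
```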
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 6300 | 0.847 | [Download](6300/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-6300](6300/previews/pattern_1.png) | ![pattern_2-6300](6300/previews/pattern_2.png) | ![pattern_3-6300](6300/previews/pattern_3.png) | ![pattern_4-6300](6300/previews/pattern_4.png) | ![pattern_5-6300](6300/previews/pattern_5.png) | ![pattern_6-6300](6300/previews/pattern_6.png) | ![pattern_7-6300](6300/previews/pattern_7.png) | ![pattern_8-6300](6300/previews/pattern_8.png) | ![pattern_9-6300](6300/previews/pattern_9.png) | ![pattern_10-6300](6300/previews/pattern_10.png) | ![pattern_11-6300](6300/previews/pattern_11.png) | ![pattern_12-6300](6300/previews/pattern_12.png) | ![pattern_13-6300](6300/previews/pattern_13.png) | ![bikini-6300](6300/previews/bikini.png) | [<NSFW, click to see>](6300/previews/bondage.png) | ![free-6300](6300/previews/free.png) | ![maid-6300](6300/previews/maid.png) | ![miko-6300](6300/previews/miko.png) | [<NSFW, click to see>](6300/previews/nude.png) | [<NSFW, click to see>](6300/previews/nude2.png) | ![suit-6300](6300/previews/suit.png) | ![yukata-6300](6300/previews/yukata.png) | | **5880** | **0.853** | [**Download**](5880/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-5880](5880/previews/pattern_1.png) | ![pattern_2-5880](5880/previews/pattern_2.png) | ![pattern_3-5880](5880/previews/pattern_3.png) | ![pattern_4-5880](5880/previews/pattern_4.png) | ![pattern_5-5880](5880/previews/pattern_5.png) | ![pattern_6-5880](5880/previews/pattern_6.png) | ![pattern_7-5880](5880/previews/pattern_7.png) | ![pattern_8-5880](5880/previews/pattern_8.png) | ![pattern_9-5880](5880/previews/pattern_9.png) | ![pattern_10-5880](5880/previews/pattern_10.png) | ![pattern_11-5880](5880/previews/pattern_11.png) | ![pattern_12-5880](5880/previews/pattern_12.png) | ![pattern_13-5880](5880/previews/pattern_13.png) | ![bikini-5880](5880/previews/bikini.png) | [<NSFW, click to see>](5880/previews/bondage.png) | ![free-5880](5880/previews/free.png) | ![maid-5880](5880/previews/maid.png) | ![miko-5880](5880/previews/miko.png) | [<NSFW, click to see>](5880/previews/nude.png) | [<NSFW, click to 
see>](5880/previews/nude2.png) | ![suit-5880](5880/previews/suit.png) | ![yukata-5880](5880/previews/yukata.png) | | 5460 | 0.809 | [Download](5460/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-5460](5460/previews/pattern_1.png) | ![pattern_2-5460](5460/previews/pattern_2.png) | ![pattern_3-5460](5460/previews/pattern_3.png) | ![pattern_4-5460](5460/previews/pattern_4.png) | ![pattern_5-5460](5460/previews/pattern_5.png) | ![pattern_6-5460](5460/previews/pattern_6.png) | ![pattern_7-5460](5460/previews/pattern_7.png) | ![pattern_8-5460](5460/previews/pattern_8.png) | ![pattern_9-5460](5460/previews/pattern_9.png) | ![pattern_10-5460](5460/previews/pattern_10.png) | ![pattern_11-5460](5460/previews/pattern_11.png) | ![pattern_12-5460](5460/previews/pattern_12.png) | ![pattern_13-5460](5460/previews/pattern_13.png) | ![bikini-5460](5460/previews/bikini.png) | [<NSFW, click to see>](5460/previews/bondage.png) | ![free-5460](5460/previews/free.png) | ![maid-5460](5460/previews/maid.png) | ![miko-5460](5460/previews/miko.png) | [<NSFW, click to see>](5460/previews/nude.png) | [<NSFW, click to see>](5460/previews/nude2.png) | ![suit-5460](5460/previews/suit.png) | ![yukata-5460](5460/previews/yukata.png) | | 5040 | 0.825 | [Download](5040/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-5040](5040/previews/pattern_1.png) | ![pattern_2-5040](5040/previews/pattern_2.png) | ![pattern_3-5040](5040/previews/pattern_3.png) | ![pattern_4-5040](5040/previews/pattern_4.png) | ![pattern_5-5040](5040/previews/pattern_5.png) | ![pattern_6-5040](5040/previews/pattern_6.png) | ![pattern_7-5040](5040/previews/pattern_7.png) | ![pattern_8-5040](5040/previews/pattern_8.png) | ![pattern_9-5040](5040/previews/pattern_9.png) | ![pattern_10-5040](5040/previews/pattern_10.png) | ![pattern_11-5040](5040/previews/pattern_11.png) | ![pattern_12-5040](5040/previews/pattern_12.png) | ![pattern_13-5040](5040/previews/pattern_13.png) | ![bikini-5040](5040/previews/bikini.png) | [<NSFW, click to see>](5040/previews/bondage.png) | ![free-5040](5040/previews/free.png) | ![maid-5040](5040/previews/maid.png) | ![miko-5040](5040/previews/miko.png) | [<NSFW, click to see>](5040/previews/nude.png) | [<NSFW, click to see>](5040/previews/nude2.png) | ![suit-5040](5040/previews/suit.png) | ![yukata-5040](5040/previews/yukata.png) | | 4620 | 0.847 | [Download](4620/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-4620](4620/previews/pattern_1.png) | ![pattern_2-4620](4620/previews/pattern_2.png) | ![pattern_3-4620](4620/previews/pattern_3.png) | ![pattern_4-4620](4620/previews/pattern_4.png) | ![pattern_5-4620](4620/previews/pattern_5.png) | ![pattern_6-4620](4620/previews/pattern_6.png) | ![pattern_7-4620](4620/previews/pattern_7.png) | ![pattern_8-4620](4620/previews/pattern_8.png) | ![pattern_9-4620](4620/previews/pattern_9.png) | ![pattern_10-4620](4620/previews/pattern_10.png) | ![pattern_11-4620](4620/previews/pattern_11.png) | ![pattern_12-4620](4620/previews/pattern_12.png) | ![pattern_13-4620](4620/previews/pattern_13.png) | ![bikini-4620](4620/previews/bikini.png) | [<NSFW, click to see>](4620/previews/bondage.png) | ![free-4620](4620/previews/free.png) | ![maid-4620](4620/previews/maid.png) | ![miko-4620](4620/previews/miko.png) | [<NSFW, click to see>](4620/previews/nude.png) | [<NSFW, click to see>](4620/previews/nude2.png) | ![suit-4620](4620/previews/suit.png) | ![yukata-4620](4620/previews/yukata.png) | | 4200 | 0.776 | [Download](4200/ashisu_sahoto_mangakasantoassistantsanto.zip) | 
![pattern_1-4200](4200/previews/pattern_1.png) | ![pattern_2-4200](4200/previews/pattern_2.png) | ![pattern_3-4200](4200/previews/pattern_3.png) | ![pattern_4-4200](4200/previews/pattern_4.png) | ![pattern_5-4200](4200/previews/pattern_5.png) | ![pattern_6-4200](4200/previews/pattern_6.png) | ![pattern_7-4200](4200/previews/pattern_7.png) | ![pattern_8-4200](4200/previews/pattern_8.png) | ![pattern_9-4200](4200/previews/pattern_9.png) | ![pattern_10-4200](4200/previews/pattern_10.png) | ![pattern_11-4200](4200/previews/pattern_11.png) | ![pattern_12-4200](4200/previews/pattern_12.png) | ![pattern_13-4200](4200/previews/pattern_13.png) | ![bikini-4200](4200/previews/bikini.png) | [<NSFW, click to see>](4200/previews/bondage.png) | ![free-4200](4200/previews/free.png) | ![maid-4200](4200/previews/maid.png) | ![miko-4200](4200/previews/miko.png) | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) | ![suit-4200](4200/previews/suit.png) | ![yukata-4200](4200/previews/yukata.png) | | 3780 | 0.834 | [Download](3780/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-3780](3780/previews/pattern_1.png) | ![pattern_2-3780](3780/previews/pattern_2.png) | ![pattern_3-3780](3780/previews/pattern_3.png) | ![pattern_4-3780](3780/previews/pattern_4.png) | ![pattern_5-3780](3780/previews/pattern_5.png) | ![pattern_6-3780](3780/previews/pattern_6.png) | ![pattern_7-3780](3780/previews/pattern_7.png) | ![pattern_8-3780](3780/previews/pattern_8.png) | ![pattern_9-3780](3780/previews/pattern_9.png) | ![pattern_10-3780](3780/previews/pattern_10.png) | ![pattern_11-3780](3780/previews/pattern_11.png) | ![pattern_12-3780](3780/previews/pattern_12.png) | ![pattern_13-3780](3780/previews/pattern_13.png) | ![bikini-3780](3780/previews/bikini.png) | [<NSFW, click to see>](3780/previews/bondage.png) | ![free-3780](3780/previews/free.png) | ![maid-3780](3780/previews/maid.png) | ![miko-3780](3780/previews/miko.png) | [<NSFW, click to see>](3780/previews/nude.png) | [<NSFW, click to see>](3780/previews/nude2.png) | ![suit-3780](3780/previews/suit.png) | ![yukata-3780](3780/previews/yukata.png) | | 3360 | 0.801 | [Download](3360/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-3360](3360/previews/pattern_1.png) | ![pattern_2-3360](3360/previews/pattern_2.png) | ![pattern_3-3360](3360/previews/pattern_3.png) | ![pattern_4-3360](3360/previews/pattern_4.png) | ![pattern_5-3360](3360/previews/pattern_5.png) | ![pattern_6-3360](3360/previews/pattern_6.png) | ![pattern_7-3360](3360/previews/pattern_7.png) | ![pattern_8-3360](3360/previews/pattern_8.png) | ![pattern_9-3360](3360/previews/pattern_9.png) | ![pattern_10-3360](3360/previews/pattern_10.png) | ![pattern_11-3360](3360/previews/pattern_11.png) | ![pattern_12-3360](3360/previews/pattern_12.png) | ![pattern_13-3360](3360/previews/pattern_13.png) | ![bikini-3360](3360/previews/bikini.png) | [<NSFW, click to see>](3360/previews/bondage.png) | ![free-3360](3360/previews/free.png) | ![maid-3360](3360/previews/maid.png) | ![miko-3360](3360/previews/miko.png) | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) | ![suit-3360](3360/previews/suit.png) | ![yukata-3360](3360/previews/yukata.png) | | 2940 | 0.819 | [Download](2940/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-2940](2940/previews/pattern_1.png) | ![pattern_2-2940](2940/previews/pattern_2.png) | ![pattern_3-2940](2940/previews/pattern_3.png) | ![pattern_4-2940](2940/previews/pattern_4.png) | 
![pattern_5-2940](2940/previews/pattern_5.png) | ![pattern_6-2940](2940/previews/pattern_6.png) | ![pattern_7-2940](2940/previews/pattern_7.png) | ![pattern_8-2940](2940/previews/pattern_8.png) | ![pattern_9-2940](2940/previews/pattern_9.png) | ![pattern_10-2940](2940/previews/pattern_10.png) | ![pattern_11-2940](2940/previews/pattern_11.png) | ![pattern_12-2940](2940/previews/pattern_12.png) | ![pattern_13-2940](2940/previews/pattern_13.png) | ![bikini-2940](2940/previews/bikini.png) | [<NSFW, click to see>](2940/previews/bondage.png) | ![free-2940](2940/previews/free.png) | ![maid-2940](2940/previews/maid.png) | ![miko-2940](2940/previews/miko.png) | [<NSFW, click to see>](2940/previews/nude.png) | [<NSFW, click to see>](2940/previews/nude2.png) | ![suit-2940](2940/previews/suit.png) | ![yukata-2940](2940/previews/yukata.png) | | 2520 | 0.821 | [Download](2520/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-2520](2520/previews/pattern_1.png) | ![pattern_2-2520](2520/previews/pattern_2.png) | ![pattern_3-2520](2520/previews/pattern_3.png) | ![pattern_4-2520](2520/previews/pattern_4.png) | ![pattern_5-2520](2520/previews/pattern_5.png) | ![pattern_6-2520](2520/previews/pattern_6.png) | ![pattern_7-2520](2520/previews/pattern_7.png) | ![pattern_8-2520](2520/previews/pattern_8.png) | ![pattern_9-2520](2520/previews/pattern_9.png) | ![pattern_10-2520](2520/previews/pattern_10.png) | ![pattern_11-2520](2520/previews/pattern_11.png) | ![pattern_12-2520](2520/previews/pattern_12.png) | ![pattern_13-2520](2520/previews/pattern_13.png) | ![bikini-2520](2520/previews/bikini.png) | [<NSFW, click to see>](2520/previews/bondage.png) | ![free-2520](2520/previews/free.png) | ![maid-2520](2520/previews/maid.png) | ![miko-2520](2520/previews/miko.png) | [<NSFW, click to see>](2520/previews/nude.png) | [<NSFW, click to see>](2520/previews/nude2.png) | ![suit-2520](2520/previews/suit.png) | ![yukata-2520](2520/previews/yukata.png) | | 2100 | 0.791 | [Download](2100/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-2100](2100/previews/pattern_1.png) | ![pattern_2-2100](2100/previews/pattern_2.png) | ![pattern_3-2100](2100/previews/pattern_3.png) | ![pattern_4-2100](2100/previews/pattern_4.png) | ![pattern_5-2100](2100/previews/pattern_5.png) | ![pattern_6-2100](2100/previews/pattern_6.png) | ![pattern_7-2100](2100/previews/pattern_7.png) | ![pattern_8-2100](2100/previews/pattern_8.png) | ![pattern_9-2100](2100/previews/pattern_9.png) | ![pattern_10-2100](2100/previews/pattern_10.png) | ![pattern_11-2100](2100/previews/pattern_11.png) | ![pattern_12-2100](2100/previews/pattern_12.png) | ![pattern_13-2100](2100/previews/pattern_13.png) | ![bikini-2100](2100/previews/bikini.png) | [<NSFW, click to see>](2100/previews/bondage.png) | ![free-2100](2100/previews/free.png) | ![maid-2100](2100/previews/maid.png) | ![miko-2100](2100/previews/miko.png) | [<NSFW, click to see>](2100/previews/nude.png) | [<NSFW, click to see>](2100/previews/nude2.png) | ![suit-2100](2100/previews/suit.png) | ![yukata-2100](2100/previews/yukata.png) | | 1680 | 0.756 | [Download](1680/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-1680](1680/previews/pattern_1.png) | ![pattern_2-1680](1680/previews/pattern_2.png) | ![pattern_3-1680](1680/previews/pattern_3.png) | ![pattern_4-1680](1680/previews/pattern_4.png) | ![pattern_5-1680](1680/previews/pattern_5.png) | ![pattern_6-1680](1680/previews/pattern_6.png) | ![pattern_7-1680](1680/previews/pattern_7.png) | ![pattern_8-1680](1680/previews/pattern_8.png) | 
![pattern_9-1680](1680/previews/pattern_9.png) | ![pattern_10-1680](1680/previews/pattern_10.png) | ![pattern_11-1680](1680/previews/pattern_11.png) | ![pattern_12-1680](1680/previews/pattern_12.png) | ![pattern_13-1680](1680/previews/pattern_13.png) | ![bikini-1680](1680/previews/bikini.png) | [<NSFW, click to see>](1680/previews/bondage.png) | ![free-1680](1680/previews/free.png) | ![maid-1680](1680/previews/maid.png) | ![miko-1680](1680/previews/miko.png) | [<NSFW, click to see>](1680/previews/nude.png) | [<NSFW, click to see>](1680/previews/nude2.png) | ![suit-1680](1680/previews/suit.png) | ![yukata-1680](1680/previews/yukata.png) | | 1260 | 0.758 | [Download](1260/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-1260](1260/previews/pattern_1.png) | ![pattern_2-1260](1260/previews/pattern_2.png) | ![pattern_3-1260](1260/previews/pattern_3.png) | ![pattern_4-1260](1260/previews/pattern_4.png) | ![pattern_5-1260](1260/previews/pattern_5.png) | ![pattern_6-1260](1260/previews/pattern_6.png) | ![pattern_7-1260](1260/previews/pattern_7.png) | ![pattern_8-1260](1260/previews/pattern_8.png) | ![pattern_9-1260](1260/previews/pattern_9.png) | ![pattern_10-1260](1260/previews/pattern_10.png) | ![pattern_11-1260](1260/previews/pattern_11.png) | ![pattern_12-1260](1260/previews/pattern_12.png) | ![pattern_13-1260](1260/previews/pattern_13.png) | ![bikini-1260](1260/previews/bikini.png) | [<NSFW, click to see>](1260/previews/bondage.png) | ![free-1260](1260/previews/free.png) | ![maid-1260](1260/previews/maid.png) | ![miko-1260](1260/previews/miko.png) | [<NSFW, click to see>](1260/previews/nude.png) | [<NSFW, click to see>](1260/previews/nude2.png) | ![suit-1260](1260/previews/suit.png) | ![yukata-1260](1260/previews/yukata.png) | | 840 | 0.703 | [Download](840/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-840](840/previews/pattern_1.png) | ![pattern_2-840](840/previews/pattern_2.png) | ![pattern_3-840](840/previews/pattern_3.png) | ![pattern_4-840](840/previews/pattern_4.png) | ![pattern_5-840](840/previews/pattern_5.png) | ![pattern_6-840](840/previews/pattern_6.png) | ![pattern_7-840](840/previews/pattern_7.png) | ![pattern_8-840](840/previews/pattern_8.png) | ![pattern_9-840](840/previews/pattern_9.png) | ![pattern_10-840](840/previews/pattern_10.png) | ![pattern_11-840](840/previews/pattern_11.png) | ![pattern_12-840](840/previews/pattern_12.png) | ![pattern_13-840](840/previews/pattern_13.png) | ![bikini-840](840/previews/bikini.png) | [<NSFW, click to see>](840/previews/bondage.png) | ![free-840](840/previews/free.png) | ![maid-840](840/previews/maid.png) | ![miko-840](840/previews/miko.png) | [<NSFW, click to see>](840/previews/nude.png) | [<NSFW, click to see>](840/previews/nude2.png) | ![suit-840](840/previews/suit.png) | ![yukata-840](840/previews/yukata.png) | | 420 | 0.647 | [Download](420/ashisu_sahoto_mangakasantoassistantsanto.zip) | ![pattern_1-420](420/previews/pattern_1.png) | ![pattern_2-420](420/previews/pattern_2.png) | ![pattern_3-420](420/previews/pattern_3.png) | ![pattern_4-420](420/previews/pattern_4.png) | ![pattern_5-420](420/previews/pattern_5.png) | ![pattern_6-420](420/previews/pattern_6.png) | ![pattern_7-420](420/previews/pattern_7.png) | ![pattern_8-420](420/previews/pattern_8.png) | ![pattern_9-420](420/previews/pattern_9.png) | ![pattern_10-420](420/previews/pattern_10.png) | ![pattern_11-420](420/previews/pattern_11.png) | ![pattern_12-420](420/previews/pattern_12.png) | ![pattern_13-420](420/previews/pattern_13.png) | 
![bikini-420](420/previews/bikini.png) | [<NSFW, click to see>](420/previews/bondage.png) | ![free-420](420/previews/free.png) | ![maid-420](420/previews/maid.png) | ![miko-420](420/previews/miko.png) | [<NSFW, click to see>](420/previews/nude.png) | [<NSFW, click to see>](420/previews/nude2.png) | ![suit-420](420/previews/suit.png) | ![yukata-420](420/previews/yukata.png) |
MarcosMunoz95/ppo-CartPole-v1
MarcosMunoz95
2023-09-22T23:16:24Z
0
0
null
[ "tensorboard", "CartPole-v1", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-09-22T22:59:55Z
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 18.80 +/- 6.23
      name: mean_reward
      verified: false
---

# PPO Agent Playing CartPole-v1

This is a trained model of a PPO agent playing CartPole-v1.

# Hyperparameters
CyberHarem/fuwa_rinna_mangakasantoassistantsanto
CyberHarem
2023-09-22T23:16:22Z
0
1
null
[ "art", "text-to-image", "dataset:CyberHarem/fuwa_rinna_mangakasantoassistantsanto", "license:mit", "region:us" ]
text-to-image
2023-08-30T15:52:08Z
---
license: mit
datasets:
- CyberHarem/fuwa_rinna_mangakasantoassistantsanto
pipeline_tag: text-to-image
tags:
- art
---

# Lora of fuwa_rinna_mangakasantoassistantsanto

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file is used as an embedding, while the safetensors file is loaded as a Lora. For example, if you want to use the model from step 1560, download `1560/fuwa_rinna_mangakasantoassistantsanto.pt` as the embedding and `1560/fuwa_rinna_mangakasantoassistantsanto.safetensors` for loading the Lora. By using both files together, you can generate images of the desired character. (A download sketch is given after the list below.)

**The best step we recommend is 1560**, with a score of 0.867.

The trigger words are:

1. `fuwa_rinna_mangakasantoassistantsanto`
2. `brown_hair, long_hair, open_mouth, blush, purple_eyes, smile, closed_eyes`

This model is not recommended for the following groups, to whom we express our regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
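If you prefer to fetch the files programmatically, a minimal sketch using `hf_hub_download` from the `huggingface_hub` library follows; it assumes the per-step files live at the same repository paths that the download links in the table below point to.

```python
# Hedged sketch: fetching the recommended step-1560 embedding and LoRA files with
# huggingface_hub. Assumes the files sit at the per-step paths named in this card.
from huggingface_hub import hf_hub_download

repo = "CyberHarem/fuwa_rinna_mangakasantoassistantsanto"
step = 1560

embedding_path = hf_hub_download(
    repo_id=repo,
    filename=f"{step}/fuwa_rinna_mangakasantoassistantsanto.pt",
)
lora_path = hf_hub_download(
    repo_id=repo,
    filename=f"{step}/fuwa_rinna_mangakasantoassistantsanto.safetensors",
)
print("embedding:", embedding_path)
print("lora:", lora_path)
```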
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:---------------------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 3900 | 0.859 | [Download](3900/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](3900/previews/pattern_1.png) | ![pattern_2-3900](3900/previews/pattern_2.png) | ![pattern_3-3900](3900/previews/pattern_3.png) | ![pattern_4-3900](3900/previews/pattern_4.png) | ![pattern_5-3900](3900/previews/pattern_5.png) | [<NSFW, click to see>](3900/previews/pattern_6.png) | ![pattern_7-3900](3900/previews/pattern_7.png) | ![pattern_8-3900](3900/previews/pattern_8.png) | ![pattern_9-3900](3900/previews/pattern_9.png) | [<NSFW, click to see>](3900/previews/bikini.png) | [<NSFW, click to see>](3900/previews/bondage.png) | ![free-3900](3900/previews/free.png) | ![maid-3900](3900/previews/maid.png) | ![miko-3900](3900/previews/miko.png) | [<NSFW, click to see>](3900/previews/nude.png) | [<NSFW, click to see>](3900/previews/nude2.png) | ![suit-3900](3900/previews/suit.png) | ![yukata-3900](3900/previews/yukata.png) | | 3640 | 0.859 | [Download](3640/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](3640/previews/pattern_1.png) | ![pattern_2-3640](3640/previews/pattern_2.png) | ![pattern_3-3640](3640/previews/pattern_3.png) | ![pattern_4-3640](3640/previews/pattern_4.png) | ![pattern_5-3640](3640/previews/pattern_5.png) | [<NSFW, click to see>](3640/previews/pattern_6.png) | ![pattern_7-3640](3640/previews/pattern_7.png) | ![pattern_8-3640](3640/previews/pattern_8.png) | ![pattern_9-3640](3640/previews/pattern_9.png) | [<NSFW, click to see>](3640/previews/bikini.png) | [<NSFW, click to see>](3640/previews/bondage.png) | ![free-3640](3640/previews/free.png) | ![maid-3640](3640/previews/maid.png) | ![miko-3640](3640/previews/miko.png) | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) | ![suit-3640](3640/previews/suit.png) | ![yukata-3640](3640/previews/yukata.png) | | 3380 | 0.861 | [Download](3380/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](3380/previews/pattern_1.png) | ![pattern_2-3380](3380/previews/pattern_2.png) | ![pattern_3-3380](3380/previews/pattern_3.png) | ![pattern_4-3380](3380/previews/pattern_4.png) | ![pattern_5-3380](3380/previews/pattern_5.png) | [<NSFW, click to see>](3380/previews/pattern_6.png) | ![pattern_7-3380](3380/previews/pattern_7.png) | ![pattern_8-3380](3380/previews/pattern_8.png) | 
![pattern_9-3380](3380/previews/pattern_9.png) | [<NSFW, click to see>](3380/previews/bikini.png) | [<NSFW, click to see>](3380/previews/bondage.png) | ![free-3380](3380/previews/free.png) | ![maid-3380](3380/previews/maid.png) | ![miko-3380](3380/previews/miko.png) | [<NSFW, click to see>](3380/previews/nude.png) | [<NSFW, click to see>](3380/previews/nude2.png) | ![suit-3380](3380/previews/suit.png) | ![yukata-3380](3380/previews/yukata.png) | | 3120 | 0.846 | [Download](3120/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](3120/previews/pattern_1.png) | ![pattern_2-3120](3120/previews/pattern_2.png) | ![pattern_3-3120](3120/previews/pattern_3.png) | ![pattern_4-3120](3120/previews/pattern_4.png) | ![pattern_5-3120](3120/previews/pattern_5.png) | [<NSFW, click to see>](3120/previews/pattern_6.png) | ![pattern_7-3120](3120/previews/pattern_7.png) | ![pattern_8-3120](3120/previews/pattern_8.png) | ![pattern_9-3120](3120/previews/pattern_9.png) | [<NSFW, click to see>](3120/previews/bikini.png) | [<NSFW, click to see>](3120/previews/bondage.png) | ![free-3120](3120/previews/free.png) | ![maid-3120](3120/previews/maid.png) | ![miko-3120](3120/previews/miko.png) | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) | ![suit-3120](3120/previews/suit.png) | ![yukata-3120](3120/previews/yukata.png) | | 2860 | 0.810 | [Download](2860/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](2860/previews/pattern_1.png) | ![pattern_2-2860](2860/previews/pattern_2.png) | ![pattern_3-2860](2860/previews/pattern_3.png) | ![pattern_4-2860](2860/previews/pattern_4.png) | ![pattern_5-2860](2860/previews/pattern_5.png) | [<NSFW, click to see>](2860/previews/pattern_6.png) | ![pattern_7-2860](2860/previews/pattern_7.png) | ![pattern_8-2860](2860/previews/pattern_8.png) | ![pattern_9-2860](2860/previews/pattern_9.png) | [<NSFW, click to see>](2860/previews/bikini.png) | [<NSFW, click to see>](2860/previews/bondage.png) | ![free-2860](2860/previews/free.png) | ![maid-2860](2860/previews/maid.png) | ![miko-2860](2860/previews/miko.png) | [<NSFW, click to see>](2860/previews/nude.png) | [<NSFW, click to see>](2860/previews/nude2.png) | ![suit-2860](2860/previews/suit.png) | ![yukata-2860](2860/previews/yukata.png) | | 2600 | 0.847 | [Download](2600/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](2600/previews/pattern_1.png) | ![pattern_2-2600](2600/previews/pattern_2.png) | ![pattern_3-2600](2600/previews/pattern_3.png) | ![pattern_4-2600](2600/previews/pattern_4.png) | ![pattern_5-2600](2600/previews/pattern_5.png) | [<NSFW, click to see>](2600/previews/pattern_6.png) | ![pattern_7-2600](2600/previews/pattern_7.png) | ![pattern_8-2600](2600/previews/pattern_8.png) | ![pattern_9-2600](2600/previews/pattern_9.png) | [<NSFW, click to see>](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) | ![free-2600](2600/previews/free.png) | ![maid-2600](2600/previews/maid.png) | ![miko-2600](2600/previews/miko.png) | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) | ![suit-2600](2600/previews/suit.png) | ![yukata-2600](2600/previews/yukata.png) | | 2340 | 0.795 | [Download](2340/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](2340/previews/pattern_1.png) | ![pattern_2-2340](2340/previews/pattern_2.png) | ![pattern_3-2340](2340/previews/pattern_3.png) | ![pattern_4-2340](2340/previews/pattern_4.png) | 
![pattern_5-2340](2340/previews/pattern_5.png) | [<NSFW, click to see>](2340/previews/pattern_6.png) | ![pattern_7-2340](2340/previews/pattern_7.png) | ![pattern_8-2340](2340/previews/pattern_8.png) | ![pattern_9-2340](2340/previews/pattern_9.png) | [<NSFW, click to see>](2340/previews/bikini.png) | [<NSFW, click to see>](2340/previews/bondage.png) | ![free-2340](2340/previews/free.png) | ![maid-2340](2340/previews/maid.png) | ![miko-2340](2340/previews/miko.png) | [<NSFW, click to see>](2340/previews/nude.png) | [<NSFW, click to see>](2340/previews/nude2.png) | ![suit-2340](2340/previews/suit.png) | ![yukata-2340](2340/previews/yukata.png) | | 2080 | 0.775 | [Download](2080/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](2080/previews/pattern_1.png) | ![pattern_2-2080](2080/previews/pattern_2.png) | ![pattern_3-2080](2080/previews/pattern_3.png) | ![pattern_4-2080](2080/previews/pattern_4.png) | ![pattern_5-2080](2080/previews/pattern_5.png) | [<NSFW, click to see>](2080/previews/pattern_6.png) | ![pattern_7-2080](2080/previews/pattern_7.png) | ![pattern_8-2080](2080/previews/pattern_8.png) | ![pattern_9-2080](2080/previews/pattern_9.png) | [<NSFW, click to see>](2080/previews/bikini.png) | [<NSFW, click to see>](2080/previews/bondage.png) | ![free-2080](2080/previews/free.png) | ![maid-2080](2080/previews/maid.png) | ![miko-2080](2080/previews/miko.png) | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) | ![suit-2080](2080/previews/suit.png) | ![yukata-2080](2080/previews/yukata.png) | | 1820 | 0.835 | [Download](1820/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](1820/previews/pattern_1.png) | ![pattern_2-1820](1820/previews/pattern_2.png) | ![pattern_3-1820](1820/previews/pattern_3.png) | ![pattern_4-1820](1820/previews/pattern_4.png) | ![pattern_5-1820](1820/previews/pattern_5.png) | [<NSFW, click to see>](1820/previews/pattern_6.png) | ![pattern_7-1820](1820/previews/pattern_7.png) | ![pattern_8-1820](1820/previews/pattern_8.png) | ![pattern_9-1820](1820/previews/pattern_9.png) | [<NSFW, click to see>](1820/previews/bikini.png) | [<NSFW, click to see>](1820/previews/bondage.png) | ![free-1820](1820/previews/free.png) | ![maid-1820](1820/previews/maid.png) | ![miko-1820](1820/previews/miko.png) | [<NSFW, click to see>](1820/previews/nude.png) | [<NSFW, click to see>](1820/previews/nude2.png) | ![suit-1820](1820/previews/suit.png) | ![yukata-1820](1820/previews/yukata.png) | | **1560** | **0.867** | [**Download**](1560/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](1560/previews/pattern_1.png) | ![pattern_2-1560](1560/previews/pattern_2.png) | ![pattern_3-1560](1560/previews/pattern_3.png) | ![pattern_4-1560](1560/previews/pattern_4.png) | ![pattern_5-1560](1560/previews/pattern_5.png) | [<NSFW, click to see>](1560/previews/pattern_6.png) | ![pattern_7-1560](1560/previews/pattern_7.png) | ![pattern_8-1560](1560/previews/pattern_8.png) | ![pattern_9-1560](1560/previews/pattern_9.png) | [<NSFW, click to see>](1560/previews/bikini.png) | [<NSFW, click to see>](1560/previews/bondage.png) | ![free-1560](1560/previews/free.png) | ![maid-1560](1560/previews/maid.png) | ![miko-1560](1560/previews/miko.png) | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) | ![suit-1560](1560/previews/suit.png) | ![yukata-1560](1560/previews/yukata.png) | | 1300 | 0.858 | [Download](1300/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to 
see>](1300/previews/pattern_1.png) | ![pattern_2-1300](1300/previews/pattern_2.png) | ![pattern_3-1300](1300/previews/pattern_3.png) | ![pattern_4-1300](1300/previews/pattern_4.png) | ![pattern_5-1300](1300/previews/pattern_5.png) | [<NSFW, click to see>](1300/previews/pattern_6.png) | ![pattern_7-1300](1300/previews/pattern_7.png) | ![pattern_8-1300](1300/previews/pattern_8.png) | ![pattern_9-1300](1300/previews/pattern_9.png) | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/bondage.png) | ![free-1300](1300/previews/free.png) | ![maid-1300](1300/previews/maid.png) | ![miko-1300](1300/previews/miko.png) | [<NSFW, click to see>](1300/previews/nude.png) | [<NSFW, click to see>](1300/previews/nude2.png) | ![suit-1300](1300/previews/suit.png) | ![yukata-1300](1300/previews/yukata.png) | | 1040 | 0.792 | [Download](1040/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](1040/previews/pattern_1.png) | ![pattern_2-1040](1040/previews/pattern_2.png) | ![pattern_3-1040](1040/previews/pattern_3.png) | ![pattern_4-1040](1040/previews/pattern_4.png) | ![pattern_5-1040](1040/previews/pattern_5.png) | [<NSFW, click to see>](1040/previews/pattern_6.png) | ![pattern_7-1040](1040/previews/pattern_7.png) | ![pattern_8-1040](1040/previews/pattern_8.png) | ![pattern_9-1040](1040/previews/pattern_9.png) | [<NSFW, click to see>](1040/previews/bikini.png) | [<NSFW, click to see>](1040/previews/bondage.png) | ![free-1040](1040/previews/free.png) | ![maid-1040](1040/previews/maid.png) | ![miko-1040](1040/previews/miko.png) | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) | ![suit-1040](1040/previews/suit.png) | ![yukata-1040](1040/previews/yukata.png) | | 780 | 0.634 | [Download](780/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](780/previews/pattern_1.png) | ![pattern_2-780](780/previews/pattern_2.png) | ![pattern_3-780](780/previews/pattern_3.png) | ![pattern_4-780](780/previews/pattern_4.png) | ![pattern_5-780](780/previews/pattern_5.png) | [<NSFW, click to see>](780/previews/pattern_6.png) | ![pattern_7-780](780/previews/pattern_7.png) | ![pattern_8-780](780/previews/pattern_8.png) | ![pattern_9-780](780/previews/pattern_9.png) | [<NSFW, click to see>](780/previews/bikini.png) | [<NSFW, click to see>](780/previews/bondage.png) | ![free-780](780/previews/free.png) | ![maid-780](780/previews/maid.png) | ![miko-780](780/previews/miko.png) | [<NSFW, click to see>](780/previews/nude.png) | [<NSFW, click to see>](780/previews/nude2.png) | ![suit-780](780/previews/suit.png) | ![yukata-780](780/previews/yukata.png) | | 520 | 0.572 | [Download](520/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](520/previews/pattern_1.png) | ![pattern_2-520](520/previews/pattern_2.png) | ![pattern_3-520](520/previews/pattern_3.png) | ![pattern_4-520](520/previews/pattern_4.png) | ![pattern_5-520](520/previews/pattern_5.png) | [<NSFW, click to see>](520/previews/pattern_6.png) | ![pattern_7-520](520/previews/pattern_7.png) | ![pattern_8-520](520/previews/pattern_8.png) | ![pattern_9-520](520/previews/pattern_9.png) | [<NSFW, click to see>](520/previews/bikini.png) | [<NSFW, click to see>](520/previews/bondage.png) | ![free-520](520/previews/free.png) | ![maid-520](520/previews/maid.png) | ![miko-520](520/previews/miko.png) | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) | ![suit-520](520/previews/suit.png) | 
![yukata-520](520/previews/yukata.png) | | 260 | 0.147 | [Download](260/fuwa_rinna_mangakasantoassistantsanto.zip) | [<NSFW, click to see>](260/previews/pattern_1.png) | ![pattern_2-260](260/previews/pattern_2.png) | ![pattern_3-260](260/previews/pattern_3.png) | ![pattern_4-260](260/previews/pattern_4.png) | ![pattern_5-260](260/previews/pattern_5.png) | [<NSFW, click to see>](260/previews/pattern_6.png) | ![pattern_7-260](260/previews/pattern_7.png) | ![pattern_8-260](260/previews/pattern_8.png) | ![pattern_9-260](260/previews/pattern_9.png) | [<NSFW, click to see>](260/previews/bikini.png) | [<NSFW, click to see>](260/previews/bondage.png) | ![free-260](260/previews/free.png) | ![maid-260](260/previews/maid.png) | ![miko-260](260/previews/miko.png) | [<NSFW, click to see>](260/previews/nude.png) | [<NSFW, click to see>](260/previews/nude2.png) | ![suit-260](260/previews/suit.png) | ![yukata-260](260/previews/yukata.png) |
CyberHarem/ashisu_sahono_mangakasantoassistantsanto
CyberHarem
2023-09-22T23:07:44Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/ashisu_sahono_mangakasantoassistantsanto", "license:mit", "region:us" ]
text-to-image
2023-08-30T14:54:12Z
---
license: mit
datasets:
- CyberHarem/ashisu_sahono_mangakasantoassistantsanto
pipeline_tag: text-to-image
tags:
- art
---

# Lora of ashisu_sahono_mangakasantoassistantsanto

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file is used as an embedding, while the safetensors file is loaded as a Lora. For example, if you want to use the model from step 2400, download `2400/ashisu_sahono_mangakasantoassistantsanto.pt` as the embedding and `2400/ashisu_sahono_mangakasantoassistantsanto.safetensors` for loading the Lora. By using both files together, you can generate images of the desired character.

**The best step we recommend is 2400**, with a score of 0.736.

The trigger words are:

1. `ashisu_sahono_mangakasantoassistantsanto`
2. `black_hair, short_hair, brown_eyes, open_mouth, hair_bobbles, hair_ornament, side_ponytail, smile`

This model is not recommended for the following groups, to whom we express our regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 3000 | 0.693 | [Download](3000/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-3000](3000/previews/pattern_1.png) | ![pattern_2-3000](3000/previews/pattern_2.png) | ![pattern_3-3000](3000/previews/pattern_3.png) | ![pattern_4-3000](3000/previews/pattern_4.png) | ![pattern_5-3000](3000/previews/pattern_5.png) | ![pattern_6-3000](3000/previews/pattern_6.png) | ![bikini-3000](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) | ![free-3000](3000/previews/free.png) | ![maid-3000](3000/previews/maid.png) | ![miko-3000](3000/previews/miko.png) | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) | ![suit-3000](3000/previews/suit.png) | ![yukata-3000](3000/previews/yukata.png) | | 2800 | 0.702 | [Download](2800/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-2800](2800/previews/pattern_1.png) | ![pattern_2-2800](2800/previews/pattern_2.png) | ![pattern_3-2800](2800/previews/pattern_3.png) | ![pattern_4-2800](2800/previews/pattern_4.png) | ![pattern_5-2800](2800/previews/pattern_5.png) | ![pattern_6-2800](2800/previews/pattern_6.png) | ![bikini-2800](2800/previews/bikini.png) | [<NSFW, click to see>](2800/previews/bondage.png) | ![free-2800](2800/previews/free.png) | ![maid-2800](2800/previews/maid.png) | ![miko-2800](2800/previews/miko.png) | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) | ![suit-2800](2800/previews/suit.png) | ![yukata-2800](2800/previews/yukata.png) | | 2600 | 0.694 | [Download](2600/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-2600](2600/previews/pattern_1.png) | ![pattern_2-2600](2600/previews/pattern_2.png) | ![pattern_3-2600](2600/previews/pattern_3.png) | ![pattern_4-2600](2600/previews/pattern_4.png) | ![pattern_5-2600](2600/previews/pattern_5.png) | ![pattern_6-2600](2600/previews/pattern_6.png) | ![bikini-2600](2600/previews/bikini.png) | [<NSFW, click to see>](2600/previews/bondage.png) | ![free-2600](2600/previews/free.png) | ![maid-2600](2600/previews/maid.png) | ![miko-2600](2600/previews/miko.png) | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) | ![suit-2600](2600/previews/suit.png) | ![yukata-2600](2600/previews/yukata.png) | | **2400** | **0.736** | [**Download**](2400/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-2400](2400/previews/pattern_1.png) | ![pattern_2-2400](2400/previews/pattern_2.png) | ![pattern_3-2400](2400/previews/pattern_3.png) | 
![pattern_4-2400](2400/previews/pattern_4.png) | ![pattern_5-2400](2400/previews/pattern_5.png) | ![pattern_6-2400](2400/previews/pattern_6.png) | ![bikini-2400](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) | ![free-2400](2400/previews/free.png) | ![maid-2400](2400/previews/maid.png) | ![miko-2400](2400/previews/miko.png) | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) | ![suit-2400](2400/previews/suit.png) | ![yukata-2400](2400/previews/yukata.png) | | 2200 | 0.635 | [Download](2200/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-2200](2200/previews/pattern_1.png) | ![pattern_2-2200](2200/previews/pattern_2.png) | ![pattern_3-2200](2200/previews/pattern_3.png) | ![pattern_4-2200](2200/previews/pattern_4.png) | ![pattern_5-2200](2200/previews/pattern_5.png) | ![pattern_6-2200](2200/previews/pattern_6.png) | ![bikini-2200](2200/previews/bikini.png) | [<NSFW, click to see>](2200/previews/bondage.png) | ![free-2200](2200/previews/free.png) | ![maid-2200](2200/previews/maid.png) | ![miko-2200](2200/previews/miko.png) | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) | ![suit-2200](2200/previews/suit.png) | ![yukata-2200](2200/previews/yukata.png) | | 2000 | 0.623 | [Download](2000/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-2000](2000/previews/pattern_1.png) | ![pattern_2-2000](2000/previews/pattern_2.png) | ![pattern_3-2000](2000/previews/pattern_3.png) | ![pattern_4-2000](2000/previews/pattern_4.png) | ![pattern_5-2000](2000/previews/pattern_5.png) | ![pattern_6-2000](2000/previews/pattern_6.png) | ![bikini-2000](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) | ![free-2000](2000/previews/free.png) | ![maid-2000](2000/previews/maid.png) | ![miko-2000](2000/previews/miko.png) | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) | ![suit-2000](2000/previews/suit.png) | ![yukata-2000](2000/previews/yukata.png) | | 1800 | 0.546 | [Download](1800/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-1800](1800/previews/pattern_1.png) | ![pattern_2-1800](1800/previews/pattern_2.png) | ![pattern_3-1800](1800/previews/pattern_3.png) | ![pattern_4-1800](1800/previews/pattern_4.png) | ![pattern_5-1800](1800/previews/pattern_5.png) | ![pattern_6-1800](1800/previews/pattern_6.png) | ![bikini-1800](1800/previews/bikini.png) | [<NSFW, click to see>](1800/previews/bondage.png) | ![free-1800](1800/previews/free.png) | ![maid-1800](1800/previews/maid.png) | ![miko-1800](1800/previews/miko.png) | [<NSFW, click to see>](1800/previews/nude.png) | [<NSFW, click to see>](1800/previews/nude2.png) | ![suit-1800](1800/previews/suit.png) | ![yukata-1800](1800/previews/yukata.png) | | 1600 | 0.640 | [Download](1600/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-1600](1600/previews/pattern_1.png) | ![pattern_2-1600](1600/previews/pattern_2.png) | ![pattern_3-1600](1600/previews/pattern_3.png) | ![pattern_4-1600](1600/previews/pattern_4.png) | ![pattern_5-1600](1600/previews/pattern_5.png) | ![pattern_6-1600](1600/previews/pattern_6.png) | ![bikini-1600](1600/previews/bikini.png) | [<NSFW, click to see>](1600/previews/bondage.png) | ![free-1600](1600/previews/free.png) | ![maid-1600](1600/previews/maid.png) | ![miko-1600](1600/previews/miko.png) | [<NSFW, click to see>](1600/previews/nude.png) | [<NSFW, click to see>](1600/previews/nude2.png) | 
![suit-1600](1600/previews/suit.png) | ![yukata-1600](1600/previews/yukata.png) | | 1400 | 0.663 | [Download](1400/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-1400](1400/previews/pattern_1.png) | ![pattern_2-1400](1400/previews/pattern_2.png) | ![pattern_3-1400](1400/previews/pattern_3.png) | ![pattern_4-1400](1400/previews/pattern_4.png) | ![pattern_5-1400](1400/previews/pattern_5.png) | ![pattern_6-1400](1400/previews/pattern_6.png) | ![bikini-1400](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/bondage.png) | ![free-1400](1400/previews/free.png) | ![maid-1400](1400/previews/maid.png) | ![miko-1400](1400/previews/miko.png) | [<NSFW, click to see>](1400/previews/nude.png) | [<NSFW, click to see>](1400/previews/nude2.png) | ![suit-1400](1400/previews/suit.png) | ![yukata-1400](1400/previews/yukata.png) | | 1200 | 0.577 | [Download](1200/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-1200](1200/previews/pattern_1.png) | ![pattern_2-1200](1200/previews/pattern_2.png) | ![pattern_3-1200](1200/previews/pattern_3.png) | ![pattern_4-1200](1200/previews/pattern_4.png) | ![pattern_5-1200](1200/previews/pattern_5.png) | ![pattern_6-1200](1200/previews/pattern_6.png) | ![bikini-1200](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/bondage.png) | ![free-1200](1200/previews/free.png) | ![maid-1200](1200/previews/maid.png) | ![miko-1200](1200/previews/miko.png) | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) | ![suit-1200](1200/previews/suit.png) | ![yukata-1200](1200/previews/yukata.png) | | 1000 | 0.631 | [Download](1000/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-1000](1000/previews/pattern_1.png) | ![pattern_2-1000](1000/previews/pattern_2.png) | ![pattern_3-1000](1000/previews/pattern_3.png) | ![pattern_4-1000](1000/previews/pattern_4.png) | ![pattern_5-1000](1000/previews/pattern_5.png) | ![pattern_6-1000](1000/previews/pattern_6.png) | ![bikini-1000](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) | ![free-1000](1000/previews/free.png) | ![maid-1000](1000/previews/maid.png) | ![miko-1000](1000/previews/miko.png) | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) | ![suit-1000](1000/previews/suit.png) | ![yukata-1000](1000/previews/yukata.png) | | 800 | 0.464 | [Download](800/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-800](800/previews/pattern_1.png) | ![pattern_2-800](800/previews/pattern_2.png) | ![pattern_3-800](800/previews/pattern_3.png) | ![pattern_4-800](800/previews/pattern_4.png) | ![pattern_5-800](800/previews/pattern_5.png) | ![pattern_6-800](800/previews/pattern_6.png) | ![bikini-800](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/bondage.png) | ![free-800](800/previews/free.png) | ![maid-800](800/previews/maid.png) | ![miko-800](800/previews/miko.png) | [<NSFW, click to see>](800/previews/nude.png) | [<NSFW, click to see>](800/previews/nude2.png) | ![suit-800](800/previews/suit.png) | ![yukata-800](800/previews/yukata.png) | | 600 | 0.453 | [Download](600/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-600](600/previews/pattern_1.png) | ![pattern_2-600](600/previews/pattern_2.png) | ![pattern_3-600](600/previews/pattern_3.png) | ![pattern_4-600](600/previews/pattern_4.png) | ![pattern_5-600](600/previews/pattern_5.png) | ![pattern_6-600](600/previews/pattern_6.png) | ![bikini-600](600/previews/bikini.png) | [<NSFW, click to 
see>](600/previews/bondage.png) | ![free-600](600/previews/free.png) | ![maid-600](600/previews/maid.png) | ![miko-600](600/previews/miko.png) | [<NSFW, click to see>](600/previews/nude.png) | [<NSFW, click to see>](600/previews/nude2.png) | ![suit-600](600/previews/suit.png) | ![yukata-600](600/previews/yukata.png) | | 400 | 0.385 | [Download](400/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-400](400/previews/pattern_1.png) | ![pattern_2-400](400/previews/pattern_2.png) | ![pattern_3-400](400/previews/pattern_3.png) | ![pattern_4-400](400/previews/pattern_4.png) | ![pattern_5-400](400/previews/pattern_5.png) | ![pattern_6-400](400/previews/pattern_6.png) | ![bikini-400](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/bondage.png) | ![free-400](400/previews/free.png) | ![maid-400](400/previews/maid.png) | ![miko-400](400/previews/miko.png) | [<NSFW, click to see>](400/previews/nude.png) | [<NSFW, click to see>](400/previews/nude2.png) | ![suit-400](400/previews/suit.png) | ![yukata-400](400/previews/yukata.png) | | 200 | 0.192 | [Download](200/ashisu_sahono_mangakasantoassistantsanto.zip) | ![pattern_1-200](200/previews/pattern_1.png) | ![pattern_2-200](200/previews/pattern_2.png) | ![pattern_3-200](200/previews/pattern_3.png) | ![pattern_4-200](200/previews/pattern_4.png) | ![pattern_5-200](200/previews/pattern_5.png) | ![pattern_6-200](200/previews/pattern_6.png) | ![bikini-200](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/bondage.png) | ![free-200](200/previews/free.png) | ![maid-200](200/previews/maid.png) | ![miko-200](200/previews/miko.png) | [<NSFW, click to see>](200/previews/nude.png) | [<NSFW, click to see>](200/previews/nude2.png) | ![suit-200](200/previews/suit.png) | ![yukata-200](200/previews/yukata.png) |
OpenDILabCommunity/BipedalWalker-v3-TD3
OpenDILabCommunity
2023-09-22T22:40:13Z
0
0
pytorch
[ "pytorch", "deep-reinforcement-learning", "reinforcement-learning", "DI-engine", "BipedalWalker-v3", "en", "license:apache-2.0", "region:us" ]
reinforcement-learning
2023-05-16T12:11:25Z
--- language: en license: apache-2.0 library_name: pytorch tags: - deep-reinforcement-learning - reinforcement-learning - DI-engine - BipedalWalker-v3 benchmark_name: OpenAI/Gym/Box2d task_name: BipedalWalker-v3 pipeline_tag: reinforcement-learning model-index: - name: TD3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: OpenAI/Gym/Box2d-BipedalWalker-v3 type: OpenAI/Gym/Box2d-BipedalWalker-v3 metrics: - type: mean_reward value: 320.88 +/- 0.56 name: mean_reward --- # Play **BipedalWalker-v3** with **TD3** Policy ## Model Description <!-- Provide a longer summary of what this model is. --> This is a simple **TD3** implementation to OpenAI/Gym/Box2d **BipedalWalker-v3** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo). **DI-engine** is a python library for solving general decision intelligence problems, which is based on implementations of reinforcement learning framework using PyTorch or JAX. This library aims to standardize the reinforcement learning framework across different algorithms, benchmarks, environments, and to support both academic researches and prototype applications. Besides, self-customized training pipelines and applications are supported by reusing different abstraction levels of DI-engine reinforcement learning framework. ## Model Usage ### Install the Dependencies <details close> <summary>(Click for Details)</summary> ```shell # install huggingface_ding git clone https://github.com/opendilab/huggingface_ding.git pip3 install -e ./huggingface_ding/ # install environment dependencies if needed pip3 install DI-engine[common_env] ``` </details> ### Git Clone from Huggingface and Run the Model <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import TD3Agent from ding.config import Config from easydict import EasyDict import torch # Pull model from files which are git cloned from huggingface policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu")) cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict) # Instantiate the agent agent = TD3Agent( env_id="BipedalWalker-v3", exp_name="BipedalWalker-v3-TD3", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ### Run Model by Using Huggingface_ding <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import TD3Agent from huggingface_ding import pull_model_from_hub # Pull model from Hugggingface hub policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/BipedalWalker-v3-TD3") # Instantiate the agent agent = TD3Agent( env_id="BipedalWalker-v3", exp_name="BipedalWalker-v3-TD3", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ## Model Training ### Train the Model and Push to Huggingface_hub <details close> <summary>(Click for Details)</summary> ```shell #Training Your Own Agent python3 -u train.py ``` **train.py** ```python from ding.bonus import TD3Agent from huggingface_ding import push_model_to_hub # Instantiate the agent agent = TD3Agent(env_id="BipedalWalker-v3", 
exp_name="BipedalWalker-v3-TD3") # Train the agent return_ = agent.train(step=int(200000)) # Push model to huggingface hub push_model_to_hub( agent=agent.best, env_name="OpenAI/Gym/Box2d", task_name="BipedalWalker-v3", algo_name="TD3", wandb_url=return_.wandb_url, github_repo_url="https://github.com/opendilab/DI-engine", github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/td3.html", github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/bipedalwalker.html", installation_guide="pip3 install DI-engine[common_env]", usage_file_by_git_clone="./td3/bipedalwalker_td3_deploy.py", usage_file_by_huggingface_ding="./td3/bipedalwalker_td3_download.py", train_file="./td3/bipedalwalker_td3.py", repo_id="OpenDILabCommunity/BipedalWalker-v3-TD3", create_repo=False ) ``` </details> **Configuration** <details close> <summary>(Click for Details)</summary> ```python exp_config = { 'env': { 'manager': { 'episode_num': float("inf"), 'max_retry': 1, 'retry_type': 'reset', 'auto_reset': True, 'step_timeout': None, 'reset_timeout': None, 'retry_waiting_time': 0.1, 'cfg_type': 'BaseEnvManagerDict' }, 'stop_value': 10000000000, 'n_evaluator_episode': 5, 'env_id': 'BipedalWalker-v3', 'collector_env_num': 8, 'evaluator_env_num': 5, 'act_scale': True, 'rew_clip': True }, 'policy': { 'model': { 'twin_critic': True, 'obs_shape': 24, 'action_shape': 4, 'action_space': 'regression', 'actor_head_hidden_size': 400, 'critic_head_hidden_size': 400 }, 'learn': { 'learner': { 'train_iterations': 1000000000, 'dataloader': { 'num_workers': 0 }, 'log_policy': True, 'hook': { 'load_ckpt_before_run': '', 'log_show_after_iter': 1000, 'save_ckpt_after_iter': 10000, 'save_ckpt_after_run': True }, 'cfg_type': 'BaseLearnerDict' }, 'update_per_collect': 64, 'batch_size': 256, 'learning_rate_actor': 0.0003, 'learning_rate_critic': 0.0003, 'ignore_done': False, 'target_theta': 0.005, 'discount_factor': 0.99, 'actor_update_freq': 2, 'noise': True, 'noise_sigma': 0.2, 'noise_range': { 'min': -0.5, 'max': 0.5 } }, 'collect': { 'collector': {}, 'unroll_len': 1, 'noise_sigma': 0.1, 'n_sample': 64 }, 'eval': { 'evaluator': { 'eval_freq': 5000, 'render': { 'render_freq': -1, 'mode': 'train_iter' }, 'figure_path': None, 'cfg_type': 'InteractionSerialEvaluatorDict', 'stop_value': 10000000000, 'n_episode': 5 } }, 'other': { 'replay_buffer': { 'replay_buffer_size': 300000 } }, 'on_policy': False, 'cuda': True, 'multi_gpu': False, 'bp_update_sync': True, 'traj_len_inf': False, 'type': 'td3', 'priority': False, 'priority_IS_weight': False, 'random_collect_size': 10000, 'transition_with_policy_data': False, 'action_space': 'continuous', 'reward_batch_norm': False, 'multi_agent': False, 'cfg_type': 'TD3PolicyDict' }, 'exp_name': 'BipedalWalker-v3-TD3', 'seed': 0, 'wandb_logger': { 'gradient_logger': True, 'video_logger': True, 'plot_logger': True, 'action_logger': True, 'return_logger': False } } ``` </details> **Training Procedure** <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/BipedalWalker-v3-TD3) ## Model Information <!-- Provide the basic links for the model. 
--> - **Github Repository:** [repo link](https://github.com/opendilab/DI-engine) - **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/td3.html) - **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/BipedalWalker-v3-TD3/blob/main/policy_config.py) - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/BipedalWalker-v3-TD3/blob/main/replay.mp4) <!-- Provide the size information for the model. --> - **Parameters total size:** 4037.55 KB - **Last Update Date:** 2023-09-22 ## Environments <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. --> - **Benchmark:** OpenAI/Gym/Box2d - **Task:** BipedalWalker-v3 - **Gym version:** 0.25.1 - **DI-engine version:** v0.4.9 - **PyTorch version:** 2.0.1+cu117 - **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/bipedalwalker.html)
Chega/distill-scibert_scivocab_uncased
Chega
2023-09-22T22:23:50Z
125
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "biology", "fill-mask", "en", "dataset:arxiv_dataset", "dataset:pubmed", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2023-09-22T22:18:09Z
--- license: apache-2.0 datasets: - arxiv_dataset - pubmed language: - en pipeline_tag: fill-mask tags: - biology ---
OpenDILabCommunity/LunarLander-v2-TD3
OpenDILabCommunity
2023-09-22T22:10:58Z
0
0
pytorch
[ "pytorch", "deep-reinforcement-learning", "reinforcement-learning", "DI-engine", "LunarLander-v2", "en", "license:apache-2.0", "region:us" ]
reinforcement-learning
2023-04-17T08:35:24Z
--- language: en license: apache-2.0 library_name: pytorch tags: - deep-reinforcement-learning - reinforcement-learning - DI-engine - LunarLander-v2 benchmark_name: OpenAI/Gym/Box2d task_name: LunarLander-v2 pipeline_tag: reinforcement-learning model-index: - name: TD3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: OpenAI/Gym/Box2d-LunarLander-v2 type: OpenAI/Gym/Box2d-LunarLander-v2 metrics: - type: mean_reward value: 235.02 +/- 78.67 name: mean_reward --- # Play **LunarLander-v2** with **TD3** Policy ## Model Description <!-- Provide a longer summary of what this model is. --> This is a simple **TD3** implementation to OpenAI/Gym/Box2d **LunarLander-v2** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo). **DI-engine** is a python library for solving general decision intelligence problems, which is based on implementations of reinforcement learning framework using PyTorch or JAX. This library aims to standardize the reinforcement learning framework across different algorithms, benchmarks, environments, and to support both academic researches and prototype applications. Besides, self-customized training pipelines and applications are supported by reusing different abstraction levels of DI-engine reinforcement learning framework. ## Model Usage ### Install the Dependencies <details close> <summary>(Click for Details)</summary> ```shell # install huggingface_ding git clone https://github.com/opendilab/huggingface_ding.git pip3 install -e ./huggingface_ding/ # install environment dependencies if needed pip3 install DI-engine[common_env] ``` </details> ### Git Clone from Huggingface and Run the Model <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import TD3Agent from ding.config import Config from easydict import EasyDict import torch # Pull model from files which are git cloned from huggingface policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu")) cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict) # Instantiate the agent agent = TD3Agent( env_id="LunarLanderContinuous-v2", exp_name="LunarLander-v2-TD3", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ### Run Model by Using Huggingface_ding <details close> <summary>(Click for Details)</summary> ```shell # running with trained model python3 -u run.py ``` **run.py** ```python from ding.bonus import TD3Agent from huggingface_ding import pull_model_from_hub # Pull model from Hugggingface hub policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/LunarLander-v2-TD3") # Instantiate the agent agent = TD3Agent( env_id="LunarLanderContinuous-v2", exp_name="LunarLander-v2-TD3", cfg=cfg.exp_config, policy_state_dict=policy_state_dict ) # Continue training agent.train(step=5000) # Render the new agent performance agent.deploy(enable_save_replay=True) ``` </details> ## Model Training ### Train the Model and Push to Huggingface_hub <details close> <summary>(Click for Details)</summary> ```shell #Training Your Own Agent python3 -u train.py ``` **train.py** ```python from ding.bonus import TD3Agent from huggingface_ding import push_model_to_hub # Instantiate the agent agent = TD3Agent(env_id="LunarLanderContinuous-v2", 
exp_name="LunarLander-v2-TD3") # Train the agent return_ = agent.train(step=int(4000000), collector_env_num=4, evaluator_env_num=4) # Push model to huggingface hub push_model_to_hub( agent=agent.best, env_name="OpenAI/Gym/Box2d", task_name="LunarLander-v2", algo_name="TD3", wandb_url=return_.wandb_url, github_repo_url="https://github.com/opendilab/DI-engine", github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/td3.html", github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html", installation_guide="pip3 install DI-engine[common_env]", usage_file_by_git_clone="./td3/lunarlander_td3_deploy.py", usage_file_by_huggingface_ding="./td3/lunarlander_td3_download.py", train_file="./td3/lunarlander_td3.py", repo_id="OpenDILabCommunity/LunarLander-v2-TD3", create_repo=False ) ``` </details> **Configuration** <details close> <summary>(Click for Details)</summary> ```python exp_config = { 'env': { 'manager': { 'episode_num': float("inf"), 'max_retry': 1, 'retry_type': 'reset', 'auto_reset': True, 'step_timeout': None, 'reset_timeout': None, 'retry_waiting_time': 0.1, 'cfg_type': 'BaseEnvManagerDict' }, 'stop_value': 240, 'n_evaluator_episode': 8, 'env_id': 'LunarLanderContinuous-v2', 'collector_env_num': 4, 'evaluator_env_num': 8, 'act_scale': True }, 'policy': { 'model': { 'twin_critic': True, 'obs_shape': 8, 'action_shape': 2, 'action_space': 'regression' }, 'learn': { 'learner': { 'train_iterations': 1000000000, 'dataloader': { 'num_workers': 0 }, 'log_policy': True, 'hook': { 'load_ckpt_before_run': '', 'log_show_after_iter': 100, 'save_ckpt_after_iter': 10000, 'save_ckpt_after_run': True }, 'cfg_type': 'BaseLearnerDict' }, 'update_per_collect': 256, 'batch_size': 256, 'learning_rate_actor': 0.0003, 'learning_rate_critic': 0.001, 'ignore_done': False, 'target_theta': 0.005, 'discount_factor': 0.99, 'actor_update_freq': 2, 'noise': True, 'noise_sigma': 0.1, 'noise_range': { 'min': -0.5, 'max': 0.5 } }, 'collect': { 'collector': {}, 'unroll_len': 1, 'noise_sigma': 0.1, 'n_sample': 256 }, 'eval': { 'evaluator': { 'eval_freq': 1000, 'render': { 'render_freq': -1, 'mode': 'train_iter' }, 'figure_path': None, 'cfg_type': 'InteractionSerialEvaluatorDict', 'stop_value': 240, 'n_episode': 8 } }, 'other': { 'replay_buffer': { 'replay_buffer_size': 100000 } }, 'on_policy': False, 'cuda': True, 'multi_gpu': False, 'bp_update_sync': True, 'traj_len_inf': False, 'type': 'td3', 'priority': False, 'priority_IS_weight': False, 'random_collect_size': 10000, 'transition_with_policy_data': False, 'action_space': 'continuous', 'reward_batch_norm': False, 'multi_agent': False, 'cfg_type': 'TD3PolicyDict' }, 'exp_name': 'LunarLander-v2-TD3', 'seed': 0, 'wandb_logger': { 'gradient_logger': True, 'video_logger': True, 'plot_logger': True, 'action_logger': True, 'return_logger': False } } ``` </details> **Training Procedure** <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/LunarLander-v2-TD3) ## Model Information <!-- Provide the basic links for the model. 
--> - **Github Repository:** [repo link](https://github.com/opendilab/DI-engine) - **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/td3.html) - **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-TD3/blob/main/policy_config.py) - **Demo:** [video](https://huggingface.co/OpenDILabCommunity/LunarLander-v2-TD3/blob/main/replay.mp4) <!-- Provide the size information for the model. --> - **Parameters total size:** 115.03 KB - **Last Update Date:** 2023-09-22 ## Environments <!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. --> - **Benchmark:** OpenAI/Gym/Box2d - **Task:** LunarLander-v2 - **Gym version:** 0.25.1 - **DI-engine version:** v0.4.9 - **PyTorch version:** 2.0.1+cu117 - **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/lunarlander.html)
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-22_ent_75_gates
jordyvl
2023-09-22T22:03:49Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T14:07:07Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-22_ent_75_gates results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-22_ent_75_gates This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9971 - Accuracy: 0.7275 - Exit 0 Accuracy: 0.0625 - Exit 1 Accuracy: 0.055 - Exit 2 Accuracy: 0.0525 - Exit 3 Accuracy: 0.0625 - Exit 4 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 20 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 12 - total_train_batch_size: 240 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.9 | 3 | 2.7374 | 0.1225 | 0.065 | 0.065 | 0.0825 | 0.0625 | 0.0625 | | No log | 1.9 | 6 | 2.6769 | 0.1525 | 0.0625 | 0.06 | 0.09 | 0.0625 | 0.0625 | | No log | 2.9 | 9 | 2.6398 | 0.1925 | 0.0625 | 0.0575 | 0.0875 | 0.0625 | 0.0625 | | No log | 3.9 | 12 | 2.6077 | 0.21 | 0.065 | 0.0525 | 0.085 | 0.0625 | 0.0625 | | No log | 4.9 | 15 | 2.5432 | 0.235 | 0.065 | 0.05 | 0.085 | 0.0625 | 0.0625 | | No log | 5.9 | 18 | 2.4882 | 0.2525 | 0.065 | 0.05 | 0.085 | 0.0625 | 0.0625 | | No log | 6.9 | 21 | 2.4322 | 0.27 | 0.065 | 0.04 | 0.085 | 0.0625 | 0.0625 | | No log | 7.9 | 24 | 2.3583 | 0.2925 | 0.065 | 0.0425 | 0.09 | 0.0625 | 0.0625 | | No log | 8.9 | 27 | 2.3032 | 0.3175 | 0.065 | 0.04 | 0.0925 | 0.0625 | 0.0625 | | No log | 9.9 | 30 | 2.2374 | 0.34 | 0.0675 | 0.04 | 0.0825 | 0.0625 | 0.0625 | | No log | 10.9 | 33 | 2.1630 | 0.3775 | 0.0675 | 0.0425 | 0.0775 | 0.0625 | 0.0625 | | No log | 11.9 | 36 | 2.0998 | 0.395 | 0.0675 | 0.045 | 0.07 | 0.0625 | 0.0625 | | No log | 12.9 | 39 | 2.0223 | 0.43 | 0.0675 | 0.045 | 0.0725 | 0.0625 | 0.0625 | | No log | 13.9 | 42 | 1.9143 | 0.4825 | 0.065 | 0.05 | 0.07 | 0.0625 | 0.0625 | | No log | 14.9 | 45 | 1.8329 | 0.5275 | 0.065 | 0.05 | 0.0725 | 0.0625 | 0.0625 | | No log | 15.9 | 48 | 1.7532 | 0.545 | 0.0625 | 0.05 | 0.065 | 0.0625 | 0.0625 | | No log | 16.9 | 51 | 1.6447 | 0.58 | 0.0625 | 0.05 | 0.065 | 0.0625 | 0.0625 | | No log | 17.9 | 54 | 1.5863 | 0.59 | 0.065 | 0.0475 | 0.065 | 0.0625 | 0.0625 | | No log | 18.9 | 57 | 1.5150 | 0.62 | 0.065 | 0.0475 | 0.0725 | 0.0625 | 0.0625 | | No log | 19.9 | 60 | 1.4389 | 0.6325 | 0.065 | 0.0475 | 0.0725 | 0.0625 | 0.0625 | | No log | 20.9 | 63 | 1.3834 | 0.6575 | 0.0625 | 0.0475 | 0.0725 | 0.0625 | 0.0625 | | No log | 21.9 | 66 | 1.3297 | 0.675 | 0.0625 | 0.0475 | 0.075 | 0.0625 | 0.0625 | | No log | 22.9 | 69 | 1.2805 | 0.6775 | 0.0625 | 0.0475 | 0.075 | 0.0625 | 0.0625 | | No 
log | 23.9 | 72 | 1.2448 | 0.6825 | 0.0625 | 0.05 | 0.07 | 0.0625 | 0.0625 | | No log | 24.9 | 75 | 1.1965 | 0.695 | 0.0625 | 0.05 | 0.0675 | 0.0625 | 0.0625 | | No log | 25.9 | 78 | 1.1793 | 0.6925 | 0.0625 | 0.0525 | 0.0675 | 0.0625 | 0.0625 | | No log | 26.9 | 81 | 1.1513 | 0.7 | 0.0625 | 0.055 | 0.0675 | 0.0625 | 0.0625 | | No log | 27.9 | 84 | 1.1275 | 0.6975 | 0.0625 | 0.055 | 0.07 | 0.0625 | 0.0625 | | No log | 28.9 | 87 | 1.0880 | 0.7225 | 0.0625 | 0.055 | 0.0675 | 0.0625 | 0.0625 | | No log | 29.9 | 90 | 1.0749 | 0.7175 | 0.065 | 0.055 | 0.0675 | 0.0625 | 0.0625 | | No log | 30.9 | 93 | 1.0716 | 0.7175 | 0.0625 | 0.055 | 0.0675 | 0.0625 | 0.0625 | | No log | 31.9 | 96 | 1.0496 | 0.72 | 0.0625 | 0.055 | 0.0675 | 0.0625 | 0.0625 | | No log | 32.9 | 99 | 1.0253 | 0.7275 | 0.065 | 0.055 | 0.0675 | 0.0625 | 0.0625 | | No log | 33.9 | 102 | 1.0305 | 0.7175 | 0.065 | 0.055 | 0.0675 | 0.0625 | 0.0625 | | No log | 34.9 | 105 | 1.0273 | 0.73 | 0.065 | 0.055 | 0.0675 | 0.0625 | 0.0625 | | No log | 35.9 | 108 | 1.0072 | 0.73 | 0.065 | 0.055 | 0.07 | 0.0625 | 0.0625 | | No log | 36.9 | 111 | 1.0183 | 0.7225 | 0.0625 | 0.055 | 0.07 | 0.0625 | 0.0625 | | No log | 37.9 | 114 | 1.0079 | 0.725 | 0.0625 | 0.055 | 0.065 | 0.0625 | 0.0625 | | No log | 38.9 | 117 | 1.0086 | 0.7375 | 0.0625 | 0.055 | 0.065 | 0.0625 | 0.0625 | | No log | 39.9 | 120 | 0.9931 | 0.72 | 0.0625 | 0.055 | 0.0625 | 0.0625 | 0.0625 | | No log | 40.9 | 123 | 0.9968 | 0.7325 | 0.0625 | 0.055 | 0.0625 | 0.0625 | 0.0625 | | No log | 41.9 | 126 | 0.9848 | 0.73 | 0.0625 | 0.055 | 0.0625 | 0.0625 | 0.0625 | | No log | 42.9 | 129 | 0.9983 | 0.7325 | 0.06 | 0.055 | 0.0625 | 0.0625 | 0.0625 | | No log | 43.9 | 132 | 0.9988 | 0.725 | 0.06 | 0.055 | 0.0625 | 0.0625 | 0.0625 | | No log | 44.9 | 135 | 0.9941 | 0.7275 | 0.0625 | 0.055 | 0.0625 | 0.0625 | 0.0625 | | No log | 45.9 | 138 | 1.0053 | 0.72 | 0.0625 | 0.055 | 0.065 | 0.0625 | 0.0625 | | No log | 46.9 | 141 | 0.9904 | 0.725 | 0.0625 | 0.055 | 0.0625 | 0.0625 | 0.0625 | | No log | 47.9 | 144 | 0.9907 | 0.7225 | 0.0625 | 0.055 | 0.0625 | 0.0625 | 0.0625 | | No log | 48.9 | 147 | 0.9883 | 0.7325 | 0.06 | 0.055 | 0.06 | 0.0625 | 0.0625 | | No log | 49.9 | 150 | 0.9927 | 0.7325 | 0.0625 | 0.055 | 0.0575 | 0.0625 | 0.0625 | | No log | 50.9 | 153 | 0.9981 | 0.73 | 0.0625 | 0.055 | 0.055 | 0.0625 | 0.0625 | | No log | 51.9 | 156 | 0.9997 | 0.73 | 0.0625 | 0.055 | 0.055 | 0.0625 | 0.0625 | | No log | 52.9 | 159 | 0.9899 | 0.73 | 0.0575 | 0.055 | 0.055 | 0.0625 | 0.0625 | | No log | 53.9 | 162 | 0.9910 | 0.7275 | 0.0575 | 0.055 | 0.055 | 0.0625 | 0.0625 | | No log | 54.9 | 165 | 1.0002 | 0.725 | 0.0575 | 0.055 | 0.055 | 0.0625 | 0.0625 | | No log | 55.9 | 168 | 1.0044 | 0.7325 | 0.0575 | 0.055 | 0.0525 | 0.0625 | 0.0625 | | No log | 56.9 | 171 | 0.9977 | 0.73 | 0.0575 | 0.055 | 0.05 | 0.0625 | 0.0625 | | No log | 57.9 | 174 | 0.9960 | 0.73 | 0.0575 | 0.055 | 0.0525 | 0.0625 | 0.0625 | | No log | 58.9 | 177 | 0.9964 | 0.7275 | 0.06 | 0.055 | 0.0525 | 0.0625 | 0.0625 | | No log | 59.9 | 180 | 0.9971 | 0.7275 | 0.0625 | 0.055 | 0.0525 | 0.0625 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
totally-not-an-llm/EverythingLM-13b-V3-peft
totally-not-an-llm
2023-09-22T21:35:17Z
8
1
peft
[ "peft", "llama", "base_model:NousResearch/Llama-2-13b-hf", "base_model:adapter:NousResearch/Llama-2-13b-hf", "4-bit", "bitsandbytes", "region:us" ]
null
2023-09-21T03:28:45Z
--- library_name: peft base_model: NousResearch/Llama-2-13b-hf --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
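The record above only lists the `bitsandbytes` settings that were active while the adapter was trained; it does not show how to use the adapter. The sketch below is one plausible way to rebuild that 4-bit config and attach the adapter to its base model with `transformers` and `peft`. The repository ids are taken from this record's metadata, while the prompt string and generation settings are placeholders chosen for illustration, not anything specified by the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Rebuild the 4-bit quantization config listed in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "NousResearch/Llama-2-13b-hf"                     # base_model from the record metadata
adapter_id = "totally-not-an-llm/EverythingLM-13b-V3-peft"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
# Load the LoRA adapter stored in this repo on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain gravity to a five-year-old."  # placeholder; the card does not document a prompt template
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```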
tlano/Tora-NijiFurry-LoRA-XL
tlano
2023-09-22T21:34:44Z
0
7
null
[ "stable-diffusion", "text-to-image", "license:openrail++", "region:us" ]
text-to-image
2023-07-28T09:15:28Z
--- license: openrail++ pipeline_tag: text-to-image tags: - stable-diffusion --- # Description **Training Model:**<br> &emsp;SDXL1.0<br> **Trigger Words:**<br> &emsp;furry<br> <br> # License [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md) <br> # Bonus Training settings file for the kohya_ss GUI<br> ToraFurry_XL_p1-v1.json<br> <br> **Author**<br> &emsp;twitter: [@TlanoAI](https://twitter.com/TlanoAI)<br> <br>
Panchovix/Marcoroni-70B-v1-4.65bpw-h6-exl2
Panchovix
2023-09-22T21:32:05Z
6
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-22T20:54:23Z
--- license: other --- 4.65 bits per weight (bpw) quantization of Marcoroni-70B-v1 (https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1) using exllamav2.
Medissa/my_awesome_eli5_mlm_model
Medissa
2023-09-22T21:29:48Z
162
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-18T15:46:46Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: distilroberta-base model-index: - name: my_awesome_eli5_mlm_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_mlm_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2633 | 1.0 | 1134 | 2.0191 | | 2.1646 | 2.0 | 2268 | 1.9993 | | 2.1182 | 3.0 | 3402 | 1.9892 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1 - Datasets 2.9.0 - Tokenizers 0.11.0
Medissa/xlm-roberta-base-finetuned-panx-de
Medissa
2023-09-22T21:29:07Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-03-19T14:40:31Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 base_model: xlm-roberta-base model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: type: token-classification name: Token Classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - type: f1 value: 0.8638300289723342 name: F1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1358 - F1: 0.8638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 | | 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 | | 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 | ### Framework versions - Transformers 4.27.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Ori/lama-2-13b-peft-strategyqa-with-retrieval-at-1-v2-seed-3
Ori
2023-09-22T21:09:25Z
0
0
peft
[ "peft", "safetensors", "region:us" ]
null
2023-09-22T08:03:53Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
sujitrect/llama2-qlora-finetuned-french
sujitrect
2023-09-22T20:53:07Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-22T20:52:59Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
CyberHarem/aikawa_chinatsu_idolmastercinderellagirls
CyberHarem
2023-09-22T20:47:15Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/aikawa_chinatsu_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-22T20:38:09Z
--- license: mit datasets: - CyberHarem/aikawa_chinatsu_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of aikawa_chinatsu_idolmastercinderellagirls This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 4080, you need to download `4080/aikawa_chinatsu_idolmastercinderellagirls.pt` as the embedding and `4080/aikawa_chinatsu_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The best step we recommend is 4080**, with the score of 0.921. The trigger words are: 1. `aikawa_chinatsu_idolmastercinderellagirls` 2. `short_hair, glasses, brown_hair, brown_eyes, red-framed_eyewear, jewelry` For the following groups, it is not recommended to use this model and we express regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who finds the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.921 | [Download](5100/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | 4760 | 0.912 | [Download](4760/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.838 | [Download](4420/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | **4080** | **0.921** | [**Download**](4080/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.880 | [Download](3740/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.884 | [Download](3400/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | 
![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.886 | [Download](3060/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.816 | [Download](2720/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.896 | [Download](2380/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.879 | [Download](2040/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.884 | [Download](1700/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.802 | [Download](1360/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.843 | [Download](1020/aikawa_chinatsu_idolmastercinderellagirls.zip) | 
![pattern_1-1020](1020/previews/pattern_1.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.832 | [Download](680/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.750 | [Download](340/aikawa_chinatsu_idolmastercinderellagirls.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
keikofujii/distilbert-base-uncased-finetuned-stsb
keikofujii
2023-09-22T20:41:45Z
113
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T20:11:27Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: distilbert-base-uncased-finetuned-stsb results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: stsb split: validation args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8679004942016133 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-stsb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5563 - Pearson: 0.8708 - Spearmanr: 0.8679 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | No log | 1.0 | 360 | 0.6167 | 0.8601 | 0.8577 | | 1.0026 | 2.0 | 720 | 0.6175 | 0.8670 | 0.8653 | | 0.3862 | 3.0 | 1080 | 0.6439 | 0.8703 | 0.8675 | | 0.3862 | 4.0 | 1440 | 0.5563 | 0.8708 | 0.8679 | | 0.2514 | 5.0 | 1800 | 0.5616 | 0.8694 | 0.8659 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
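The card reports Pearson and Spearman correlations but gives no inference example. Below is a minimal sketch of how one might score a sentence pair with this checkpoint, assuming it was exported as a standard single-logit regression head (the usual setup for GLUE STS-B fine-tuning); the example sentences are made up for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "keikofujii/distilbert-base-uncased-finetuned-stsb"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# STS-B is a regression task: the single output logit approximates a 0-5 similarity score.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person is playing a musical instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity: {score:.2f}")
```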
YassineBenlaria/testing_pretrained_niger_mali
YassineBenlaria
2023-09-22T20:41:39Z
108
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-22T17:26:37Z
--- tags: - generated_from_trainer metrics: - wer model-index: - name: testing_pretrained_niger_mali results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # testing_pretrained_niger_mali This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9245 - Wer: 0.8889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 350 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 6.427 | 35.29 | 300 | 2.9588 | 1.0 | | 2.8653 | 70.59 | 600 | 2.7466 | 1.0 | | 2.7675 | 105.88 | 900 | 2.7207 | 1.0 | | 2.6674 | 141.18 | 1200 | 2.2285 | 1.0 | | 1.7813 | 176.47 | 1500 | 1.5717 | 0.8852 | | 1.0447 | 211.76 | 1800 | 1.7009 | 0.8778 | | 0.8167 | 247.06 | 2100 | 1.8010 | 0.8815 | | 0.7059 | 282.35 | 2400 | 1.8748 | 0.8815 | | 0.6572 | 317.65 | 2700 | 1.9245 | 0.8889 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
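Since the card only reports WER and training hyperparameters, here is a hedged sketch of how the checkpoint could be tried for transcription, assuming it is a CTC-style wav2vec2 model as the tags suggest; the audio file name is a placeholder, and wav2vec2 models normally expect 16 kHz mono input.

```python
from transformers import pipeline

# Repo id taken from this record; "sample_clip.wav" is a hypothetical local file.
asr = pipeline(
    "automatic-speech-recognition",
    model="YassineBenlaria/testing_pretrained_niger_mali",
)
result = asr("sample_clip.wav")
print(result["text"])
```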
kmaksatk/controlnet_80k_data_blip
kmaksatk
2023-09-22T20:18:20Z
2
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-09-21T06:40:04Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet inference: true --- # controlnet-kmaksatk/controlnet_80k_data_blip These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images below. prompt: High quality photo of male gymnast in a blue sport outfit in the olympic game ![images_0](./images_0.png) prompt: High quality photo of male gymnast in a blue sport outfit in the olympic game ![images_1](./images_1.png) prompt: High quality photo of male gymnast in a blue sport outfit in the olympic game ![images_2](./images_2.png)
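The card names the base model but does not include a loading snippet. A minimal sketch with `diffusers` is given below; the repository ids come from the card, while the conditioning image path, prompt, and sampler settings are placeholders, and the card does not say what kind of conditioning image this checkpoint expects.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights from this repo and pair them with the base model named in the card.
controlnet = ControlNetModel.from_pretrained(
    "kmaksatk/controlnet_80k_data_blip", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

condition = load_image("conditioning_image.png")  # placeholder conditioning image
image = pipe(
    "High quality photo of a male gymnast in a blue sport outfit in the olympic game",
    image=condition,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```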
CyberHarem/momoi_azuki_idolmastercinderellagirls
CyberHarem
2023-09-22T19:56:22Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/momoi_azuki_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-22T19:43:43Z
--- license: mit datasets: - CyberHarem/momoi_azuki_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of momoi_azuki_idolmastercinderellagirls This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 3060, you need to download `3060/momoi_azuki_idolmastercinderellagirls.pt` as the embedding and `3060/momoi_azuki_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The best step we recommend is 3060**, with the score of 0.937. The trigger words are: 1. `momoi_azuki_idolmastercinderellagirls` 2. `brown_eyes, black_hair, smile, breasts, blush, hair_ornament, open_mouth, flower, hair_flower, single_hair_bun, hair_bun, brown_hair, long_hair, medium_breasts` For the following groups, it is not recommended to use this model and we express regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who finds the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:---------------------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.863 | [Download](5100/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | [<NSFW, click to see>](5100/previews/pattern_4.png) | ![pattern_5-5100](5100/previews/pattern_5.png) | [<NSFW, click to see>](5100/previews/pattern_6.png) | [<NSFW, click to see>](5100/previews/pattern_7.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | 4760 | 0.852 | [Download](4760/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | [<NSFW, click to see>](4760/previews/pattern_4.png) | ![pattern_5-4760](4760/previews/pattern_5.png) | [<NSFW, click to see>](4760/previews/pattern_6.png) | [<NSFW, click to see>](4760/previews/pattern_7.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.800 | [Download](4420/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | [<NSFW, click to see>](4420/previews/pattern_4.png) | ![pattern_5-4420](4420/previews/pattern_5.png) | [<NSFW, click to see>](4420/previews/pattern_6.png) | [<NSFW, click to see>](4420/previews/pattern_7.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | 
![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.897 | [Download](4080/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | [<NSFW, click to see>](4080/previews/pattern_4.png) | ![pattern_5-4080](4080/previews/pattern_5.png) | [<NSFW, click to see>](4080/previews/pattern_6.png) | [<NSFW, click to see>](4080/previews/pattern_7.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.866 | [Download](3740/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | [<NSFW, click to see>](3740/previews/pattern_4.png) | ![pattern_5-3740](3740/previews/pattern_5.png) | [<NSFW, click to see>](3740/previews/pattern_6.png) | [<NSFW, click to see>](3740/previews/pattern_7.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.874 | [Download](3400/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | [<NSFW, click to see>](3400/previews/pattern_4.png) | ![pattern_5-3400](3400/previews/pattern_5.png) | [<NSFW, click to see>](3400/previews/pattern_6.png) | [<NSFW, click to see>](3400/previews/pattern_7.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | **3060** | **0.937** | [**Download**](3060/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | [<NSFW, click to see>](3060/previews/pattern_4.png) | ![pattern_5-3060](3060/previews/pattern_5.png) | [<NSFW, click to see>](3060/previews/pattern_6.png) | [<NSFW, click to see>](3060/previews/pattern_7.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.873 | [Download](2720/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](2720/previews/pattern_1.png) | 
![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | [<NSFW, click to see>](2720/previews/pattern_4.png) | ![pattern_5-2720](2720/previews/pattern_5.png) | [<NSFW, click to see>](2720/previews/pattern_6.png) | [<NSFW, click to see>](2720/previews/pattern_7.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.858 | [Download](2380/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | [<NSFW, click to see>](2380/previews/pattern_4.png) | ![pattern_5-2380](2380/previews/pattern_5.png) | [<NSFW, click to see>](2380/previews/pattern_6.png) | [<NSFW, click to see>](2380/previews/pattern_7.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.877 | [Download](2040/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | [<NSFW, click to see>](2040/previews/pattern_4.png) | ![pattern_5-2040](2040/previews/pattern_5.png) | [<NSFW, click to see>](2040/previews/pattern_6.png) | [<NSFW, click to see>](2040/previews/pattern_7.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.872 | [Download](1700/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | [<NSFW, click to see>](1700/previews/pattern_4.png) | ![pattern_5-1700](1700/previews/pattern_5.png) | [<NSFW, click to see>](1700/previews/pattern_6.png) | [<NSFW, click to see>](1700/previews/pattern_7.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.901 | [Download](1360/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | [<NSFW, click to see>](1360/previews/pattern_4.png) | ![pattern_5-1360](1360/previews/pattern_5.png) | [<NSFW, click to 
see>](1360/previews/pattern_6.png) | [<NSFW, click to see>](1360/previews/pattern_7.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.896 | [Download](1020/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | [<NSFW, click to see>](1020/previews/pattern_4.png) | ![pattern_5-1020](1020/previews/pattern_5.png) | [<NSFW, click to see>](1020/previews/pattern_6.png) | [<NSFW, click to see>](1020/previews/pattern_7.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.877 | [Download](680/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | [<NSFW, click to see>](680/previews/pattern_4.png) | ![pattern_5-680](680/previews/pattern_5.png) | [<NSFW, click to see>](680/previews/pattern_6.png) | [<NSFW, click to see>](680/previews/pattern_7.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.786 | [Download](340/momoi_azuki_idolmastercinderellagirls.zip) | [<NSFW, click to see>](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | [<NSFW, click to see>](340/previews/pattern_4.png) | ![pattern_5-340](340/previews/pattern_5.png) | [<NSFW, click to see>](340/previews/pattern_6.png) | [<NSFW, click to see>](340/previews/pattern_7.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
UnstableLlama/Xwin-LM-13B-V0.1-4.65bpw-exl2
UnstableLlama
2023-09-22T19:54:24Z
94
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-22T19:27:28Z
---
base_model: https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1
inference: false
license: llama2
model_creator: Xwin-LM
model_name: Xwin-LM 13B V0.1
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: '
quantized_by: UnstableLlama
---
---
4.65bpw ExLlamaV2 quantization by UnstableLlama
license: llama2
---

<h3 align="center">
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
</h3>

<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a><a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>

**Step up your LLM alignment with Xwin-LM!**

Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), rejection sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built upon the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated.

## News

- 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate against Davinci-003 of **95.57%** on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking as **TOP-1** on AlpacaEval. **It was the FIRST model surpassing GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note that its win-rate vs. GPT-4 is **60.61**.
- 🔍 [Sep, 2023] RLHF plays a crucial role in the strong performance of the Xwin-LM-V0.1 release!
- 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved a **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models.
- 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved an **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models.

## Model Card

| Model | Checkpoint | Report | License |
|------------|------------|-------------|------------------|
|Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|

## Benchmarks

### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/).

The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), which evaluates its win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well.
Our Xwin-LM model family establish a new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** to Text-Davinci-003 and **60.61%** to GPT-4. | **Model** | **AlpacaEval (winrate %)** | **AlpacaEval (winrate %)** |**AlpacaEval (winrate %)** | |----------------------------------|------------|----------|-------------| | | **v.s. Text-Davinci-003** | **v.s. ChatGPT** | **v.s. GPT4**| | **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** | | GPT-4 | 95.28 | 84.66 | 50.00 | | WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 | | Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 | | **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** | | ChatGPT | 89.37 | 50.00 | 16.60 | | WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 | | **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** | | Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 | ### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) | Prompts | GTP4 response | Xwin-LM response | |----------------|------------|--------| |I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.| | Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." | ### Xwin-LM performance on NLP foundation tasks. The following table provides a comparison of Xwin-LMs with other LLMs on NLP foundation tasks in [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average | |------------------|-------------|-------------|-------------------|-------------------|------------| | Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 | |Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 | |Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 | | WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 | | WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 | | WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2| | Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 | | Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 | | Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 | | **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 | | **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 | | **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** | ## Inference ### Conversation templates To obtain desired results, please strictly follow the conversation templates when utilizing our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and is equipped to support **multi-turn** conversations. ``` A chat between a curious user and an artificial intelligence assistant. 
The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>...... ``` ### HuggingFace Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) inputs = tokenizer(prompt, return_tensors="pt") samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7) output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True) print(output) # Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you. ``` ### vllm Example Because Xwin-LM is based on Llama2, it also offers support for rapid inference using [vllm](https://github.com/vllm-project/vllm). Please refer to [vllm](https://github.com/vllm-project/vllm) for detailed installation instructions. ```python from vllm import LLM, SamplingParams ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) sampling_params = SamplingParams(temperature=0.7, max_tokens=4096) llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1") outputs = llm.generate([prompt,], sampling_params) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(generated_text) ``` ## TODO - [ ] Release the source code - [ ] Release more capabilities, such as math, reasoning, and etc. ## Citation Please consider citing our work if you use the data or code in this repo. ``` @software{xwin-lm, title = {Xwin-LM}, author = {Xwin-LM Team}, url = {https://github.com/Xwin-LM/Xwin-LM}, version = {pre-release}, year = {2023}, month = {9}, } ``` ## Acknowledgements Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vllm](https://github.com/vllm-project/vllm).
eugene6/q-FrozenLake-v1-4x4-noSlippery
eugene6
2023-09-22T19:54:05Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-22T19:54:03Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gymnasium as gym  # the classic `gym` package also works for this environment

# load_from_hub is the helper from the Hugging Face Deep RL course notebooks;
# it downloads q-learning.pkl from the Hub and unpickles it.
model = load_from_hub(repo_id="eugene6/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
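Continuing from the snippet above, here is a minimal sketch of rolling out one greedy episode with the downloaded Q-table. The `qtable` key is an assumption about the pickled dict (stored alongside `env_id` in the course notebooks); adjust it to whatever the file actually contains.

```python
# Rough evaluation sketch; the "qtable" key is an assumption about the pickle.
import numpy as np
import gymnasium as gym

env = gym.make(model["env_id"], is_slippery=False)
state, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```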
DriveMyScream/News_Sentiment_Analysis_model
DriveMyScream
2023-09-22T19:44:17Z
0
0
keras
[ "keras", "tf-keras", "region:us" ]
null
2023-09-22T19:43:45Z
--- library_name: keras --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
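Since only the optimizer configuration is documented, the following is an illustrative sketch of those settings as a standard Keras Adam instance; the architecture, data, and task remain unspecified.

```python
# Illustrative only: recreating the documented optimizer settings in Keras.
# The model's architecture and training data are not documented in this card.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.0010000000474974513,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)

# The checkpoint itself can typically be pulled with the hub helper for Keras repos:
# from huggingface_hub import from_pretrained_keras
# model = from_pretrained_keras("DriveMyScream/News_Sentiment_Analysis_model")
```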
Amey-HH/ddpm-butterflies-128
Amey-HH
2023-09-22T19:15:14Z
34
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-09-22T18:33:33Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of Polyps. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Amey-HH/ddpm-butterflies-128') image = pipeline().images[0] image ```
CyberHarem/aino_nagisa_idolmastercinderellagirls
CyberHarem
2023-09-22T19:06:17Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/aino_nagisa_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-22T18:57:44Z
---
license: mit
datasets:
- CyberHarem/aino_nagisa_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---

# Lora of aino_nagisa_idolmastercinderellagirls

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).

The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is used as an embedding, while the safetensors file is loaded as a LoRA. For example, if you want to use the model from step 4760, you need to download `4760/aino_nagisa_idolmastercinderellagirls.pt` as the embedding and `4760/aino_nagisa_idolmastercinderellagirls.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character.

**The best step we recommend is 4760**, with a score of 0.957.

The trigger words are:

1. `aino_nagisa_idolmastercinderellagirls`
2. `brown_hair, long_hair, ponytail, smile, brown_eyes, open_mouth, breasts`

We regret that this model is not recommended for the following groups:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or who believe that character models must be trained purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:---------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.954 | [Download](5100/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | **4760** | **0.957** | [**Download**](4760/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.908 | [Download](4420/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.952 | [Download](4080/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.938 | [Download](3740/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | 
![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.902 | [Download](3400/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.870 | [Download](3060/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.867 | [Download](2720/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.858 | [Download](2380/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.871 | [Download](2040/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.827 | [Download](1700/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.573 | [Download](1360/aino_nagisa_idolmastercinderellagirls.zip) | 
![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.677 | [Download](1020/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.400 | [Download](680/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.458 | [Download](340/aino_nagisa_idolmastercinderellagirls.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
przl/q-Taxi-v3
przl
2023-09-22T18:57:57Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-22T18:57:55Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # the classic `gym` package also works for this environment

# load_from_hub is the helper from the Hugging Face Deep RL course notebooks;
# it downloads q-learning.pkl from the Hub and unpickles it.
model = load_from_hub(repo_id="przl/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
santis2/gpt2-alpaca-instruction-fine-tuning-lora
santis2
2023-09-22T18:52:42Z
0
0
null
[ "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "region:us" ]
null
2023-09-22T15:35:13Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: gpt2-alpaca-instruction-fine-tuning-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-alpaca-instruction-fine-tuning-lora This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.118 | 0.11 | 1000 | 1.7807 | | 1.8878 | 0.22 | 2000 | 1.7477 | | 1.8609 | 0.33 | 3000 | 1.7318 | | 1.8489 | 0.44 | 4000 | 1.7207 | | 1.8416 | 0.55 | 5000 | 1.7134 | | 1.8181 | 0.66 | 6000 | 1.7082 | | 1.8144 | 0.77 | 7000 | 1.7021 | | 1.816 | 0.88 | 8000 | 1.6987 | | 1.7825 | 0.99 | 9000 | 1.6981 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
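As a rough sketch only, the hyperparameters listed above translate into `transformers` `TrainingArguments` as follows; the output directory is a placeholder, and the dataset and LoRA configuration are not documented here.

```python
# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# output_dir is a placeholder; dataset and LoRA setup are not documented in this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-alpaca-instruction-fine-tuning-lora",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
)
```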
UnstableLlama/Xwin-LM-13B-V0.1-5bpw-exl2
UnstableLlama
2023-09-22T18:51:12Z
94
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-22T18:22:40Z
---
base_model: https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1
inference: false
license: llama2
model_creator: Xwin-LM
model_name: Xwin-LM 13B V0.1
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: '
quantized_by: UnstableLlama
---
---
5bpw ExLlamaV2 quantization by UnstableLlama
license: llama2
---

<h3 align="center">
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
</h3>

<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a><a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>

**Step up your LLM alignment with Xwin-LM!**

Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), rejection sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built upon the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated.

## News

- 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate against Davinci-003 of **95.57%** on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking as **TOP-1** on AlpacaEval. **It was the FIRST model surpassing GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note that its win-rate vs. GPT-4 is **60.61**.
- 🔍 [Sep, 2023] RLHF plays a crucial role in the strong performance of the Xwin-LM-V0.1 release!
- 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved a **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models.
- 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved an **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models.

## Model Card

| Model | Checkpoint | Report | License |
|------------|------------|-------------|------------------|
|Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|

## Benchmarks

### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/).

The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), which evaluates its win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well.
Our Xwin-LM model family establish a new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** to Text-Davinci-003 and **60.61%** to GPT-4. | **Model** | **AlpacaEval (winrate %)** | **AlpacaEval (winrate %)** |**AlpacaEval (winrate %)** | |----------------------------------|------------|----------|-------------| | | **v.s. Text-Davinci-003** | **v.s. ChatGPT** | **v.s. GPT4**| | **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** | | GPT-4 | 95.28 | 84.66 | 50.00 | | WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 | | Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 | | **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** | | ChatGPT | 89.37 | 50.00 | 16.60 | | WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 | | **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** | | Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 | ### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) | Prompts | GTP4 response | Xwin-LM response | |----------------|------------|--------| |I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.| | Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." | ### Xwin-LM performance on NLP foundation tasks. The following table provides a comparison of Xwin-LMs with other LLMs on NLP foundation tasks in [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average | |------------------|-------------|-------------|-------------------|-------------------|------------| | Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 | |Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 | |Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 | | WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 | | WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 | | WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2| | Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 | | Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 | | Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 | | **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 | | **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 | | **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** | ## Inference ### Conversation templates To obtain desired results, please strictly follow the conversation templates when utilizing our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and is equipped to support **multi-turn** conversations. ``` A chat between a curious user and an artificial intelligence assistant. 
The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>...... ``` ### HuggingFace Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) inputs = tokenizer(prompt, return_tensors="pt") samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7) output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True) print(output) # Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you. ``` ### vllm Example Because Xwin-LM is based on Llama2, it also offers support for rapid inference using [vllm](https://github.com/vllm-project/vllm). Please refer to [vllm](https://github.com/vllm-project/vllm) for detailed installation instructions. ```python from vllm import LLM, SamplingParams ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) sampling_params = SamplingParams(temperature=0.7, max_tokens=4096) llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1") outputs = llm.generate([prompt,], sampling_params) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(generated_text) ``` ## TODO - [ ] Release the source code - [ ] Release more capabilities, such as math, reasoning, and etc. ## Citation Please consider citing our work if you use the data or code in this repo. ``` @software{xwin-lm, title = {Xwin-LM}, author = {Xwin-LM Team}, url = {https://github.com/Xwin-LM/Xwin-LM}, version = {pre-release}, year = {2023}, month = {9}, } ``` ## Acknowledgements Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vllm](https://github.com/vllm-project/vllm).
jmoney54378256438905/jondurbin_airoboros-c34b-2.2.1-4.65bpw
jmoney54378256438905
2023-09-22T18:48:51Z
5
0
transformers
[ "transformers", "llama", "text-generation", "dataset:jondurbin/airoboros-2.2.1", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-09-22T18:38:37Z
---
license: llama2
datasets:
- jondurbin/airoboros-2.2.1
---

### Overview

Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).

This is essentially a minor "fix" branch of [airoboros-c34b-2.2](https://hf.co/jondurbin/airoboros-c34b-2.2) with a few updates, primarily:

- [re-generated writing responses](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#re-generated-writing-responses)
- [longer contextual blocks](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#longer-contextual-blocks)
- [removal of "rp" data](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#rp-category-removed)
- [(less aggressive) de-censoring](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#de-censoring)
- 5 epochs instead of 3

This is a fairly general-purpose model, but it focuses heavily on instruction following rather than casual chat/roleplay.

Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!

### Prompt format

The prompt format:

```
A chat.
USER: {prompt}
ASSISTANT:
```

The default system prompt ("A chat.") was used for most of the prompts; however, the training data also included a wide sampling of responses with other prompts, particularly in "stylized_response", "rp", "gtkm", etc.

Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```

And a chat scenario that wouldn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf):
```
A chat between old friends: Timmy and Tommy.
{description of characters}

{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```

__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__

### Fine tuning info

https://wandb.ai/jondurbin/airoboros-c34b-2.2.1/runs/07b08z7m?workspace=user-jondurbin

### Helpful usage tips

*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*

#### Context obedient question answering

By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure that, if the context is completely unrelated, the model doesn't make something up.

*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. 
``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. 
Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] 
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:

```python
import re
import requests

def inject_context(input_text, **context):
    # Replace any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string and return the text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    # Return a newline-delimited list of unique links found in the text.
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    # Fetch each newline-delimited link and concatenate the raw page contents.
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt and return its output
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```

### Contribute

If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.

To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Licence and usage restrictions

The airoboros 2.2 models are built on top of llama-2/codellama.

The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros).

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI

- what does *compete* actually mean here?
- these small open-source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.
TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_11
TamerAbdelaziz
2023-09-22T18:45:47Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T17:59:17Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_11 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_11 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0094 - Validation Loss: 0.3097 - Train Accuracy: 0.9364 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6250, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2466 | 0.1804 | 0.9314 | 0 | | 0.1417 | 0.1824 | 0.9324 | 1 | | 0.0892 | 0.2208 | 0.9268 | 2 | | 0.0532 | 0.2212 | 0.9348 | 3 | | 0.0328 | 0.2518 | 0.936 | 4 | | 0.0218 | 0.2750 | 0.9358 | 5 | | 0.0185 | 0.2856 | 0.9338 | 6 | | 0.0147 | 0.3055 | 0.935 | 7 | | 0.0113 | 0.3060 | 0.9352 | 8 | | 0.0094 | 0.3097 | 0.9364 | 9 | ### Framework versions - Transformers 4.33.2 - TensorFlow 2.8.1 - Datasets 2.14.5 - Tokenizers 0.13.3
treei/llama-2-7b-keyword-ft
treei
2023-09-22T18:44:02Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-22T18:41:07Z
---
library_name: peft
---
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.5.0
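As a rough illustration (not part of the original card), the quantization config above roughly corresponds to the following `transformers`/`bitsandbytes` setup; note that the base model id is an assumption, since the card does not state which Llama-2 7B checkpoint was used.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes config listed above (4-bit NF4, bfloat16 compute, no double quantization).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# NOTE: the base model id below is an assumption -- the card does not name it.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Load the LoRA adapter from this repository on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, "treei/llama-2-7b-keyword-ft")
```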
thainq107/flan-t5-small-twitter-sentiment-analysis-lora
thainq107
2023-09-22T18:36:38Z
0
0
null
[ "pytorch", "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-09-22T17:02:51Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: flan-t5-small-twitter-sentiment-analysis-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-small-twitter-sentiment-analysis-lora This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2093 - Accuracy: 0.8115 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2573 | 1.0 | 938 | 0.2270 | 0.7941 | | 0.2369 | 2.0 | 1876 | 0.2176 | 0.8057 | | 0.2286 | 3.0 | 2814 | 0.2132 | 0.8091 | | 0.2227 | 4.0 | 3752 | 0.2089 | 0.8122 | | 0.2204 | 5.0 | 4690 | 0.2083 | 0.8141 | | 0.2191 | 6.0 | 5628 | 0.2077 | 0.8143 | | 0.2183 | 7.0 | 6566 | 0.2048 | 0.8169 | | 0.2155 | 8.0 | 7504 | 0.2066 | 0.8156 | | 0.2164 | 9.0 | 8442 | 0.2047 | 0.8174 | | 0.2167 | 10.0 | 9380 | 0.2041 | 0.8177 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.0.1 - Datasets 2.9.0 - Tokenizers 0.13.3
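The card does not include usage code; a minimal sketch of loading this LoRA adapter on top of its flan-t5-small base is shown below. The prompt format used during fine-tuning is not documented, so the input string is only illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

# Load the LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "thainq107/flan-t5-small-twitter-sentiment-analysis-lora")

# Illustrative input only; the training prompt template is not documented in the card.
inputs = tokenizer("I really enjoyed this movie!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```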
Thireus/WizardLM-70B-V1.0-HF-6.0bpw-h6-exl2
Thireus
2023-09-22T18:33:52Z
9
0
transformers
[ "transformers", "llama", "text-generation", "license:llama2", "autotrain_compatible", "region:us" ]
text-generation
2023-09-17T19:08:07Z
--- inference: false license: llama2 model_creator: WizardLM model_link: https://huggingface.co/WizardLM/WizardLM-70B-V1.0 model_name: WizardLM 70B V1.0 model_type: llama quantized_by: Thireus --- # WizardLM 70B V1.0 – EXL2 - Model creator: [WizardLM](https://huggingface.co/WizardLM) - FP32 Original model used for quantization: [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) – float32 - FP16 Model used for quantization: [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) – float16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) - BF16 Model used for quantization: [WizardLM 70B V1.0-BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) – bfloat16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) ## Models available: | Link | BITS (-b) | HEAD BITS (-hb) | MEASU-REMENT LENGTH (-ml) | LENGTH (-l) | CAL DATASET (-c) | Size | V. | Max Context Length | Base Model | Layers | VRAM Min | VRAM Max | PPL** | Comments&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | | ------ | --------- | --------------- | ------------------------ | ----------- | ---------------- | ---- | ------- | ------------------ | ---- | ---- |------------------ | ------------------ | ------------------ | ---------------------------------------------------------------------------------- | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 39GB | 44GB | 4.15234375 | Good results | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 40GB | 44GB | 4.1640625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 39GB | 44GB | 4.2421875 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h8-exl2/) | 4.0 
| 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 39GB | 44GB | 4.24609375 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 47GB | 52GB | 4.06640625 | Best so far. Good results | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h8-exl2/) | 5.0 | 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.0625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 47GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-6.0bpw-h6-exl2/) | 6.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 49GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/fae6fb296c6db4e3b1314c49c030541bed98acb9) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 56GB | 60GB | 4.0703125 | Model suffers from poor prompt understanding and logic is affected | \* wikitext-2-raw-v1 \*\* Evaluated with text-generation-webui ExLlama v0.0.2 on wikitext-2-raw-v1 (stride 512 and max_length 0). For reference, [TheBloke_WizardLM-70B-V1.0-GPTQ_gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) has a score of 4.1015625 in perplexity. ## Description: _This repository contains EXL2 model files for [WizardLM's WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)._ EXL2 is a new format used by ExLlamaV2 – https://github.com/turboderp/exllamav2. EXL2 is based on the same optimization method as GPTQ. 
The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. ## Prompt template (official): ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` ## Prompt template (suggested): ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` ## Quantization process: | Original Model | → | (optional) float16 or bfloat16 Model* | → | Safetensors Model** | → | EXL2 Model | | -------------- | --- | ------------- | --- | ---------------- | --- | ---------- | | [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | → | [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF)* | → | Safetensors** | → | EXL2 | Example to convert WizardLM-70B-V1.0-HF to EXL2 4.0 bpw with 6-bit head: ``` mkdir -p ~/EXL2/WizardLM-70B-V1.0-HF_4bit # Create the output directory python convert.py -i ~/float16_safetensored/WizardLM-70B-V1.0-HF -o ~/EXL2/WizardLM-70B-V1.0-HF_4bit -c ~/EXL2/0000.parquet -b 4.0 -hb 6 ``` \* Use the following script to convert your local pytorch_model bin files to float16 (you can also choose bfloat16) + safetensors all in one go: - https://github.com/oobabooga/text-generation-webui/blob/main/convert-to-safetensors.py (best for sharding and float16/FP16 or bfloat16/BF16 conversion) Example to convert [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) directly to float16 safetensors in 10GB shards: ``` python convert-to-safetensors.py ~/original/WizardLM-70B-V1.0 --output ~/float16_safetensored/WizardLM-70B-V1.0 --max-shard-size 10GB ``` Use `--bf16` if you'd like to try bfloat16 instead, but note that there are concerns about quantization quality – https://github.com/turboderp/exllamav2/issues/30#issuecomment-1719009289 \*\* Use any one of the following scripts to convert your local pytorch_model bin files to safetensors: - https://github.com/turboderp/exllamav2/blob/master/util/convert_safetensors.py (official ExLlamaV2) - https://huggingface.co/Panchovix/airoboros-l2-70b-gpt4-1.4.1-safetensors/blob/main/bin2safetensors/convert.py (recommended) - https://gist.github.com/epicfilemcnulty/1f55fd96b08f8d4d6693293e37b4c55e#file-2safetensors-py ## Further reading: - https://mlabonne.github.io/blog/posts/Introduction_to_Weight_Quantization.html
Thireus/WizardLM-70B-V1.0-BF16-5.0bpw-h6-exl2
Thireus
2023-09-22T18:33:35Z
12
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-21T03:52:37Z
--- inference: false license: llama2 model_creator: WizardLM model_link: https://huggingface.co/WizardLM/WizardLM-70B-V1.0 model_name: WizardLM 70B V1.0 model_type: llama quantized_by: Thireus --- # WizardLM 70B V1.0 – EXL2 - Model creator: [WizardLM](https://huggingface.co/WizardLM) - FP32 Original model used for quantization: [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) – float32 - FP16 Model used for quantization: [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) – float16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) - BF16 Model used for quantization: [WizardLM 70B V1.0-BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) – bfloat16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) ## Models available: | Link | BITS (-b) | HEAD BITS (-hb) | MEASU-REMENT LENGTH (-ml) | LENGTH (-l) | CAL DATASET (-c) | Size | V. | Max Context Length | Base Model | Layers | VRAM Min | VRAM Max | PPL** | Comments&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | | ------ | --------- | --------------- | ------------------------ | ----------- | ---------------- | ---- | ------- | ------------------ | ---- | ---- |------------------ | ------------------ | ------------------ | ---------------------------------------------------------------------------------- | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 39GB | 44GB | 4.15234375 | Good results | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 40GB | 44GB | 4.1640625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 39GB | 44GB | 4.2421875 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h8-exl2/) | 4.0 
| 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 39GB | 44GB | 4.24609375 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 47GB | 52GB | 4.06640625 | Best so far. Good results | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h8-exl2/) | 5.0 | 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.0625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 47GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-6.0bpw-h6-exl2/) | 6.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 49GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/fae6fb296c6db4e3b1314c49c030541bed98acb9) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 56GB | 60GB | 4.0703125 | Model suffers from poor prompt understanding and logic is affected | \* wikitext-2-raw-v1 \*\* Evaluated with text-generation-webui ExLlama v0.0.2 on wikitext-2-raw-v1 (stride 512 and max_length 0). For reference, [TheBloke_WizardLM-70B-V1.0-GPTQ_gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) has a score of 4.1015625 in perplexity. ## Description: _This repository contains EXL2 model files for [WizardLM's WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)._ EXL2 is a new format used by ExLlamaV2 – https://github.com/turboderp/exllamav2. EXL2 is based on the same optimization method as GPTQ. 
The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. ## Prompt template (official): ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` ## Prompt template (suggested): ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` ## Quantization process: | Original Model | → | (optional) float16 or bfloat16 Model* | → | Safetensors Model** | → | EXL2 Model | | -------------- | --- | ------------- | --- | ---------------- | --- | ---------- | | [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | → | [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF)* | → | Safetensors** | → | EXL2 | Example to convert WizardLM-70B-V1.0-HF to EXL2 4.0 bpw with 6-bit head: ``` mkdir -p ~/EXL2/WizardLM-70B-V1.0-HF_4bit # Create the output directory python convert.py -i ~/float16_safetensored/WizardLM-70B-V1.0-HF -o ~/EXL2/WizardLM-70B-V1.0-HF_4bit -c ~/EXL2/0000.parquet -b 4.0 -hb 6 ``` \* Use the following script to convert your local pytorch_model bin files to float16 (you can also choose bfloat16) + safetensors all in one go: - https://github.com/oobabooga/text-generation-webui/blob/main/convert-to-safetensors.py (best for sharding and float16/FP16 or bfloat16/BF16 conversion) Example to convert [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) directly to float16 safetensors in 10GB shards: ``` python convert-to-safetensors.py ~/original/WizardLM-70B-V1.0 --output ~/float16_safetensored/WizardLM-70B-V1.0 --max-shard-size 10GB ``` Use `--bf16` if you'd like to try bfloat16 instead, but note that there are concerns about quantization quality – https://github.com/turboderp/exllamav2/issues/30#issuecomment-1719009289 \*\* Use any one of the following scripts to convert your local pytorch_model bin files to safetensors: - https://github.com/turboderp/exllamav2/blob/master/util/convert_safetensors.py (official ExLlamaV2) - https://huggingface.co/Panchovix/airoboros-l2-70b-gpt4-1.4.1-safetensors/blob/main/bin2safetensors/convert.py (recommended) - https://gist.github.com/epicfilemcnulty/1f55fd96b08f8d4d6693293e37b4c55e#file-2safetensors-py ## Further reading: - https://mlabonne.github.io/blog/posts/Introduction_to_Weight_Quantization.html
Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h8-exl2
Thireus
2023-09-22T18:33:24Z
5
0
transformers
[ "transformers", "llama", "text-generation", "license:llama2", "autotrain_compatible", "region:us" ]
text-generation
2023-09-19T08:09:31Z
--- inference: false license: llama2 model_creator: WizardLM model_link: https://huggingface.co/WizardLM/WizardLM-70B-V1.0 model_name: WizardLM 70B V1.0 model_type: llama quantized_by: Thireus --- # WizardLM 70B V1.0 – EXL2 - Model creator: [WizardLM](https://huggingface.co/WizardLM) - FP32 Original model used for quantization: [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) – float32 - FP16 Model used for quantization: [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) – float16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) - BF16 Model used for quantization: [WizardLM 70B V1.0-BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) – bfloat16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) ## Models available: | Link | BITS (-b) | HEAD BITS (-hb) | MEASU-REMENT LENGTH (-ml) | LENGTH (-l) | CAL DATASET (-c) | Size | V. | Max Context Length | Base Model | Layers | VRAM Min | VRAM Max | PPL** | Comments&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | | ------ | --------- | --------------- | ------------------------ | ----------- | ---------------- | ---- | ------- | ------------------ | ---- | ---- |------------------ | ------------------ | ------------------ | ---------------------------------------------------------------------------------- | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 39GB | 44GB | 4.15234375 | Good results | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 40GB | 44GB | 4.1640625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 39GB | 44GB | 4.2421875 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h8-exl2/) | 4.0 
| 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 39GB | 44GB | 4.24609375 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 47GB | 52GB | 4.06640625 | Best so far. Good results | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h8-exl2/) | 5.0 | 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.0625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 47GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-6.0bpw-h6-exl2/) | 6.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 49GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/fae6fb296c6db4e3b1314c49c030541bed98acb9) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 56GB | 60GB | 4.0703125 | Model suffers from poor prompt understanding and logic is affected | \* wikitext-2-raw-v1 \*\* Evaluated with text-generation-webui ExLlama v0.0.2 on wikitext-2-raw-v1 (stride 512 and max_length 0). For reference, [TheBloke_WizardLM-70B-V1.0-GPTQ_gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) has a score of 4.1015625 in perplexity. ## Description: _This repository contains EXL2 model files for [WizardLM's WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)._ EXL2 is a new format used by ExLlamaV2 – https://github.com/turboderp/exllamav2. EXL2 is based on the same optimization method as GPTQ. 
The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. ## Prompt template (official): ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` ## Prompt template (suggested): ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` ## Quantization process: | Original Model | → | (optional) float16 or bfloat16 Model* | → | Safetensors Model** | → | EXL2 Model | | -------------- | --- | ------------- | --- | ---------------- | --- | ---------- | | [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | → | [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF)* | → | Safetensors** | → | EXL2 | Example to convert WizardLM-70B-V1.0-HF to EXL2 4.0 bpw with 6-bit head: ``` mkdir -p ~/EXL2/WizardLM-70B-V1.0-HF_4bit # Create the output directory python convert.py -i ~/float16_safetensored/WizardLM-70B-V1.0-HF -o ~/EXL2/WizardLM-70B-V1.0-HF_4bit -c ~/EXL2/0000.parquet -b 4.0 -hb 6 ``` \* Use the following script to convert your local pytorch_model bin files to float16 (you can also choose bfloat16) + safetensors all in one go: - https://github.com/oobabooga/text-generation-webui/blob/main/convert-to-safetensors.py (best for sharding and float16/FP16 or bfloat16/BF16 conversion) Example to convert [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) directly to float16 safetensors in 10GB shards: ``` python convert-to-safetensors.py ~/original/WizardLM-70B-V1.0 --output ~/float16_safetensored/WizardLM-70B-V1.0 --max-shard-size 10GB ``` Use `--bf16` if you'd like to try bfloat16 instead, but note that there are concerns about quantization quality – https://github.com/turboderp/exllamav2/issues/30#issuecomment-1719009289 \*\* Use any one of the following scripts to convert your local pytorch_model bin files to safetensors: - https://github.com/turboderp/exllamav2/blob/master/util/convert_safetensors.py (official ExLlamaV2) - https://huggingface.co/Panchovix/airoboros-l2-70b-gpt4-1.4.1-safetensors/blob/main/bin2safetensors/convert.py (recommended) - https://gist.github.com/epicfilemcnulty/1f55fd96b08f8d4d6693293e37b4c55e#file-2safetensors-py ## Further reading: - https://mlabonne.github.io/blog/posts/Introduction_to_Weight_Quantization.html
alpindale/Llama-2-13b-ONNX
alpindale
2023-09-22T18:33:20Z
0
5
null
[ "onnx", "text generation", "text-generation", "en", "license:llama2", "region:us" ]
text-generation
2023-09-21T20:28:10Z
---
language:
- en
thumbnail: null
tags:
- text generation
pipeline_tag: text-generation
inference: false
license: llama2
---
# Llama-2 ONNX

This repository contains an optimized version of Llama-2 13B.

## Downloading the model

You can use `huggingface_hub` to download this repository, either from Python or from the command line. Refer to the [HuggingFace Hub Documentation](https://huggingface.co/docs/huggingface_hub/guides/download) for the Python examples (a minimal Python sketch is also included at the end of this card).

With the CLI:

1. Make sure you have an updated `huggingface_hub` installed.
```sh
pip install -U huggingface_hub
```
2. Download the repository.
```sh
huggingface-cli download alpindale/Llama-2-13b-ONNX --repo-type model --cache-dir /path/to/custom/cache/directory --local-dir /path/to/download/dir --local-dir-use-symlinks False
```

The `--cache-dir` kwarg is only necessary if your default cache directory (`~/.cache`) does not have enough disk space to accommodate the entire repository.

## Using the model

The repository provides example code for running the models.

```sh
python llama2_onnx_inference.py --onnx_file FP16/LlamaV2_13B_float16.onnx --embedding_file embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"
```

Output:
```
The lightest element is hydrogen. Hydrogen is the lightest element on the periodic table, with an atomic mass of 1.00794 u (unified atomic mass units).
```

### Chat Interface

Alternatively, you can use the Gradio chat interface to run the models.

First, install the required packages:

```sh
pip install -r requirements.txt
```

Set the Python path to the root directory of the repository (necessary for importing the required modules):

```sh
export PYTHONPATH=$PYTHONPATH:$(pwd)
```

Then you can simply run:

```sh
python ChatApp/app.py
```

You can then navigate to [http://localhost:7860](http://localhost:7860) in your browser to access the interface.

## FAQ

### Why is the first inference session slow?

The ONNX Runtime execution provider may need to generate JIT binaries for the underlying hardware. The binary is typically cached and loaded directly in subsequent runs to reduce the overhead.

### Why is FP16 slower than FP32 on my device?

Your device may not support native FP16 math, in which case the weights are cast to FP32 at runtime. Using the FP32 version of the model will avoid the cast overhead.

### How do I optimize inference?

It's recommended to place inputs/outputs on the target device to avoid expensive data copies; please refer to the following documentation for details:

[I/O Binding | onnxruntime](https://onnxruntime.ai/docs/performance/tune-performance/iobinding.html)

### What generation parameters should I use the model with?

You can perform temperature and top-p sampling with the provided example code. Please refer to Meta's example [here](https://github.com/facebookresearch/llama/).
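For convenience, here is a minimal Python sketch of the same download via the `huggingface_hub` API; the local directory path is only an example and should point to a disk with enough free space.

```python
from huggingface_hub import snapshot_download

# Downloads the full repository; adjust local_dir (and optionally cache_dir) as needed.
local_path = snapshot_download(
    repo_id="alpindale/Llama-2-13b-ONNX",
    repo_type="model",
    local_dir="/path/to/download/dir",  # example path
    local_dir_use_symlinks=False,
)
print(local_path)
```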
Thireus/WizardLM-70B-V1.0-BF16-4.0bpw-h6-exl2
Thireus
2023-09-22T18:33:08Z
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-21T03:52:35Z
--- inference: false license: llama2 model_creator: WizardLM model_link: https://huggingface.co/WizardLM/WizardLM-70B-V1.0 model_name: WizardLM 70B V1.0 model_type: llama quantized_by: Thireus --- # WizardLM 70B V1.0 – EXL2 - Model creator: [WizardLM](https://huggingface.co/WizardLM) - FP32 Original model used for quantization: [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) – float32 - FP16 Model used for quantization: [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) – float16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) - BF16 Model used for quantization: [WizardLM 70B V1.0-BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) – bfloat16 of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) ## Models available: | Link | BITS (-b) | HEAD BITS (-hb) | MEASU-REMENT LENGTH (-ml) | LENGTH (-l) | CAL DATASET (-c) | Size | V. | Max Context Length | Base Model | Layers | VRAM Min | VRAM Max | PPL** | Comments&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | | ------ | --------- | --------------- | ------------------------ | ----------- | ---------------- | ---- | ------- | ------------------ | ---- | ---- |------------------ | ------------------ | ------------------ | ---------------------------------------------------------------------------------- | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 39GB | 44GB | 4.15234375 | Good results | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 40GB | 44GB | 4.1640625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 39GB | 44GB | 4.2421875 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h8-exl2/) | 4.0 
| 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 39GB | 44GB | 4.24609375 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 47GB | 52GB | 4.06640625 | Best so far. Good results | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h8-exl2/) | 5.0 | 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.0625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 47GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected | | [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-6.0bpw-h6-exl2/) | 6.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 49GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/fae6fb296c6db4e3b1314c49c030541bed98acb9) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 56GB | 60GB | 4.0703125 | Model suffers from poor prompt understanding and logic is affected | \* wikitext-2-raw-v1 \*\* Evaluated with text-generation-webui ExLlama v0.0.2 on wikitext-2-raw-v1 (stride 512 and max_length 0). For reference, [TheBloke_WizardLM-70B-V1.0-GPTQ_gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) has a score of 4.1015625 in perplexity. ## Description: _This repository contains EXL2 model files for [WizardLM's WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)._ EXL2 is a new format used by ExLlamaV2 – https://github.com/turboderp/exllamav2. EXL2 is based on the same optimization method as GPTQ. 
The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. ## Prompt template (official): ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` ## Prompt template (suggested): ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` ## Quantization process: | Original Model | → | (optional) float16 or bfloat16 Model* | → | Safetensors Model** | → | EXL2 Model | | -------------- | --- | ------------- | --- | ---------------- | --- | ---------- | | [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | → | [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF)* | → | Safetensors** | → | EXL2 | Example to convert WizardLM-70B-V1.0-HF to EXL2 4.0 bpw with 6-bit head: ``` mkdir -p ~/EXL2/WizardLM-70B-V1.0-HF_4bit # Create the output directory python convert.py -i ~/float16_safetensored/WizardLM-70B-V1.0-HF -o ~/EXL2/WizardLM-70B-V1.0-HF_4bit -c ~/EXL2/0000.parquet -b 4.0 -hb 6 ``` \* Use the following script to convert your local pytorch_model bin files to float16 (you can also choose bfloat16) + safetensors all in one go: - https://github.com/oobabooga/text-generation-webui/blob/main/convert-to-safetensors.py (best for sharding and float16/FP16 or bfloat16/BF16 conversion) Example to convert [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) directly to float16 safetensors in 10GB shards: ``` python convert-to-safetensors.py ~/original/WizardLM-70B-V1.0 --output ~/float16_safetensored/WizardLM-70B-V1.0 --max-shard-size 10GB ``` Use `--bf16` if you'd like to try bfloat16 instead, but note that there are concerns about quantization quality – https://github.com/turboderp/exllamav2/issues/30#issuecomment-1719009289 \*\* Use any one of the following scripts to convert your local pytorch_model bin files to safetensors: - https://github.com/turboderp/exllamav2/blob/master/util/convert_safetensors.py (official ExLlamaV2) - https://huggingface.co/Panchovix/airoboros-l2-70b-gpt4-1.4.1-safetensors/blob/main/bin2safetensors/convert.py (recommended) - https://gist.github.com/epicfilemcnulty/1f55fd96b08f8d4d6693293e37b4c55e#file-2safetensors-py ## Further reading: - https://mlabonne.github.io/blog/posts/Introduction_to_Weight_Quantization.html
sachithgunasekara/open-llama-v2-lamini-orca-evol-guanaco-qlora-checkpoint
sachithgunasekara
2023-09-22T18:25:14Z
2
0
peft
[ "peft", "region:us" ]
null
2023-09-22T18:25:09Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
LucaAsga/ppo-LunarLander-v2-NEW
LucaAsga
2023-09-22T18:22:03Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-22T18:21:42Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 274.11 +/- 17.24
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename inside the repo is an assumption based on the usual `huggingface_sb3` naming convention:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub.
# The filename is assumed; check the repo's file list if it differs.
checkpoint = load_from_hub(
    repo_id="LucaAsga/ppo-LunarLander-v2-NEW",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
0ppxnhximxr/distilbert-base-uncased-finetuned-emotion
0ppxnhximxr
2023-09-22T18:14:16Z
107
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T18:11:23Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9259596663626415 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2200 - Accuracy: 0.926 - F1: 0.9260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.836 | 1.0 | 250 | 0.3195 | 0.906 | 0.9053 | | 0.2573 | 2.0 | 500 | 0.2200 | 0.926 | 0.9260 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
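A quick usage sketch with the `pipeline` API; the example sentence is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="0ppxnhximxr/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well the fine-tuning turned out!"))
```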
AIYIYA/my_html3
AIYIYA
2023-09-22T18:03:14Z
61
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-chinese", "base_model:finetune:google-bert/bert-base-chinese", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T17:50:11Z
--- base_model: bert-base-chinese tags: - generated_from_keras_callback model-index: - name: AIYIYA/my_html3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AIYIYA/my_html3 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1064 - Train Accuracy: 1.0 - Validation Loss: 0.1251 - Validation Accuracy: 0.9804 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.8291 | 0.7386 | 0.2926 | 0.9804 | 0 | | 0.2239 | 0.9804 | 0.1478 | 0.9804 | 1 | | 0.1064 | 1.0 | 0.1251 | 0.9804 | 2 | ### Framework versions - Transformers 4.33.2 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.13.3
UnstableLlama/Xwin-LM-7B-V0.1-4bpw-exl2
UnstableLlama
2023-09-22T18:00:50Z
79
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-22T17:43:37Z
---
base_model: https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1
inference: false
license: llama2
model_creator: Xwin-LM
model_name: Xwin-LM 7B V0.1
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request. ### Instruction: {prompt} ### Response: '
quantized_by: UnstableLlama
---
---
4bpw ExLlamaV2 quantization by UnstableLlama
license: llama2
---

<h3 align="center">
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
</h3>

<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a><a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>

**Step up your LLM alignment with Xwin-LM!**

Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), rejection sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built upon the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated.

## News

- 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate against Davinci-003 of **95.57%** on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking as **TOP-1** on AlpacaEval. **It was the FIRST model surpassing GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note its win-rate vs. GPT-4 is **60.61**.
- 🔍 [Sep, 2023] RLHF plays a crucial role in the strong performance of the Xwin-LM-V0.1 release!
- 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved a **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models.
- 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved an **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models.

## Model Card

| Model | Checkpoint | Report | License |
|------------|------------|-------------|------------------|
|Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|
|Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License|

## Benchmarks

### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/)

The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), which evaluates its win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well.
Our Xwin-LM model family establishes new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** against Text-Davinci-003 and **60.61%** against GPT-4.

| **Model** | **AlpacaEval (winrate %)** | **AlpacaEval (winrate %)** |**AlpacaEval (winrate %)** |
|----------------------------------|------------|----------|-------------|
| | **v.s. Text-Davinci-003** | **v.s. ChatGPT** | **v.s. GPT4**|
| **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** |
| GPT-4 | 95.28 | 84.66 | 50.00 |
| WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 |
| Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 |
| **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** |
| ChatGPT | 89.37 | 50.00 | 16.60 |
| WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 |
| **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** |
| Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 |

### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/)

| Prompts | GPT4 response | Xwin-LM response |
|----------------|------------|--------|
|I've put a plate on top of an apple, and take the plate from kitchen to dining room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.|
| Answer the following question.<br> How do you say "good evening" in French? | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." |

### Xwin-LM performance on NLP foundation tasks

The following table provides a comparison of Xwin-LMs with other LLMs on NLP foundation tasks in the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

| Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average |
|------------------|-------------|-------------|-------------------|-------------------|------------|
| Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 |
|Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 |
|Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 |
| WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 |
| WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 |
| WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2|
| Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 |
| Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 |
| Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 |
| **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 |
| **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 |
| **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** |

## Inference

### Conversation templates

To obtain desired results, please strictly follow the conversation templates when utilizing our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and is equipped to support **multi-turn** conversations.

```
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>......
```

### HuggingFace Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
(
    prompt := "A chat between a curious user and an artificial intelligence assistant. "
              "The assistant gives helpful, detailed, and polite answers to the user's questions. "
              "USER: Hello, can you help me? "
              "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt")
samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7)
output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(output)
# Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you.
```

### vllm Example

Because Xwin-LM is based on Llama2, it also offers support for rapid inference using [vllm](https://github.com/vllm-project/vllm). Please refer to [vllm](https://github.com/vllm-project/vllm) for detailed installation instructions.

```python
from vllm import LLM, SamplingParams
(
    prompt := "A chat between a curious user and an artificial intelligence assistant. "
              "The assistant gives helpful, detailed, and polite answers to the user's questions. "
              "USER: Hello, can you help me? "
              "ASSISTANT:"
)
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)
llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1")
outputs = llm.generate([prompt,], sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(generated_text)
```

## TODO

- [ ] Release the source code
- [ ] Release more capabilities, such as math, reasoning, etc.

## Citation

Please consider citing our work if you use the data or code in this repo.

```
@software{xwin-lm,
  title = {Xwin-LM},
  author = {Xwin-LM Team},
  url = {https://github.com/Xwin-LM/Xwin-LM},
  version = {pre-release},
  year = {2023},
  month = {9},
}
```

## Acknowledgements

Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vllm](https://github.com/vllm-project/vllm).
TieIncred/cloth2
TieIncred
2023-09-22T17:58:39Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-08T13:22:15Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of checkodsasdass man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---

# DreamBooth - TieIncred/cloth2

This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of checkodsasdass man" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

DreamBooth for the text encoder was enabled: True.
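A minimal generation sketch with diffusers, assuming the repo loads as a standard Stable Diffusion pipeline (the generation settings are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth pipeline from this repo
pipe = StableDiffusionPipeline.from_pretrained("TieIncred/cloth2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate with the instance prompt the weights were trained on
image = pipe("a photo of checkodsasdass man", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("checkodsasdass_man.png")
```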
kupru/ppo-Pyramids
kupru
2023-09-22T17:43:02Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-09-22T17:42:56Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity, then:
1. Find your model_id: kupru/ppo-Pyramids
2. Select your *.nn /*.onnx file
3. Click on Watch the agent play 👀
TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_10
TamerAbdelaziz
2023-09-22T17:41:24Z
63
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T11:03:52Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_10 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_10 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0067 - Validation Loss: 0.3379 - Train Accuracy: 0.9354 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 12500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2338 | 0.1838 | 0.9304 | 0 | | 0.1379 | 0.1821 | 0.931 | 1 | | 0.0748 | 0.1896 | 0.9334 | 2 | | 0.0427 | 0.2671 | 0.9318 | 3 | | 0.0260 | 0.2814 | 0.933 | 4 | | 0.0198 | 0.3015 | 0.9338 | 5 | | 0.0134 | 0.3239 | 0.9342 | 6 | | 0.0104 | 0.3797 | 0.9268 | 7 | | 0.0088 | 0.3347 | 0.9338 | 8 | | 0.0067 | 0.3379 | 0.9354 | 9 | ### Framework versions - Transformers 4.33.2 - TensorFlow 2.8.1 - Datasets 2.14.5 - Tokenizers 0.13.3
EdBianchi/vit-fire-detection
EdBianchi
2023-09-22T17:17:15Z
1,621
6
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-12-29T15:41:07Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall base_model: google/vit-base-patch16-224-in21k model-index: - name: vit-fire-detection results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-fire-detection This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0126 - Precision: 0.9960 - Recall: 0.9960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.1018 | 1.0 | 190 | 0.0375 | 0.9934 | 0.9934 | | 0.0484 | 2.0 | 380 | 0.0167 | 0.9961 | 0.9960 | | 0.0357 | 3.0 | 570 | 0.0253 | 0.9948 | 0.9947 | | 0.0133 | 4.0 | 760 | 0.0198 | 0.9961 | 0.9960 | | 0.012 | 5.0 | 950 | 0.0203 | 0.9947 | 0.9947 | | 0.0139 | 6.0 | 1140 | 0.0204 | 0.9947 | 0.9947 | | 0.0076 | 7.0 | 1330 | 0.0175 | 0.9961 | 0.9960 | | 0.0098 | 8.0 | 1520 | 0.0115 | 0.9974 | 0.9974 | | 0.0062 | 9.0 | 1710 | 0.0133 | 0.9960 | 0.9960 | | 0.0012 | 10.0 | 1900 | 0.0126 | 0.9960 | 0.9960 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.14.0.dev20221111 - Datasets 2.8.0 - Tokenizers 0.12.1
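A usage sketch with the image-classification `pipeline` (the image path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned ViT fire-detection classifier from the Hub
detector = pipeline("image-classification", model="EdBianchi/vit-fire-detection")
print(detector("path/to/scene.jpg"))  # replace with a local image path
```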
MattStammers/appo-atari-asteroids
MattStammers
2023-09-22T17:11:19Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-09-22T17:11:13Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: atari_asteroid
      type: atari_asteroid
    metrics:
    - type: mean_reward
      value: 1388.00 +/- 437.35
      name: mean_reward
      verified: false
---

An **APPO** model trained on the **atari_asteroid** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r MattStammers/appo-atari-asteroids
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.atari.enjoy_atari --algo=APPO --env=atari_asteroid --train_dir=./train_dir --experiment=appo-atari-asteroids
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.atari.train_atari --algo=APPO --env=atari_asteroid --train_dir=./train_dir --experiment=appo-atari-asteroids --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
Venkatesh4342/distilbert-helpdesk-sentence-sentiment
Venkatesh4342
2023-09-22T17:05:07Z
129
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T17:04:36Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-helpdesk-sentence-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-helpdesk-sentence-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1746 - Accuracy: 0.9697 - F1: 0.9697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.05 | 100 | 0.3796 | 0.8636 | 0.8604 | | No log | 2.11 | 200 | 0.1258 | 0.9697 | 0.9697 | | No log | 3.16 | 300 | 0.1814 | 0.9697 | 0.9697 | | 0.1951 | 4.21 | 400 | 0.1746 | 0.9697 | 0.9697 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
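A minimal inference sketch (the example ticket sentence is made up):

```python
from transformers import pipeline

# Load the fine-tuned helpdesk sentiment classifier from the Hub
sentiment = pipeline(
    "text-classification",
    model="Venkatesh4342/distilbert-helpdesk-sentence-sentiment",
)
print(sentiment("The agent resolved my login issue within minutes, great service."))
```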
CyberHarem/asari_nanami_idolmastercinderellagirls
CyberHarem
2023-09-22T16:43:35Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/asari_nanami_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-22T16:29:14Z
---
license: mit
datasets:
- CyberHarem/asari_nanami_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---

# Lora of asari_nanami_idolmastercinderellagirls

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).

The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.

For example, if you want to use the model from step 2880, you need to download `2880/asari_nanami_idolmastercinderellagirls.pt` as the embedding and `2880/asari_nanami_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.

**The best step we recommend is 2880**, with a score of 0.955.

The trigger words are:

1. `asari_nanami_idolmastercinderellagirls`
2. `long_hair, blue_hair, hair_ornament, blue_eyes, bangs, fish_hair_ornament, blush, open_mouth, smile, hair_rings, teeth, :d, breasts, upper_teeth_only`

This model is not recommended for the following groups, and we express our regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
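As an illustration of the embedding-plus-LoRA usage described above (using the step files listed in the table below), here is a hedged diffusers sketch. It assumes the `.pt`/`.safetensors` pair is in an A1111-compatible format; HCP-Diffusion outputs may first need conversion with HCP-Diffusion's own tooling:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model used for the preview images
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Attach the step-2880 embedding and LoRA from local paths (A1111-compatible files assumed)
pipe.load_textual_inversion(
    "2880/asari_nanami_idolmastercinderellagirls.pt",
    token="asari_nanami_idolmastercinderellagirls",
)
pipe.load_lora_weights("2880", weight_name="asari_nanami_idolmastercinderellagirls.safetensors")

image = pipe("asari_nanami_idolmastercinderellagirls, best quality", num_inference_steps=28).images[0]
image.save("preview.png")
```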
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 7200 | 0.938 | [Download](7200/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-7200](7200/previews/pattern_1.png) | ![pattern_2-7200](7200/previews/pattern_2.png) | ![pattern_3-7200](7200/previews/pattern_3.png) | ![pattern_4-7200](7200/previews/pattern_4.png) | ![pattern_5-7200](7200/previews/pattern_5.png) | ![pattern_6-7200](7200/previews/pattern_6.png) | ![pattern_7-7200](7200/previews/pattern_7.png) | ![pattern_8-7200](7200/previews/pattern_8.png) | ![pattern_9-7200](7200/previews/pattern_9.png) | [<NSFW, click to see>](7200/previews/pattern_10.png) | ![pattern_11-7200](7200/previews/pattern_11.png) | ![bikini-7200](7200/previews/bikini.png) | [<NSFW, click to see>](7200/previews/bondage.png) | ![free-7200](7200/previews/free.png) | ![maid-7200](7200/previews/maid.png) | ![miko-7200](7200/previews/miko.png) | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) | ![suit-7200](7200/previews/suit.png) | ![yukata-7200](7200/previews/yukata.png) | | 6720 | 0.934 | [Download](6720/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-6720](6720/previews/pattern_1.png) | ![pattern_2-6720](6720/previews/pattern_2.png) | ![pattern_3-6720](6720/previews/pattern_3.png) | ![pattern_4-6720](6720/previews/pattern_4.png) | ![pattern_5-6720](6720/previews/pattern_5.png) | ![pattern_6-6720](6720/previews/pattern_6.png) | ![pattern_7-6720](6720/previews/pattern_7.png) | ![pattern_8-6720](6720/previews/pattern_8.png) | ![pattern_9-6720](6720/previews/pattern_9.png) | [<NSFW, click to see>](6720/previews/pattern_10.png) | ![pattern_11-6720](6720/previews/pattern_11.png) | ![bikini-6720](6720/previews/bikini.png) | [<NSFW, click to see>](6720/previews/bondage.png) | ![free-6720](6720/previews/free.png) | ![maid-6720](6720/previews/maid.png) | ![miko-6720](6720/previews/miko.png) | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) | ![suit-6720](6720/previews/suit.png) | ![yukata-6720](6720/previews/yukata.png) | | 6240 | 0.939 | [Download](6240/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-6240](6240/previews/pattern_1.png) | ![pattern_2-6240](6240/previews/pattern_2.png) | ![pattern_3-6240](6240/previews/pattern_3.png) 
| ![pattern_4-6240](6240/previews/pattern_4.png) | ![pattern_5-6240](6240/previews/pattern_5.png) | ![pattern_6-6240](6240/previews/pattern_6.png) | ![pattern_7-6240](6240/previews/pattern_7.png) | ![pattern_8-6240](6240/previews/pattern_8.png) | ![pattern_9-6240](6240/previews/pattern_9.png) | [<NSFW, click to see>](6240/previews/pattern_10.png) | ![pattern_11-6240](6240/previews/pattern_11.png) | ![bikini-6240](6240/previews/bikini.png) | [<NSFW, click to see>](6240/previews/bondage.png) | ![free-6240](6240/previews/free.png) | ![maid-6240](6240/previews/maid.png) | ![miko-6240](6240/previews/miko.png) | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) | ![suit-6240](6240/previews/suit.png) | ![yukata-6240](6240/previews/yukata.png) | | 5760 | 0.918 | [Download](5760/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-5760](5760/previews/pattern_1.png) | ![pattern_2-5760](5760/previews/pattern_2.png) | ![pattern_3-5760](5760/previews/pattern_3.png) | ![pattern_4-5760](5760/previews/pattern_4.png) | ![pattern_5-5760](5760/previews/pattern_5.png) | ![pattern_6-5760](5760/previews/pattern_6.png) | ![pattern_7-5760](5760/previews/pattern_7.png) | ![pattern_8-5760](5760/previews/pattern_8.png) | ![pattern_9-5760](5760/previews/pattern_9.png) | [<NSFW, click to see>](5760/previews/pattern_10.png) | ![pattern_11-5760](5760/previews/pattern_11.png) | ![bikini-5760](5760/previews/bikini.png) | [<NSFW, click to see>](5760/previews/bondage.png) | ![free-5760](5760/previews/free.png) | ![maid-5760](5760/previews/maid.png) | ![miko-5760](5760/previews/miko.png) | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) | ![suit-5760](5760/previews/suit.png) | ![yukata-5760](5760/previews/yukata.png) | | 5280 | 0.911 | [Download](5280/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-5280](5280/previews/pattern_1.png) | ![pattern_2-5280](5280/previews/pattern_2.png) | ![pattern_3-5280](5280/previews/pattern_3.png) | ![pattern_4-5280](5280/previews/pattern_4.png) | ![pattern_5-5280](5280/previews/pattern_5.png) | ![pattern_6-5280](5280/previews/pattern_6.png) | ![pattern_7-5280](5280/previews/pattern_7.png) | ![pattern_8-5280](5280/previews/pattern_8.png) | ![pattern_9-5280](5280/previews/pattern_9.png) | [<NSFW, click to see>](5280/previews/pattern_10.png) | ![pattern_11-5280](5280/previews/pattern_11.png) | ![bikini-5280](5280/previews/bikini.png) | [<NSFW, click to see>](5280/previews/bondage.png) | ![free-5280](5280/previews/free.png) | ![maid-5280](5280/previews/maid.png) | ![miko-5280](5280/previews/miko.png) | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) | ![suit-5280](5280/previews/suit.png) | ![yukata-5280](5280/previews/yukata.png) | | 4800 | 0.928 | [Download](4800/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-4800](4800/previews/pattern_1.png) | ![pattern_2-4800](4800/previews/pattern_2.png) | ![pattern_3-4800](4800/previews/pattern_3.png) | ![pattern_4-4800](4800/previews/pattern_4.png) | ![pattern_5-4800](4800/previews/pattern_5.png) | ![pattern_6-4800](4800/previews/pattern_6.png) | ![pattern_7-4800](4800/previews/pattern_7.png) | ![pattern_8-4800](4800/previews/pattern_8.png) | ![pattern_9-4800](4800/previews/pattern_9.png) | [<NSFW, click to see>](4800/previews/pattern_10.png) | ![pattern_11-4800](4800/previews/pattern_11.png) | ![bikini-4800](4800/previews/bikini.png) | [<NSFW, click to see>](4800/previews/bondage.png) | 
![free-4800](4800/previews/free.png) | ![maid-4800](4800/previews/maid.png) | ![miko-4800](4800/previews/miko.png) | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) | ![suit-4800](4800/previews/suit.png) | ![yukata-4800](4800/previews/yukata.png) | | 4320 | 0.931 | [Download](4320/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-4320](4320/previews/pattern_1.png) | ![pattern_2-4320](4320/previews/pattern_2.png) | ![pattern_3-4320](4320/previews/pattern_3.png) | ![pattern_4-4320](4320/previews/pattern_4.png) | ![pattern_5-4320](4320/previews/pattern_5.png) | ![pattern_6-4320](4320/previews/pattern_6.png) | ![pattern_7-4320](4320/previews/pattern_7.png) | ![pattern_8-4320](4320/previews/pattern_8.png) | ![pattern_9-4320](4320/previews/pattern_9.png) | [<NSFW, click to see>](4320/previews/pattern_10.png) | ![pattern_11-4320](4320/previews/pattern_11.png) | ![bikini-4320](4320/previews/bikini.png) | [<NSFW, click to see>](4320/previews/bondage.png) | ![free-4320](4320/previews/free.png) | ![maid-4320](4320/previews/maid.png) | ![miko-4320](4320/previews/miko.png) | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) | ![suit-4320](4320/previews/suit.png) | ![yukata-4320](4320/previews/yukata.png) | | 3840 | 0.918 | [Download](3840/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-3840](3840/previews/pattern_1.png) | ![pattern_2-3840](3840/previews/pattern_2.png) | ![pattern_3-3840](3840/previews/pattern_3.png) | ![pattern_4-3840](3840/previews/pattern_4.png) | ![pattern_5-3840](3840/previews/pattern_5.png) | ![pattern_6-3840](3840/previews/pattern_6.png) | ![pattern_7-3840](3840/previews/pattern_7.png) | ![pattern_8-3840](3840/previews/pattern_8.png) | ![pattern_9-3840](3840/previews/pattern_9.png) | [<NSFW, click to see>](3840/previews/pattern_10.png) | ![pattern_11-3840](3840/previews/pattern_11.png) | ![bikini-3840](3840/previews/bikini.png) | [<NSFW, click to see>](3840/previews/bondage.png) | ![free-3840](3840/previews/free.png) | ![maid-3840](3840/previews/maid.png) | ![miko-3840](3840/previews/miko.png) | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) | ![suit-3840](3840/previews/suit.png) | ![yukata-3840](3840/previews/yukata.png) | | 3360 | 0.911 | [Download](3360/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-3360](3360/previews/pattern_1.png) | ![pattern_2-3360](3360/previews/pattern_2.png) | ![pattern_3-3360](3360/previews/pattern_3.png) | ![pattern_4-3360](3360/previews/pattern_4.png) | ![pattern_5-3360](3360/previews/pattern_5.png) | ![pattern_6-3360](3360/previews/pattern_6.png) | ![pattern_7-3360](3360/previews/pattern_7.png) | ![pattern_8-3360](3360/previews/pattern_8.png) | ![pattern_9-3360](3360/previews/pattern_9.png) | [<NSFW, click to see>](3360/previews/pattern_10.png) | ![pattern_11-3360](3360/previews/pattern_11.png) | ![bikini-3360](3360/previews/bikini.png) | [<NSFW, click to see>](3360/previews/bondage.png) | ![free-3360](3360/previews/free.png) | ![maid-3360](3360/previews/maid.png) | ![miko-3360](3360/previews/miko.png) | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) | ![suit-3360](3360/previews/suit.png) | ![yukata-3360](3360/previews/yukata.png) | | **2880** | **0.955** | [**Download**](2880/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-2880](2880/previews/pattern_1.png) | ![pattern_2-2880](2880/previews/pattern_2.png) | 
![pattern_3-2880](2880/previews/pattern_3.png) | ![pattern_4-2880](2880/previews/pattern_4.png) | ![pattern_5-2880](2880/previews/pattern_5.png) | ![pattern_6-2880](2880/previews/pattern_6.png) | ![pattern_7-2880](2880/previews/pattern_7.png) | ![pattern_8-2880](2880/previews/pattern_8.png) | ![pattern_9-2880](2880/previews/pattern_9.png) | [<NSFW, click to see>](2880/previews/pattern_10.png) | ![pattern_11-2880](2880/previews/pattern_11.png) | ![bikini-2880](2880/previews/bikini.png) | [<NSFW, click to see>](2880/previews/bondage.png) | ![free-2880](2880/previews/free.png) | ![maid-2880](2880/previews/maid.png) | ![miko-2880](2880/previews/miko.png) | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) | ![suit-2880](2880/previews/suit.png) | ![yukata-2880](2880/previews/yukata.png) | | 2400 | 0.919 | [Download](2400/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-2400](2400/previews/pattern_1.png) | ![pattern_2-2400](2400/previews/pattern_2.png) | ![pattern_3-2400](2400/previews/pattern_3.png) | ![pattern_4-2400](2400/previews/pattern_4.png) | ![pattern_5-2400](2400/previews/pattern_5.png) | ![pattern_6-2400](2400/previews/pattern_6.png) | ![pattern_7-2400](2400/previews/pattern_7.png) | ![pattern_8-2400](2400/previews/pattern_8.png) | ![pattern_9-2400](2400/previews/pattern_9.png) | [<NSFW, click to see>](2400/previews/pattern_10.png) | ![pattern_11-2400](2400/previews/pattern_11.png) | ![bikini-2400](2400/previews/bikini.png) | [<NSFW, click to see>](2400/previews/bondage.png) | ![free-2400](2400/previews/free.png) | ![maid-2400](2400/previews/maid.png) | ![miko-2400](2400/previews/miko.png) | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) | ![suit-2400](2400/previews/suit.png) | ![yukata-2400](2400/previews/yukata.png) | | 1920 | 0.897 | [Download](1920/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-1920](1920/previews/pattern_1.png) | ![pattern_2-1920](1920/previews/pattern_2.png) | ![pattern_3-1920](1920/previews/pattern_3.png) | ![pattern_4-1920](1920/previews/pattern_4.png) | ![pattern_5-1920](1920/previews/pattern_5.png) | ![pattern_6-1920](1920/previews/pattern_6.png) | ![pattern_7-1920](1920/previews/pattern_7.png) | ![pattern_8-1920](1920/previews/pattern_8.png) | ![pattern_9-1920](1920/previews/pattern_9.png) | [<NSFW, click to see>](1920/previews/pattern_10.png) | ![pattern_11-1920](1920/previews/pattern_11.png) | ![bikini-1920](1920/previews/bikini.png) | [<NSFW, click to see>](1920/previews/bondage.png) | ![free-1920](1920/previews/free.png) | ![maid-1920](1920/previews/maid.png) | ![miko-1920](1920/previews/miko.png) | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) | ![suit-1920](1920/previews/suit.png) | ![yukata-1920](1920/previews/yukata.png) | | 1440 | 0.905 | [Download](1440/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-1440](1440/previews/pattern_1.png) | ![pattern_2-1440](1440/previews/pattern_2.png) | ![pattern_3-1440](1440/previews/pattern_3.png) | ![pattern_4-1440](1440/previews/pattern_4.png) | ![pattern_5-1440](1440/previews/pattern_5.png) | ![pattern_6-1440](1440/previews/pattern_6.png) | ![pattern_7-1440](1440/previews/pattern_7.png) | ![pattern_8-1440](1440/previews/pattern_8.png) | ![pattern_9-1440](1440/previews/pattern_9.png) | [<NSFW, click to see>](1440/previews/pattern_10.png) | ![pattern_11-1440](1440/previews/pattern_11.png) | ![bikini-1440](1440/previews/bikini.png) | 
[<NSFW, click to see>](1440/previews/bondage.png) | ![free-1440](1440/previews/free.png) | ![maid-1440](1440/previews/maid.png) | ![miko-1440](1440/previews/miko.png) | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) | ![suit-1440](1440/previews/suit.png) | ![yukata-1440](1440/previews/yukata.png) | | 960 | 0.763 | [Download](960/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-960](960/previews/pattern_1.png) | ![pattern_2-960](960/previews/pattern_2.png) | ![pattern_3-960](960/previews/pattern_3.png) | ![pattern_4-960](960/previews/pattern_4.png) | ![pattern_5-960](960/previews/pattern_5.png) | ![pattern_6-960](960/previews/pattern_6.png) | ![pattern_7-960](960/previews/pattern_7.png) | ![pattern_8-960](960/previews/pattern_8.png) | ![pattern_9-960](960/previews/pattern_9.png) | [<NSFW, click to see>](960/previews/pattern_10.png) | ![pattern_11-960](960/previews/pattern_11.png) | ![bikini-960](960/previews/bikini.png) | [<NSFW, click to see>](960/previews/bondage.png) | ![free-960](960/previews/free.png) | ![maid-960](960/previews/maid.png) | ![miko-960](960/previews/miko.png) | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) | ![suit-960](960/previews/suit.png) | ![yukata-960](960/previews/yukata.png) | | 480 | 0.691 | [Download](480/asari_nanami_idolmastercinderellagirls.zip) | ![pattern_1-480](480/previews/pattern_1.png) | ![pattern_2-480](480/previews/pattern_2.png) | ![pattern_3-480](480/previews/pattern_3.png) | ![pattern_4-480](480/previews/pattern_4.png) | ![pattern_5-480](480/previews/pattern_5.png) | ![pattern_6-480](480/previews/pattern_6.png) | ![pattern_7-480](480/previews/pattern_7.png) | ![pattern_8-480](480/previews/pattern_8.png) | ![pattern_9-480](480/previews/pattern_9.png) | [<NSFW, click to see>](480/previews/pattern_10.png) | ![pattern_11-480](480/previews/pattern_11.png) | ![bikini-480](480/previews/bikini.png) | [<NSFW, click to see>](480/previews/bondage.png) | ![free-480](480/previews/free.png) | ![maid-480](480/previews/maid.png) | ![miko-480](480/previews/miko.png) | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) | ![suit-480](480/previews/suit.png) | ![yukata-480](480/previews/yukata.png) |
alexdbz/bert-base-peft-Lora-abstracts-6epochs
alexdbz
2023-09-22T16:40:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-08-31T17:41:31Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
redstonehero/amireal_v4_fixed_vae
redstonehero
2023-09-22T16:37:35Z
109
0
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-22T16:27:49Z
--- license: creativeml-openrail-m library_name: diffusers ---
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-22_subgraphs_gates
jordyvl
2023-09-22T16:30:26Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-22T14:06:12Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-22_subgraphs_gates results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-09-22_subgraphs_gates This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0995 - Accuracy: 0.705 - Exit 0 Accuracy: 0.065 - Exit 1 Accuracy: 0.0575 - Exit 2 Accuracy: 0.065 - Exit 3 Accuracy: 0.07 - Exit 4 Accuracy: 0.1575 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 20 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 12 - total_train_batch_size: 240 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.9 | 3 | 2.7455 | 0.0975 | 0.0625 | 0.0625 | 0.0875 | 0.0625 | 0.0625 | | No log | 1.9 | 6 | 2.6867 | 0.1475 | 0.06 | 0.0625 | 0.0875 | 0.0625 | 0.0625 | | No log | 2.9 | 9 | 2.6527 | 0.18 | 0.0475 | 0.0575 | 0.09 | 0.0625 | 0.0625 | | No log | 3.9 | 12 | 2.6188 | 0.1975 | 0.035 | 0.0525 | 0.095 | 0.0625 | 0.0625 | | No log | 4.9 | 15 | 2.5827 | 0.22 | 0.035 | 0.05 | 0.0875 | 0.0625 | 0.0625 | | No log | 5.9 | 18 | 2.6265 | 0.1725 | 0.0475 | 0.0375 | 0.0775 | 0.0625 | 0.0825 | | No log | 6.9 | 21 | 2.5771 | 0.23 | 0.0525 | 0.0375 | 0.0775 | 0.0625 | 0.0975 | | No log | 7.9 | 24 | 2.4865 | 0.2675 | 0.055 | 0.0375 | 0.0675 | 0.0625 | 0.08 | | No log | 8.9 | 27 | 2.4184 | 0.25 | 0.0475 | 0.04 | 0.0625 | 0.0625 | 0.0775 | | No log | 9.9 | 30 | 2.3451 | 0.28 | 0.06 | 0.045 | 0.0625 | 0.0625 | 0.085 | | No log | 10.9 | 33 | 2.3027 | 0.3175 | 0.0675 | 0.045 | 0.0625 | 0.0625 | 0.0775 | | No log | 11.9 | 36 | 2.2168 | 0.3225 | 0.07 | 0.045 | 0.0625 | 0.0625 | 0.0775 | | No log | 12.9 | 39 | 2.1698 | 0.365 | 0.07 | 0.045 | 0.0625 | 0.0625 | 0.085 | | No log | 13.9 | 42 | 2.0647 | 0.4125 | 0.065 | 0.045 | 0.0625 | 0.0625 | 0.0875 | | No log | 14.9 | 45 | 2.0370 | 0.4125 | 0.0675 | 0.0475 | 0.0625 | 0.06 | 0.09 | | No log | 15.9 | 48 | 1.9523 | 0.4675 | 0.065 | 0.0475 | 0.0625 | 0.0575 | 0.0975 | | No log | 16.9 | 51 | 1.8758 | 0.4925 | 0.0675 | 0.05 | 0.0625 | 0.055 | 0.1075 | | No log | 17.9 | 54 | 1.7940 | 0.52 | 0.0675 | 0.05 | 0.0625 | 0.0525 | 0.0975 | | No log | 18.9 | 57 | 1.7417 | 0.5125 | 0.0675 | 0.05 | 0.0625 | 0.0525 | 0.125 | | No log | 19.9 | 60 | 1.6789 | 0.565 | 0.0675 | 0.05 | 0.0625 | 0.055 | 0.1325 | | No log | 20.9 | 63 | 1.6016 | 0.5825 | 0.0675 | 0.05 | 0.0625 | 0.0575 | 0.1525 | | No log | 21.9 | 66 | 1.5649 | 0.5775 | 0.065 | 0.05 | 0.0625 | 0.07 | 0.155 | | No log | 22.9 | 69 | 1.5164 | 0.6075 | 0.065 | 0.05 | 0.0625 | 0.0775 | 0.155 | | 
No log | 23.9 | 72 | 1.4614 | 0.6225 | 0.065 | 0.05 | 0.0625 | 0.085 | 0.1475 | | No log | 24.9 | 75 | 1.4301 | 0.62 | 0.065 | 0.05 | 0.0625 | 0.085 | 0.16 | | No log | 25.9 | 78 | 1.3935 | 0.645 | 0.065 | 0.0525 | 0.065 | 0.0775 | 0.1425 | | No log | 26.9 | 81 | 1.3777 | 0.64 | 0.065 | 0.0525 | 0.065 | 0.08 | 0.1625 | | No log | 27.9 | 84 | 1.3301 | 0.6525 | 0.065 | 0.0525 | 0.065 | 0.0825 | 0.16 | | No log | 28.9 | 87 | 1.3064 | 0.6775 | 0.065 | 0.0525 | 0.065 | 0.0825 | 0.1625 | | No log | 29.9 | 90 | 1.2689 | 0.675 | 0.065 | 0.0525 | 0.065 | 0.085 | 0.1625 | | No log | 30.9 | 93 | 1.2549 | 0.6575 | 0.065 | 0.0525 | 0.065 | 0.075 | 0.175 | | No log | 31.9 | 96 | 1.2440 | 0.66 | 0.065 | 0.055 | 0.065 | 0.08 | 0.1525 | | No log | 32.9 | 99 | 1.2250 | 0.685 | 0.065 | 0.055 | 0.065 | 0.09 | 0.1675 | | No log | 33.9 | 102 | 1.2243 | 0.665 | 0.065 | 0.055 | 0.065 | 0.0775 | 0.155 | | No log | 34.9 | 105 | 1.1960 | 0.6625 | 0.065 | 0.055 | 0.065 | 0.0825 | 0.1675 | | No log | 35.9 | 108 | 1.1879 | 0.685 | 0.065 | 0.055 | 0.065 | 0.08 | 0.155 | | No log | 36.9 | 111 | 1.1710 | 0.6825 | 0.065 | 0.055 | 0.065 | 0.0825 | 0.155 | | No log | 37.9 | 114 | 1.1739 | 0.6925 | 0.065 | 0.055 | 0.065 | 0.0675 | 0.1525 | | No log | 38.9 | 117 | 1.1414 | 0.7 | 0.065 | 0.055 | 0.065 | 0.0725 | 0.1475 | | No log | 39.9 | 120 | 1.1476 | 0.695 | 0.065 | 0.0575 | 0.065 | 0.0775 | 0.1525 | | No log | 40.9 | 123 | 1.1435 | 0.6875 | 0.065 | 0.0575 | 0.065 | 0.0825 | 0.15 | | No log | 41.9 | 126 | 1.1196 | 0.695 | 0.065 | 0.0575 | 0.065 | 0.075 | 0.165 | | No log | 42.9 | 129 | 1.1454 | 0.7 | 0.065 | 0.0575 | 0.065 | 0.0775 | 0.155 | | No log | 43.9 | 132 | 1.1133 | 0.7 | 0.065 | 0.0575 | 0.065 | 0.075 | 0.1575 | | No log | 44.9 | 135 | 1.1205 | 0.7125 | 0.065 | 0.0575 | 0.065 | 0.08 | 0.1625 | | No log | 45.9 | 138 | 1.1188 | 0.705 | 0.065 | 0.0575 | 0.065 | 0.0775 | 0.1525 | | No log | 46.9 | 141 | 1.1076 | 0.7025 | 0.065 | 0.0575 | 0.065 | 0.0775 | 0.155 | | No log | 47.9 | 144 | 1.1304 | 0.7075 | 0.065 | 0.0575 | 0.065 | 0.08 | 0.16 | | No log | 48.9 | 147 | 1.1029 | 0.7125 | 0.065 | 0.0575 | 0.065 | 0.08 | 0.16 | | No log | 49.9 | 150 | 1.0979 | 0.705 | 0.065 | 0.0575 | 0.065 | 0.08 | 0.16 | | No log | 50.9 | 153 | 1.1081 | 0.71 | 0.065 | 0.0575 | 0.065 | 0.0775 | 0.155 | | No log | 51.9 | 156 | 1.1023 | 0.71 | 0.065 | 0.0575 | 0.065 | 0.0725 | 0.16 | | No log | 52.9 | 159 | 1.1029 | 0.705 | 0.065 | 0.0575 | 0.065 | 0.07 | 0.16 | | No log | 53.9 | 162 | 1.1015 | 0.71 | 0.065 | 0.0575 | 0.065 | 0.07 | 0.155 | | No log | 54.9 | 165 | 1.0968 | 0.7125 | 0.065 | 0.0575 | 0.065 | 0.0725 | 0.155 | | No log | 55.9 | 168 | 1.0968 | 0.7125 | 0.065 | 0.0575 | 0.065 | 0.0725 | 0.16 | | No log | 56.9 | 171 | 1.1006 | 0.7025 | 0.065 | 0.0575 | 0.065 | 0.075 | 0.155 | | No log | 57.9 | 174 | 1.1024 | 0.7025 | 0.065 | 0.0575 | 0.065 | 0.07 | 0.1575 | | No log | 58.9 | 177 | 1.1006 | 0.705 | 0.065 | 0.0575 | 0.065 | 0.07 | 0.1575 | | No log | 59.9 | 180 | 1.0995 | 0.705 | 0.065 | 0.0575 | 0.065 | 0.07 | 0.1575 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
mohammadh128/whisper_small-fa_v02
mohammadh128
2023-09-22T16:18:48Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "fa", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-09T21:33:59Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper_small-fa_v02
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_11_0 fa
      type: mozilla-foundation/common_voice_11_0
      config: fa
      split: test
    metrics:
    - name: Wer
      type: wer
      value: 30.9315
language:
- fa
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper_small-fa_v02

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 fa dataset. We also performed data augmentation using the audiomentations library.

It achieves the following results on the evaluation set:
- Loss: 0.2291
- Wer: 30.3423

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

You can find the notebooks [here](https://github.com/mohammadh128/Persian_ASR).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP

### Training results

| Step | Training Loss | Validation Loss | Wer |
|:----:|:-------------:|:---------------:|:-------:|
| 500 | 1.770700 | 0.476709 | 52.29181|
| 1000 | 0.762300 | 0.368512 | 41.83410|
| 1500 | 0.645000 | 0.323680 | 37.57881|
| 2000 | 0.601900 | 0.297370 | 36.43209|
| 2500 | 0.529700 | 0.276422 | 33.52608|
| 3000 | 0.523200 | 0.260825 | 31.94485|
| 3500 | 0.488400 | 0.249957 | 33.11771|
| 4000 | 0.464800 | 0.241462 | 30.34238|
| 4500 | 0.440500 | 0.233215 | 31.04969|
| 5000 | 0.440500 | 0.229116 | 30.73605|

### Framework versions

- Transformers 4.26.0
- Pytorch 2.0.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.3
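The card does not record which augmentations were applied; as an illustration of the audiomentations pattern mentioned above, a typical waveform-level chain (the transform choices and parameters are assumptions) looks like:

```python
import numpy as np
from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift

# Illustrative augmentation chain; the actual transforms used for this model are not documented
augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    TimeStretch(min_rate=0.9, max_rate=1.1, p=0.5),
    PitchShift(min_semitones=-2, max_semitones=2, p=0.5),
])

waveform = np.random.uniform(-1.0, 1.0, 16000).astype(np.float32)  # stand-in for real audio
augmented = augment(samples=waveform, sample_rate=16000)
```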
cegy369/sk-squad-falcon-7b
cegy369
2023-09-22T16:14:34Z
0
3
null
[ "falcon-7b-instruct", "falcon", "text-generation", "sk", "dataset:TUKE-DeutscheTelekom/skquad", "region:us" ]
text-generation
2023-09-20T17:04:35Z
--- pipeline_tag: text-generation datasets: - TUKE-DeutscheTelekom/skquad language: - sk tags: - falcon-7b-instruct - falcon - text-generation ---
luanvuvt/results
luanvuvt
2023-09-22T16:12:52Z
0
0
null
[ "generated_from_trainer", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:finetune:TinyPixel/Llama-2-7B-bf16-sharded", "region:us" ]
null
2023-09-22T16:12:27Z
--- base_model: TinyPixel/Llama-2-7B-bf16-sharded tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [TinyPixel/Llama-2-7B-bf16-sharded](https://huggingface.co/TinyPixel/Llama-2-7B-bf16-sharded) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 35 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
aramka/arka
aramka
2023-09-22T16:11:27Z
0
0
allennlp
[ "allennlp", "biology", "question-answering", "ar", "dataset:allenai/dolma", "license:afl-3.0", "region:us" ]
question-answering
2022-05-24T22:59:55Z
--- license: afl-3.0 datasets: - allenai/dolma language: - ar metrics: - bertscore library_name: allennlp pipeline_tag: question-answering tags: - biology ---
iainmcqueen/output
iainmcqueen
2023-09-22T16:04:24Z
30
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-22T15:52:59Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of ctstech dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - iainmcqueen/output This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of ctstech dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: False.
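A minimal inference sketch, assuming the repository loads directly as a `StableDiffusionPipeline` (standard diffusers usage, not part of the original card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("iainmcqueen/output", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate with the instance prompt the weights were trained on.
image = pipe("a photo of ctstech dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("ctstech_dog.png")
```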
csdc-atl/internlm-chat-20b-GPTQ-Int4
csdc-atl
2023-09-22T15:57:06Z
19
2
transformers
[ "transformers", "safetensors", "internlm", "feature-extraction", "text-generation", "custom_code", "license:apache-2.0", "4-bit", "gptq", "region:us" ]
text-generation
2023-09-21T17:06:23Z
--- license: apache-2.0 pipeline_tag: text-generation --- **InternLM** <div align="center"> <img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">InternLM</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">HOT</font></i> </a> </sup> <div>&nbsp;</div> </div> [![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/) [💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new) </div> ## Introduction The Shanghai Artificial Intelligence Laboratory, in collaboration with SenseTime Technology, the Chinese University of Hong Kong, and Fudan University, has officially released the 20 billion parameter pretrained model, InternLM-20B. InternLM-20B was pre-trained on over **2.3T** tokens containing high-quality English, Chinese, and code data. Additionally, the Chat version has undergone SFT and RLHF training, enabling it to better and more securely meet users' needs. In terms of model structure, InternLM-20B opted for a deeper architecture, with a depth set at 60 layers. This surpasses the conventional 7B and 13B models that utilize 32 or 40 layers. When parameters are limited, increasing the number of layers can enhance the model's overall capability. Furthermore, compared to InternLM-7B, the pre-training data used for InternLM-20B underwent higher quality cleansing and was supplemented with data rich in knowledge and designed for reinforcing understanding and reasoning capabilities. As a result, it exhibits significant improvements in understanding, reasoning, mathematical, and programming abilities, all of which test the technical proficiency of language models. Overall, InternLM-20B features the following characteristics: - Outstanding overall performance - Strong utility invocation capability - Supports a 16k context length (through inference extrapolation) - Better value alignment ## Performance Evaluation On the 5 capability dimensions proposed by OpenCompass, InternLM-20B has achieved excellent results (the bolded scores represent the best performances within the 13B-33B parameter range). | Capability | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B | |----------|-----------|------------|---------------|--------------|-----------|-----------|------------| | Language | 42.5 | 47 | 47.5 | **55** | 44.6 | 47.1 | 51.6 | | Knowledge | 58.2 | 58.3 | 48.9 | 60.1 | **64** | 66 | 67.7 | | Understanding | 45.5 | 50.9 | 58.1 | **67.3** | 50.6 | 54.2 | 60.8 | | Reasoning | 42.7 | 43.6 | 44.2 | **54.9** | 46.4 | 49.8 | 55 | | Examination | 37.3 | 45.2 | 51.8 | **62.5** | 47.4 | 49.7 | 57.3 | | Overall | 43.8 | 47.3 | 49.4 | **59.2** | 48.9 | 51.9 | 57.4 | The table below compares the performance of mainstream open-source models on some influential and typical datasets. 
| | Benchmarks | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B | |------|------------------|-----------|------------|---------------|--------------|-----------|-----------|------------| | Examination | MMLU | 47.73 | 54.99 | 59.55 | **62.05** | 58.73 | 63.71 | 69.75 | | | C-Eval (val) | 31.83 | 41.4 | **59.01** | 58.8 | 37.47 | 40.36 | 50.13 | | | AGI-Eval | 22.03 | 30.93 | 37.37 | **44.58** | 33.53 | 33.92 | 40.02 | | Knowledge | BoolQ | 78.75 | 82.42 | 67 | **87.46** | 84.43 | 86.61 | 87.74 | | | TriviaQA | 52.47 | 59.36 | 46.61 | 57.26 | **66.24** | 69.79 | 70.71 | | | NaturalQuestions | 20.17 | 24.85 | 16.32 | 25.15 | **30.89** | 33.41 | 34.16 | | Understanding | CMRC | 9.26 | 31.59 | 29.85 | **68.78** | 14.17 | 34.73 | 43.74 | | | CSL | 55 | 58.75 | 63.12 | **65.62** | 57.5 | 59.38 | 60 | | | RACE (middle) | 53.41 | 63.02 | 68.94 | **86.35** | 64.55 | 72.35 | 81.55 | | | RACE (high) | 47.63 | 58.86 | 67.18 | **83.28** | 62.61 | 68.01 | 79.93 | | | XSum | 20.37 | 23.37 | 25.23 | **35.54** | 20.55 | 19.91 | 25.38 | | Reasoning | WinoGrande | 64.64 | 64.01 | 67.32 | **69.38** | 66.85 | 69.38 | 69.77 | | | BBH | 37.93 | 45.62 | 48.98 | **52.51** | 49.98 | 58.38 | 64.91 | | | GSM8K | 20.32 | 29.57 | **52.62** | **52.62** | 42.3 | 54.44 | 63.31 | | | PIQA | 79.71 | 79.76 | 78.07 | 80.25 | **81.34** | 82.15 | 82.54 | | Programming | HumanEval | 14.02 | 18.9 | 17.07 | **25.61** | 17.68 | 18.9 | 26.22 | | | MBPP | 20.6 | 26.8 | 30.8 | **35.6** | 28.4 | 33.6 | 39.6 | Overall, InternLM-20B comprehensively outperforms open-source models in the 13B parameter range in terms of overall capabilities, and on inference evaluation sets, it approaches or even surpasses the performance of Llama-65B. ## Import from Transformers To load the InternLM 20B model using Transformers, use the following code: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True) >>> model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True).cuda() >>> model = model.eval() >>> output, history = model.chat(tokenizer, "Hello! Today is sunny, it is time to go out") >>> print(output) Hello! Today is sunny, and it sounds like a great day to go out an enjoy the weather. What would you like to do? ``` **Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information. ## Open Source License The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <[email protected]>. 
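Note that the snippet above loads the original full-precision checkpoint. For this GPTQ Int4 repository, loading should look roughly like the sketch below, assuming the weights are AutoGPTQ-compatible (the exact arguments are assumptions, not confirmed by the card):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "csdc-atl/internlm-chat-20b-GPTQ-Int4"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
# from_quantized loads GPTQ-quantized weights directly onto the given device.
model = AutoGPTQForCausalLM.from_quantized(repo, device="cuda:0", trust_remote_code=True)
model = model.eval()
```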
yumujiu/llama2-qlora-finetunined-french
yumujiu
2023-09-22T15:55:46Z
1
0
peft
[ "peft", "region:us" ]
null
2023-09-22T15:55:39Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
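For context, a minimal sketch that reproduces the quantization config above and attaches this adapter. The base model identifier is a placeholder, since the card does not name the base model the adapter targets:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "base-model-id" is hypothetical; replace with the actual base checkpoint.
base = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base, "yumujiu/llama2-qlora-finetunined-french")
```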
CyberHarem/mizuno_midori_idolmastercinderellagirls
CyberHarem
2023-09-22T15:41:46Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/mizuno_midori_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-22T15:29:46Z
--- license: mit datasets: - CyberHarem/mizuno_midori_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of mizuno_midori_idolmastercinderellagirls This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 4760, you need to download `4760/mizuno_midori_idolmastercinderellagirls.pt` as the embedding and `4760/mizuno_midori_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters (a loading sketch follows the step table below). **The best step we recommend is 4760**, with a score of 0.915. The trigger words are: 1. `mizuno_midori_idolmastercinderellagirls` 2. `black_hair, long_hair, ponytail, brown_eyes, blush, breasts` This model is not recommended for the following groups: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.873 | [Download](5100/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | ![pattern_5-5100](5100/previews/pattern_5.png) | [<NSFW, click to see>](5100/previews/pattern_6.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | **4760** | **0.915** | [**Download**](4760/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![pattern_4-4760](4760/previews/pattern_4.png) | ![pattern_5-4760](4760/previews/pattern_5.png) | [<NSFW, click to see>](4760/previews/pattern_6.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.908 | [Download](4420/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![pattern_4-4420](4420/previews/pattern_4.png) | ![pattern_5-4420](4420/previews/pattern_5.png) | [<NSFW, click to see>](4420/previews/pattern_6.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.913 | [Download](4080/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | 
![pattern_4-4080](4080/previews/pattern_4.png) | ![pattern_5-4080](4080/previews/pattern_5.png) | [<NSFW, click to see>](4080/previews/pattern_6.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.854 | [Download](3740/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![pattern_4-3740](3740/previews/pattern_4.png) | ![pattern_5-3740](3740/previews/pattern_5.png) | [<NSFW, click to see>](3740/previews/pattern_6.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.867 | [Download](3400/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![pattern_4-3400](3400/previews/pattern_4.png) | ![pattern_5-3400](3400/previews/pattern_5.png) | [<NSFW, click to see>](3400/previews/pattern_6.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.871 | [Download](3060/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | ![pattern_5-3060](3060/previews/pattern_5.png) | [<NSFW, click to see>](3060/previews/pattern_6.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.845 | [Download](2720/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![pattern_4-2720](2720/previews/pattern_4.png) | ![pattern_5-2720](2720/previews/pattern_5.png) | [<NSFW, click to see>](2720/previews/pattern_6.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | 
![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.856 | [Download](2380/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![pattern_4-2380](2380/previews/pattern_4.png) | ![pattern_5-2380](2380/previews/pattern_5.png) | [<NSFW, click to see>](2380/previews/pattern_6.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.863 | [Download](2040/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | ![pattern_5-2040](2040/previews/pattern_5.png) | [<NSFW, click to see>](2040/previews/pattern_6.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.829 | [Download](1700/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![pattern_4-1700](1700/previews/pattern_4.png) | ![pattern_5-1700](1700/previews/pattern_5.png) | [<NSFW, click to see>](1700/previews/pattern_6.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.831 | [Download](1360/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![pattern_4-1360](1360/previews/pattern_4.png) | ![pattern_5-1360](1360/previews/pattern_5.png) | [<NSFW, click to see>](1360/previews/pattern_6.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.815 | [Download](1020/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | ![pattern_5-1020](1020/previews/pattern_5.png) | [<NSFW, click to see>](1020/previews/pattern_6.png) | 
![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.768 | [Download](680/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![pattern_4-680](680/previews/pattern_4.png) | ![pattern_5-680](680/previews/pattern_5.png) | [<NSFW, click to see>](680/previews/pattern_6.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.561 | [Download](340/mizuno_midori_idolmastercinderellagirls.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![pattern_4-340](340/previews/pattern_4.png) | ![pattern_5-340](340/previews/pattern_5.png) | [<NSFW, click to see>](340/previews/pattern_6.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
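As a usage sketch for the pt/safetensors pairing described above (diffusers-style loading; whether diffusers can parse HCP-Diffusion's `.pt` embedding format directly is an assumption, and the paths are illustrative):

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Meina/MeinaMix_V11")

# Load the step-4760 embedding and LoRA together, as the card recommends.
pipe.load_textual_inversion("4760/mizuno_midori_idolmastercinderellagirls.pt",
                            token="mizuno_midori_idolmastercinderellagirls")
pipe.load_lora_weights("4760", weight_name="mizuno_midori_idolmastercinderellagirls.safetensors")

image = pipe("mizuno_midori_idolmastercinderellagirls, black_hair, long_hair, ponytail").images[0]
image.save("preview.png")
```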
LarryAIDraw/saigyouji_yuyuko_touhou
LarryAIDraw
2023-09-22T15:25:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-09-13T16:05:36Z
--- license: creativeml-openrail-m --- https://civitai.com/models/131028/saigyoujiyuyuko-touhou
tombm/bert-base-uncased-finetuned-cola
tombm
2023-09-22T15:21:45Z
31
0
transformers
[ "transformers", "pytorch", "bert", "generated_from_trainer", "dataset:glue", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-09-22T12:36:41Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5931 - Matthews Correlation: 0.5598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5206 | 1.0 | 535 | 0.4503 | 0.4984 | | 0.347 | 2.0 | 1070 | 0.4558 | 0.5816 | | 0.2461 | 3.0 | 1605 | 0.5931 | 0.5598 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.13.3
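A minimal inference sketch, assuming the checkpoint exposes a standard sequence-classification head (not part of the original card):

```python
from transformers import pipeline

# CoLA is a grammatical-acceptability task; the label names depend on the saved config.
classifier = pipeline("text-classification", model="tombm/bert-base-uncased-finetuned-cola")
print(classifier("The book was written by the author."))
```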
fgiannini69/ppo-Huggy
fgiannini69
2023-09-22T15:09:47Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-09-22T15:09:42Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: fgiannini69/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
VuongQuoc/checkpoints_2_microsoft_deberta_21_9
VuongQuoc
2023-09-22T15:07:19Z
5
0
transformers
[ "transformers", "pytorch", "deberta-v2", "multiple-choice", "generated_from_trainer", "endpoints_compatible", "region:us" ]
multiple-choice
2023-09-21T04:50:45Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: checkpoints_2_microsoft_deberta_21_9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # checkpoints_2_microsoft_deberta_21_9 This model was trained from scratch on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.8635 - Map@3: 0.8558 - Accuracy: 0.76 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Map@3 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | 0.6215 | 0.15 | 300 | 0.6511 | 0.8592 | 0.76 | | 0.5953 | 0.3 | 600 | 0.6929 | 0.8533 | 0.765 | | 0.5332 | 0.45 | 900 | 0.6665 | 0.8525 | 0.76 | | 0.587 | 0.6 | 1200 | 0.6638 | 0.855 | 0.775 | | 0.5626 | 0.75 | 1500 | 0.6476 | 0.8692 | 0.78 | | 0.6712 | 0.9 | 1800 | 0.6499 | 0.8700 | 0.785 | | 0.2181 | 1.05 | 2100 | 0.8619 | 0.8417 | 0.75 | | 0.2024 | 1.2 | 2400 | 0.8607 | 0.8467 | 0.75 | | 0.2571 | 1.35 | 2700 | 0.8282 | 0.8483 | 0.75 | | 0.2407 | 1.5 | 3000 | 0.8297 | 0.8558 | 0.765 | | 0.2282 | 1.65 | 3300 | 0.8635 | 0.8558 | 0.76 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
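For readers unfamiliar with the Map@3 metric reported above, here is a hedged reference implementation for the single-correct-answer case (a sketch, not the evaluation code actually used):

```python
def map_at_3(top3_predictions, labels):
    """Mean average precision @ 3 when each question has exactly one correct option."""
    total = 0.0
    for preds, label in zip(top3_predictions, labels):
        for rank, pred in enumerate(preds[:3], start=1):
            if pred == label:
                total += 1.0 / rank  # credit 1, 1/2, or 1/3 by rank of the hit
                break
    return total / len(labels)

print(map_at_3([["B", "A", "C"]], ["A"]))  # 0.5
```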
ShivamMangale/XLM-Roberta-base-finetuned-squad-only-es-12-epoch
ShivamMangale
2023-09-22T15:06:43Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:squad", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-09-22T14:26:41Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer datasets: - squad model-index: - name: XLM-Roberta-base-finetuned-squad-only-es-12-epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLM-Roberta-base-finetuned-squad-only-es-12-epoch This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
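A minimal usage sketch, assuming the checkpoint loads as a standard question-answering pipeline (not part of the original card; the example texts are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="ShivamMangale/XLM-Roberta-base-finetuned-squad-only-es-12-epoch")

# Per the model name, the fine-tuning data was presumably Spanish-only, so a Spanish example:
result = qa(question="¿Dónde se aloja el modelo?",
            context="El modelo se aloja en el Hugging Face Hub.")
print(result["answer"], result["score"])
```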
zyongqing/q-FrozenLake-v1-4x4-noSlippery
zyongqing
2023-09-22T15:04:32Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-09-22T15:04:31Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks;
# it downloads and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="zyongqing/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
necoarc/spatchgan-model
necoarc
2023-09-22T15:02:45Z
0
1
null
[ "arxiv:2103.16219", "license:bsd-3-clause", "region:us" ]
null
2023-09-22T14:05:05Z
--- license: bsd-3-clause --- Pretrained models for SPatchGAN (ICCV 2021) [<img src='https://img.shields.io/badge/arXiv-2103.16219-blue'>](https://arxiv.org/abs/2103.16219) [<img src='https://img.shields.io/badge/code-github-red'>](https://github.com/NetEase-GameAI/SPatchGAN)
CyberHarem/furusawa_yoriko_idolmastercinderellagirls
CyberHarem
2023-09-22T14:51:35Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/furusawa_yoriko_idolmastercinderellagirls", "license:mit", "region:us" ]
text-to-image
2023-09-22T14:42:20Z
--- license: mit datasets: - CyberHarem/furusawa_yoriko_idolmastercinderellagirls pipeline_tag: text-to-image tags: - art --- # Lora of furusawa_yoriko_idolmastercinderellagirls This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 3400, you need to download `3400/furusawa_yoriko_idolmastercinderellagirls.pt` as the embedding and `3400/furusawa_yoriko_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters. **The best step we recommend is 3400**, with a score of 0.965. The trigger words are: 1. `furusawa_yoriko_idolmastercinderellagirls` 2. `blue_eyes, brown_hair, long_hair, glasses, mole, mole_under_eye, smile, hairband, blush` This model is not recommended for the following groups: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals whose application scenarios demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:-------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.954 | [Download](5100/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | [<NSFW, click to see>](5100/previews/pattern_3.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | 4760 | 0.925 | [Download](4760/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | [<NSFW, click to see>](4760/previews/pattern_3.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | 4420 | 0.943 | [Download](4420/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | [<NSFW, click to see>](4420/previews/pattern_3.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.945 | [Download](4080/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | [<NSFW, click to see>](4080/previews/pattern_3.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.940 | [Download](3740/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | [<NSFW, click to see>](3740/previews/pattern_3.png) | 
![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | **3400** | **0.965** | [**Download**](3400/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | [<NSFW, click to see>](3400/previews/pattern_3.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.927 | [Download](3060/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | [<NSFW, click to see>](3060/previews/pattern_3.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.922 | [Download](2720/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | [<NSFW, click to see>](2720/previews/pattern_3.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.895 | [Download](2380/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | [<NSFW, click to see>](2380/previews/pattern_3.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.907 | [Download](2040/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | [<NSFW, click to see>](2040/previews/pattern_3.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.914 
| [Download](1700/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | [<NSFW, click to see>](1700/previews/pattern_3.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.920 | [Download](1360/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | [<NSFW, click to see>](1360/previews/pattern_3.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.859 | [Download](1020/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | [<NSFW, click to see>](1020/previews/pattern_3.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.810 | [Download](680/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | [<NSFW, click to see>](680/previews/pattern_3.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.653 | [Download](340/furusawa_yoriko_idolmastercinderellagirls.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | [<NSFW, click to see>](340/previews/pattern_3.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |