|
--- |
|
license: other |
|
license_name: fair-ai-public-license-1.0-sd |
|
license_link: https://freedevproject.org/faipl-1.0-sd/ |
|
language: |
|
- en |
|
base_model: |
|
- Laxhar/noobai-XL-Vpred-experiments |
|
pipeline_tag: text-to-image |
|
tags: |
|
- safetensors |
|
- diffusers |
|
- stable-diffusion |
|
- stable-diffusion-xl |
|
- art |
|
library_name: diffusers |
|
--- |
|
An anatomy fix made with a method similar to NoobaiCyberFix (https://civitai.com/models/913998/noobaicyberfix?modelVersionId=1022962), but applied to the v-pred model (0.75s).
|
|
|
(A comparison image is pending; the image below was generated with this model.)
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/649608ca0b01497fb78e2e5c/cT-4Ds516C77fBzxTLD4q.png) |
|
|
|
<h1 align="center"><strong style="font-size: 48px;">NoobAI XL V-Pred 0.75s</strong></h1> |
|
|
|
# Model Introduction |
|
|
|
This image generation model, based on Laxhar/noobai-XL_v1.0, leverages full Danbooru and e621 datasets with native tags and natural language captioning. |
|
|
|
Implemented as a v-prediction model (distinct from eps-prediction), it requires specific parameter configurations, detailed in the following sections.
|
|
|
Special thanks to my teammate euge for the coding work, and to the many helpful community members who provided technical support.
|
|
|
# ⚠️ IMPORTANT NOTICE ⚠️ |
|
|
|
## **THIS MODEL WORKS DIFFERENTLY FROM EPS MODELS!**
|
|
|
## **PLEASE READ THE GUIDE CAREFULLY!** |
|
|
|
## Model Details |
|
|
|
- **Developed by**: [Laxhar Lab](https://huggingface.co/Laxhar) |
|
- **Model Type**: Diffusion-based text-to-image generative model |
|
- **Fine-tuned from**: Laxhar/noobai-XL_v1.0 |
|
- **Sponsored by**: [Lanyun Cloud](https://cloud.lanyun.net)
|
|
|
--- |
|
|
|
# How to Use the Model
|
|
|
## Method I: [reForge](https://github.com/Panchovix/stable-diffusion-webui-reForge/tree/dev_upstream) |
|
|
|
1. (If you haven't installed reForge) Install reForge by following the instructions in the repository.
|
|
|
2. Launch WebUI and use the model as usual! |
|
|
|
## Method II: [ComfyUI](https://github.com/comfyanonymous/ComfyUI) |
|
|
|
Sample workflow with nodes:
|
|
|
[comfy_ui_workflow_sample](/Laxhar/noobai-XL-Vpred-0.5/blob/main/comfy_ui_workflow_sample.png) |
|
|
|
|
|
## Method III: [WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) |
|
|
|
Note that the `dev` branch is not stable and **may contain bugs**.
|
|
|
1. (If you haven't installed WebUI) Install WebUI by following the instructions in the repository.
|
2. Switch to `dev` branch: |
|
|
|
```bash |
|
git switch dev |
|
``` |
|
|
|
3. Pull latest updates: |
|
|
|
```bash |
|
git pull |
|
``` |
|
|
|
4. Launch WebUI and use the model as usual! |
|
|
|
## Method IV: [Diffusers](https://huggingface.co/docs/diffusers/en/index) |
|
|
|
```python |
|
import torch |
|
from diffusers import StableDiffusionXLPipeline |
|
from diffusers import EulerDiscreteScheduler |
|
|
|
ckpt_path = "/path/to/model.safetensors"  # path to the downloaded checkpoint

# Load the checkpoint directly from the single .safetensors file
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)

# v-prediction and zero-terminal-SNR rescaling are required for this model
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)

pipe.enable_xformers_memory_efficient_attention()  # optional; requires xformers
pipe = pipe.to("cuda")
|
|
|
prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)""" |
|
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro" |
|
|
|
image = pipe( |
|
prompt=prompt, |
|
negative_prompt=negative_prompt, |
|
width=832, |
|
height=1216, |
|
num_inference_steps=28, |
|
guidance_scale=5, |
|
generator=torch.Generator().manual_seed(42), |
|
).images[0] |
|
|
|
image.save("output.png") |
|
``` |
|
|
|
|
|
**Note**: Please make sure Git is installed and your environment is properly configured on your machine.
|
|
|
--- |
|
|
|
# Recommended Settings |
|
|
|
## Parameters |
|
|
|
- CFG: 4 ~ 5 |
|
- Steps: 28 ~ 35 |
|
- Sampling Method: **Euler** (⚠️ Other samplers will not work properly) |
|
- Resolution: Total area around 1024x1024. Best to choose from: 768x1344, **832x1216**, 896x1152, 1024x1024, 1152x896, 1216x832, 1344x768 |
|
|
|
## Prompts |
|
|
|
- Prompt Prefix: |
|
|
|
``` |
|
masterpiece, best quality, newest, absurdres, highres, safe, |
|
``` |
|
|
|
- Negative Prompt: |
|
|
|
``` |
|
nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro |
|
``` |
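
For reference, the sketch below plugs these recommended parameters and prompts into the Diffusers pipeline from Method IV. It assumes the `pipe` object from that example (Euler scheduler with v-prediction already configured) and uses a placeholder subject prompt; adjust to taste.

```python
# Minimal sketch: recommended settings applied with Diffusers. Assumes the `pipe`
# object from the Method IV example above (Euler scheduler with v_prediction and
# rescale_betas_zero_snr already configured).
prompt_prefix = "masterpiece, best quality, newest, absurdres, highres, safe, "
negative_prompt = (
    "nsfw, worst quality, old, early, low quality, lowres, signature, username, "
    "logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, "
    "feral, semi-anthro"
)

image = pipe(
    prompt=prompt_prefix + "1girl, solo",  # "1girl, solo" is a placeholder subject
    negative_prompt=negative_prompt,
    width=832,               # one of the recommended resolutions (832x1216)
    height=1216,
    num_inference_steps=30,  # recommended range: 28-35
    guidance_scale=4.5,      # recommended CFG: 4-5
).images[0]
image.save("sample.png")
```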
|
|
|
# Usage Guidelines |
|
|
|
## Caption |
|
|
|
``` |
|
<1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>, <other tags> |
|
``` |
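
As an illustration only, the snippet below assembles a prompt in that order, reusing tags from the Diffusers sample prompt in Method IV; which tags count as "special tags" here is an assumption.

```python
# Illustrative sketch only: hypothetical tags arranged in the caption order above,
# reusing tags from the Diffusers sample prompt. The exact grouping is an assumption.
components = [
    "1girl",                                        # <1girl/1boy/1other/...>
    "arlecchino (genshin impact)",                  # <character>
    "genshin impact",                               # <series>
    "artist:nixeu, artist:quasarcake",              # <artists>
    "safe, newest",                                 # <special tags> (e.g. rating/quality/date)
    "black theme, limited palette, high contrast",  # <general tags>
    "chromatic aberration, film grain",             # <other tags>
]
caption = ", ".join(components)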
|
|
|
## Quality Tags |
|
|
|
For quality tags, we evaluated image popularity through the following process: |
|
|
|
- Data normalization based on various sources and ratings. |
|
- Application of time-based decay coefficients according to date recency. |
|
- Ranking of images within the entire dataset based on this processing. |
|
|
|
Our ultimate goal is to ensure that quality tags effectively track user preferences in recent years. |
|
|
|
| Percentile Range | Quality Tags | |
|
| :--------------- | :------------- | |
|
| > 95th | masterpiece | |
|
| > 85th, <= 95th | best quality | |
|
| > 60th, <= 85th | good quality | |
|
| > 30th, <= 60th | normal quality | |
|
| <= 30th | worst quality | |
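
Expressed as code, the mapping from popularity percentile to quality tag is roughly the following sketch (the percentile itself comes from the ranking process described above and is not computed here):

```python
def quality_tag(percentile: float) -> str:
    """Map an image's popularity percentile (0-100) to a quality tag,
    following the table above."""
    if percentile > 95:
        return "masterpiece"
    if percentile > 85:
        return "best quality"
    if percentile > 60:
        return "good quality"
    if percentile > 30:
        return "normal quality"
    return "worst quality"
```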
|
|
|
## Aesthetic Tags |
|
|
|
| Tag | Description | |
|
| :-------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | |
|
| very awa | Top 5% of images in terms of aesthetic score by [waifu-scorer](https://huggingface.co/Eugeoter/waifu-scorer-v4-beta) | |
|
| worst aesthetic | All the bottom 5% of images in terms of aesthetic score by [waifu-scorer](https://huggingface.co/Eugeoter/waifu-scorer-v4-beta) and [aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2) | |
|
| ... | ... | |
|
|
|
## Date Tags |
|
|
|
There are two types of date tags: **year tags** and **period tags**. For year tags, use the `year xxxx` format, e.g., `year 2021`. For period tags, please refer to the following table:
|
|
|
| Year Range | Period tag | |
|
| :--------- | :--------- | |
|
| 2005-2010 | old | |
|
| 2011-2014 | early | |
|
| 2014-2017 | mid | |
|
| 2018-2020 | recent | |
|
| 2021-2024 | newest | |
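
As a rough sketch, the date tags for an image's year can be derived like this; note the table lists 2014 under both `early` and `mid`, and the sketch below assigns it to `early`:

```python
def date_tags(year: int) -> list[str]:
    """Return the year tag and period tag for a given year, per the table above."""
    if year <= 2010:
        period = "old"
    elif year <= 2014:
        period = "early"    # 2014 appears in two ranges in the table; assigned to 'early' here
    elif year <= 2017:
        period = "mid"
    elif year <= 2020:
        period = "recent"
    else:
        period = "newest"
    return [f"year {year}", period]
```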
|
|
|
## Dataset |
|
|
|
- Danbooru images up to the training cutoff date (approximately 2024-10-23)
|
- E621 images from the [e621-2024-webp-4Mpixel](https://huggingface.co/datasets/NebulaeWis/e621-2024-webp-4Mpixel) dataset on Hugging Face
|
|
|
**Communication** |
|
|
|
- **QQ Groups:** |
|
|
|
- 875042008 |
|
- 914818692 |
|
- 635772191 |
|
|
|
- **Discord:** [Laxhar Dream Lab SDXL NOOB](https://discord.com/invite/DKnFjKEEvH) |
|
|
|
**How to train a LoRA on a v-pred SDXL model**
|
|
|
This tutorial is intended for LoRA trainers using sd-scripts.
|
|
|
Article link: https://civitai.com/articles/8723
|
|
|
**Utility Tool** |
|
|
|
Laxhar Lab is training a dedicated ControlNet model for NoobXL, and the models are being released progressively. So far, the normal, depth, and canny models have been released.
|
|
|
Model link: https://civitai.com/models/929685 |
|
|
|
# Model License |
|
|
|
This model's license inherits the fair-ai-public-license-1.0-sd from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 and adds the following terms. Any use of this model and its variants is bound by this license.
|
|
|
## I. Usage Restrictions |
|
|
|
- Prohibited use for harmful, malicious, or illegal activities, including but not limited to harassment, threats, and spreading misinformation. |
|
- Prohibited generation of unethical or offensive content. |
|
- Prohibited violation of laws and regulations in the user's jurisdiction. |
|
|
|
## II. Commercial Prohibition |
|
|
|
We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products. |
|
|
|
## III. Open Source Community |
|
|
|
To foster a thriving open-source community, users MUST comply with the following requirements:
|
|
|
- Open source derivative models, merged models, LoRAs, and products based on the above models. |
|
- Share work details such as synthesis formulas, prompts, and workflows. |
|
- Follow the fair-ai-public-license to ensure derivative works remain open source. |
|
|
|
## IV. Disclaimer |
|
|
|
Generated models may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage. |
|
|
|
# Participants and Contributors |
|
|
|
## Participants |
|
|
|
- **L_A_X:** [Civitai](https://civitai.com/user/L_A_X) | [Liblib.art](https://www.liblib.art/userpage/9e1b16538b9657f2a737e9c2c6ebfa69) | [Huggingface](https://huggingface.co/LAXMAYDAY) |
|
- **li_li:** [Civitai](https://civitai.com/user/li_li) | [Huggingface](https://huggingface.co/heziiiii) |
|
- **nebulae:** [Civitai](https://civitai.com/user/kitarz) | [Huggingface](https://huggingface.co/NebulaeWis) |
|
- **Chenkin:** [Civitai](https://civitai.com/user/Chenkin) | [Huggingface](https://huggingface.co/windsingai) |
|
- **Euge:** [Civitai](https://civitai.com/user/Euge_) | [Huggingface](https://huggingface.co/Eugeoter) | [Github](https://github.com/Eugeoter) |
|
|
|
## Contributors |
|
|
|
- **Narugo1992**: Thanks to [narugo1992](https://github.com/narugo1992) and the [deepghs](https://huggingface.co/deepghs) team for open-sourcing various training sets, image processing tools, and models. |
|
|
|
- **Mikubill**: Thanks to [Mikubill](https://github.com/Mikubill) for the [Naifu](https://github.com/Mikubill/naifu) trainer. |
|
|
|
- **Onommai**: Thanks to [OnommAI](https://onomaai.com/) for open-sourcing a powerful base model. |
|
|
|
- **V-Prediction**: Thanks to the following individuals for their detailed instructions and experiments. |
|
|
|
- adsfssdf |
|
- [bluvoll](https://civitai.com/user/bluvoll) |
|
- [bvhari](https://github.com/bvhari) |
|
- [catboxanon](https://github.com/catboxanon) |
|
- [parsee-mizuhashi](https://huggingface.co/parsee-mizuhashi) |
|
- [very-aesthetic](https://github.com/very-aesthetic) |
|
- [momoura](https://civitai.com/user/momoura) |
|
- madmanfourohfour |
|
|
|
- **Community**: [aria1th261](https://civitai.com/user/aria1th261), [neggles](https://github.com/neggles/neurosis), [sdtana](https://huggingface.co/sdtana), [chewing](https://huggingface.co/chewing), [irldoggo](https://github.com/irldoggo), [reoe](https://huggingface.co/reoe), [kblueleaf](https://civitai.com/user/kblueleaf), [Yidhar](https://github.com/Yidhar), ageless, 白玲可, Creeper, KaerMorh, 吟游诗人, SeASnAkE, [zwh20081](https://civitai.com/user/zwh20081), Wenaka~喵, 稀里哗啦, 幸运二副, 昨日の約, 445, [EBIX](https://civitai.com/user/EBIX), [Sopp](https://huggingface.co/goyishsoyish), [Y_X](https://civitai.com/user/Y_X), [Minthybasis](https://civitai.com/user/Minthybasis), [Rakosz](https://civitai.com/user/Rakosz) |