Panchovix committed
Commit 2019704
1 Parent(s): 3fdd4bb

Update README.md

Files changed (1): README.md (+272 -5)
README.md CHANGED (@@ -1,5 +1,272 @@)

The previous five-line front matter (license: other, license_name: other, license_link: LICENSE) is replaced with the updated model card below.
---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
base_model:
- Laxhar/noobai-XL-Vpred-0.75s
pipeline_tag: text-to-image
tags:
- safetensors
- diffusers
- stable-diffusion
- stable-diffusion-xl
- art
library_name: diffusers
---

A fix made with a method similar to NoobaiCyberFix (https://civitai.com/models/913998/noobaicyberfix?modelVersionId=1022962), but applied to the 0.75 v-pred model and combined with a perpendicular merge done in sd_mecha; recipe from: https://huggingface.co/Doctor-Shotgun/NoobAI-XL-Merges
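
For readers unfamiliar with the technique, here is a minimal, self-contained sketch of the perpendicular-component merge idea, applied per tensor. It is illustrative only and is not the actual sd_mecha recipe linked above; the function names and state-dict handling are made up for this example.

```python
import torch

def perpendicular_add(base: torch.Tensor, a: torch.Tensor, b: torch.Tensor,
                      alpha: float = 1.0) -> torch.Tensor:
    """Add to `a` the part of (b - base) that is orthogonal to (a - base)."""
    da = (a - base).flatten().float()
    db = (b - base).flatten().float()
    norm_sq = da.dot(da)
    if norm_sq == 0:
        # `a` is identical to the base here; fall back to adding the full delta.
        return a + alpha * (b - base)
    # Project db onto da, then keep only the orthogonal remainder.
    proj = (db.dot(da) / norm_sq) * da
    perp = (db - proj).reshape(a.shape).to(a.dtype)
    return a + alpha * perp

def merge_state_dicts(base_sd: dict, a_sd: dict, b_sd: dict, alpha: float = 1.0) -> dict:
    """Merge key-by-key; keys missing from any model are copied from `a_sd` unchanged."""
    out = {}
    for k, v in a_sd.items():
        if k in base_sd and k in b_sd and v.dtype.is_floating_point:
            out[k] = perpendicular_add(base_sd[k], v, b_sd[k], alpha)
        else:
            out[k] = v
    return out
```
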

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/649608ca0b01497fb78e2e5c/WfVAAFdLTVHgEC2NqnuJE.jpeg)

<h1 align="center"><strong style="font-size: 48px;">NoobAI XL V-Pred 0.75</strong></h1>

# Model Introduction

This image generation model, based on Laxhar/noobai-XL_v1.0, leverages the full Danbooru and e621 datasets with native tags and natural-language captioning.

Implemented as a v-prediction model (distinct from eps-prediction), it requires specific parameter configurations, detailed in the following sections.

Special thanks to my teammate euge for the coding work, and we're grateful for the technical support from many helpful community members.

# ⚠️ IMPORTANT NOTICE ⚠️

## **THIS MODEL WORKS DIFFERENTLY FROM EPS MODELS!**

## **PLEASE READ THE GUIDE CAREFULLY!**

## Model Details

- **Developed by**: [Laxhar Lab](https://huggingface.co/Laxhar)
- **Model Type**: Diffusion-based text-to-image generative model
- **Fine-tuned from**: Laxhar/noobai-XL_v1.0
- **Sponsored by**: [Lanyun Cloud](https://cloud.lanyun.net)

---

# How to Use the Model

## Method I: [reForge](https://github.com/Panchovix/stable-diffusion-webui-reForge/tree/dev_upstream)

1. (If you haven't installed reForge) Install reForge by following the instructions in the repository.

2. Launch WebUI and use the model as usual!

## Method II: [ComfyUI](https://github.com/comfyanonymous/ComfyUI)

Sample workflow with nodes:

[comfy_ui_workflow_sample](/Laxhar/noobai-XL-Vpred-0.5/blob/main/comfy_ui_workflow_sample.png)

## Method III: [WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)

Note that the `dev` branch is not stable and **may contain bugs**.

1. (If you haven't installed WebUI) Install WebUI by following the instructions in the repository.
2. Switch to the `dev` branch:

   ```bash
   git switch dev
   ```

3. Pull the latest updates:

   ```bash
   git pull
   ```

4. Launch WebUI and use the model as usual!

## Method IV: [Diffusers](https://huggingface.co/docs/diffusers/en/index)

```python
import torch
from diffusers import EulerDiscreteScheduler, StableDiffusionXLPipeline

# Load the single-file checkpoint.
ckpt_path = "/path/to/model.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(
    ckpt_path,
    use_safetensors=True,
    torch_dtype=torch.float16,
)

# v-prediction model: the scheduler needs v_prediction and zero-terminal-SNR rescaling.
scheduler_args = {"prediction_type": "v_prediction", "rescale_betas_zero_snr": True}
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config, **scheduler_args)
pipe.enable_xformers_memory_efficient_attention()
pipe = pipe.to("cuda")

prompt = """masterpiece, best quality,artist:john_kafka,artist:nixeu,artist:quasarcake, chromatic aberration, film grain, horror \(theme\), limited palette, x-shaped pupils, high contrast, color contrast, cold colors, arlecchino \(genshin impact\), black theme, gritty, graphite \(medium\)"""
negative_prompt = "nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832,
    height=1216,
    num_inference_steps=28,
    guidance_scale=5,
    generator=torch.Generator().manual_seed(42),
).images[0]

image.save("output.png")
```

**Note**: Please make sure Git is installed and your environment is properly configured on your machine.

---

# Recommended Settings

## Parameters

- CFG: 4 ~ 5
- Steps: 28 ~ 35
- Sampling Method: **Euler** (⚠️ Other samplers will not work properly)
- Resolution: Total area around 1024x1024. Best to choose from: 768x1344, **832x1216**, 896x1152, 1024x1024, 1152x896, 1216x832, 1344x768 (see the helper sketch after this list)

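
If you need a resolution for an arbitrary aspect ratio, a small helper like the following (illustrative only, not part of any tool above) snaps to the nearest recommended bucket:

```python
# Recommended SDXL buckets from the list above (total area around 1024x1024).
RECOMMENDED_RESOLUTIONS = [
    (768, 1344), (832, 1216), (896, 1152), (1024, 1024),
    (1152, 896), (1216, 832), (1344, 768),
]

def closest_resolution(target_aspect: float):
    """Return the (width, height) bucket whose aspect ratio is closest to target_aspect."""
    return min(RECOMMENDED_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target_aspect))

print(closest_resolution(2 / 3))   # portrait request  -> (832, 1216)
print(closest_resolution(16 / 9))  # landscape request -> (1344, 768)
```
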
## Prompts

- Prompt Prefix:

  ```
  masterpiece, best quality, newest, absurdres, highres, safe,
  ```

- Negative Prompt:

  ```
  nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro
  ```

# Usage Guidelines

## Caption

```
<1girl/1boy/1other/...>, <character>, <series>, <artists>, <special tags>, <general tags>, <other tags>
```

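
As an illustration of this ordering, here is a tiny prompt-builder sketch that also prepends the recommended quality prefix; the helper name and its fields are hypothetical and not part of any official tooling.

```python
QUALITY_PREFIX = "masterpiece, best quality, newest, absurdres, highres, safe"

def build_prompt(subject, character="", series="", artists=(), special_tags=(),
                 general_tags=(), other_tags=()):
    """Assemble a prompt following the caption order above; empty fields are skipped."""
    parts = [QUALITY_PREFIX, subject, character, series,
             *artists, *special_tags, *general_tags, *other_tags]
    return ", ".join(p for p in parts if p)

print(build_prompt(
    subject="1girl",
    character="arlecchino (genshin impact)",
    series="genshin impact",
    artists=["artist:nixeu"],
    general_tags=["black theme", "high contrast"],
))
```
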
## Quality Tags

For quality tags, we evaluated image popularity through the following process:

- Data normalization based on various sources and ratings.
- Application of time-based decay coefficients according to date recency.
- Ranking of images within the entire dataset based on this processing.

Our ultimate goal is to ensure that quality tags effectively track user preferences in recent years.

| Percentile Range | Quality Tags   |
| :--------------- | :------------- |
| > 95th           | masterpiece    |
| > 85th, <= 95th  | best quality   |
| > 60th, <= 85th  | good quality   |
| > 30th, <= 60th  | normal quality |
| <= 30th          | worst quality  |

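
Read literally, the table maps a popularity percentile to a single quality tag; a small sketch of that mapping (the function name is illustrative) follows:

```python
def quality_tag(percentile: float) -> str:
    """Map a popularity percentile (0-100) to the quality tag brackets in the table above."""
    if percentile > 95:
        return "masterpiece"
    if percentile > 85:
        return "best quality"
    if percentile > 60:
        return "good quality"
    if percentile > 30:
        return "normal quality"
    return "worst quality"

assert quality_tag(99) == "masterpiece"
assert quality_tag(50) == "normal quality"
```
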
## Aesthetic Tags

| Tag             | Description |
| :-------------- | :---------- |
| very awa        | Top 5% of images in terms of aesthetic score by [waifu-scorer](https://huggingface.co/Eugeoter/waifu-scorer-v4-beta) |
| worst aesthetic | The bottom 5% of images in terms of aesthetic score by [waifu-scorer](https://huggingface.co/Eugeoter/waifu-scorer-v4-beta) and [aesthetic-shadow-v2](https://huggingface.co/shadowlilac/aesthetic-shadow-v2) |
| ...             | ...         |

## Date Tags

There are two types of date tags: **year tags** and **period tags**. For year tags, use the `year xxxx` format, e.g., `year 2021`. For period tags, please refer to the following table:

| Year Range | Period tag |
| :--------- | :--------- |
| 2005-2010  | old        |
| 2011-2014  | early      |
| 2014-2017  | mid        |
| 2018-2020  | recent     |
| 2021-2024  | newest     |

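
A literal reading of the table as a lookup, for illustration only; note that the source table lists 2014 in both the `early` and `mid` ranges, and this sketch assigns it to `early`.

```python
def period_tag(year: int):
    """Map a year to the period tag ranges in the table above; returns None outside them."""
    if 2005 <= year <= 2010:
        return "old"
    if 2011 <= year <= 2014:
        return "early"
    if 2015 <= year <= 2017:
        return "mid"
    if 2018 <= year <= 2020:
        return "recent"
    if 2021 <= year <= 2024:
        return "newest"
    return None

print(period_tag(2021))  # newest
```
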
## Dataset

- The latest Danbooru images up to the training date (approximately before 2024-10-23)
- E621 images from the [e621-2024-webp-4Mpixel](https://huggingface.co/datasets/NebulaeWis/e621-2024-webp-4Mpixel) dataset on Hugging Face

**Communication**

- **QQ Groups:**

  - 875042008
  - 914818692
  - 635772191

- **Discord:** [Laxhar Dream Lab SDXL NOOB](https://discord.com/invite/DKnFjKEEvH)

**How to train a LoRA on a v-pred SDXL model**

A tutorial intended for LoRA trainers who use sd-scripts.

Article link: https://civitai.com/articles/8723

**Utility Tool**

Laxhar Lab is training a dedicated ControlNet model for NoobXL, and the models are being released progressively. So far, the normal, depth, and canny models have been released.

Model link: https://civitai.com/models/929685

# Model License

This model's license inherits the fair-ai-public-license-1.0-sd from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 and adds the following terms. Any use of this model and its variants is bound by this license.

## I. Usage Restrictions

- Prohibited use for harmful, malicious, or illegal activities, including but not limited to harassment, threats, and spreading misinformation.
- Prohibited generation of unethical or offensive content.
- Prohibited violation of laws and regulations in the user's jurisdiction.

## II. Commercial Prohibition

We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products.

## III. Open Source Community

To foster a thriving open-source community, users MUST comply with the following requirements:

- Open source derivative models, merged models, LoRAs, and products based on the above models.
- Share work details such as synthesis formulas, prompts, and workflows.
- Follow the fair-ai-public-license to ensure derivative works remain open source.

## IV. Disclaimer

Generated models may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage.

# Participants and Contributors

## Participants

- **L_A_X:** [Civitai](https://civitai.com/user/L_A_X) | [Liblib.art](https://www.liblib.art/userpage/9e1b16538b9657f2a737e9c2c6ebfa69) | [Huggingface](https://huggingface.co/LAXMAYDAY)
- **li_li:** [Civitai](https://civitai.com/user/li_li) | [Huggingface](https://huggingface.co/heziiiii)
- **nebulae:** [Civitai](https://civitai.com/user/kitarz) | [Huggingface](https://huggingface.co/NebulaeWis)
- **Chenkin:** [Civitai](https://civitai.com/user/Chenkin) | [Huggingface](https://huggingface.co/windsingai)
- **Euge:** [Civitai](https://civitai.com/user/Euge_) | [Huggingface](https://huggingface.co/Eugeoter) | [Github](https://github.com/Eugeoter)

## Contributors

- **Narugo1992**: Thanks to [narugo1992](https://github.com/narugo1992) and the [deepghs](https://huggingface.co/deepghs) team for open-sourcing various training sets, image processing tools, and models.

- **Mikubill**: Thanks to [Mikubill](https://github.com/Mikubill) for the [Naifu](https://github.com/Mikubill/naifu) trainer.

- **Onommai**: Thanks to [OnommAI](https://onomaai.com/) for open-sourcing a powerful base model.

- **V-Prediction**: Thanks to the following individuals for their detailed instructions and experiments.

  - adsfssdf
  - [bluvoll](https://civitai.com/user/bluvoll)
  - [bvhari](https://github.com/bvhari)
  - [catboxanon](https://github.com/catboxanon)
  - [parsee-mizuhashi](https://huggingface.co/parsee-mizuhashi)
  - [very-aesthetic](https://github.com/very-aesthetic)
  - [momoura](https://civitai.com/user/momoura)
  - madmanfourohfour

- **Community**: [aria1th261](https://civitai.com/user/aria1th261), [neggles](https://github.com/neggles/neurosis), [sdtana](https://huggingface.co/sdtana), [chewing](https://huggingface.co/chewing), [irldoggo](https://github.com/irldoggo), [reoe](https://huggingface.co/reoe), [kblueleaf](https://civitai.com/user/kblueleaf), [Yidhar](https://github.com/Yidhar), ageless, 白玲可, Creeper, KaerMorh, 吟游诗人, SeASnAkE, [zwh20081](https://civitai.com/user/zwh20081), Wenaka~喵, 稀里哗啦, 幸运二副, 昨日の約, 445, [EBIX](https://civitai.com/user/EBIX), [Sopp](https://huggingface.co/goyishsoyish), [Y_X](https://civitai.com/user/Y_X), [Minthybasis](https://civitai.com/user/Minthybasis), [Rakosz](https://civitai.com/user/Rakosz)