pipeline_tag: text-to-image
---

## Training (`arcaillous-xl.safetensors`)
Configuration referenced from KBlueLeaf/Kohaku-XL-Zeta, trained on 2x RTX 3090 with sd-scripts.

```
NCCL_P2P_DISABLE=1 NCCL_IB_DISABLE=1 accelerate launch --num_cpu_threads_per_process ... \
--full_bf16 --mixed_precision="bf16" --save_precision="bf16" \
--ddp_timeout=10000000 \
--max_train_epochs 4 --save_every_n_epochs 1 --save_every_n_steps 50 \
```

## LoRA Training (`lora_arcain.safetensors`)

Applied to Illustrious-XL, this LoRA gives results similar to `arcaillous-xl`, but it can also be used with other ILXL-based models such as NoobAI-XL.
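For reference, a hypothetical sketch of attaching the LoRA to another ILXL-based checkpoint with the diffusers library; the file paths and prompt are placeholders, not part of this repo:

```python
# Hypothetical sketch (not from this repo): load an SDXL checkpoint, attach a
# kohya/sd-scripts-style LoRA file, and render one image with diffusers.
def generate_with_lora(base_ckpt: str, lora_path: str, prompt: str):
    """base_ckpt and lora_path are local .safetensors files (placeholders)."""
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_single_file(
        base_ckpt, torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(lora_path)  # accepts kohya-style LoRA safetensors
    return pipe(prompt, num_inference_steps=28).images[0]
```

For example, `generate_with_lora("NoobAI-XL.safetensors", "lora_arcain.safetensors", prompt)` would apply the LoRA on top of NoobAI-XL.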

Configuration is similar to that of `arcaillous-xl`.

```
NCCL_P2P_DISABLE=1 NCCL_IB_DISABLE=1 accelerate launch --num_cpu_threads_per_process 4 sdxl_train_network.py \
--network_train_unet_only \
--network_module="networks.lora" --network_dim 256 --network_alpha 128 \
--pretrained_model_name_or_path="/ai/data/sd/models/Stable-diffusion/Illustrious-XL-v0.1.safetensors" \
--dataset_config="arcain.lora.toml" \
--output_dir="results/lora" --output_name="lora_arcain" \
--save_model_as="safetensors" \
--gradient_accumulation_steps 32 \
--learning_rate=1e-5 --optimizer_type="Lion8bit" \
--lr_scheduler="constant_with_warmup" --lr_warmup_steps 100 --optimizer_args "weight_decay=0.01" "betas=0.9,0.95" --min_snr_gamma 5 \
--sdpa \
--no_half_vae \
--cache_latents --cache_latents_to_disk \
--gradient_checkpointing \
--full_bf16 --mixed_precision="bf16" --save_precision="bf16" \
--ddp_timeout=10000000 \
--max_train_epochs 4 --save_every_n_epochs 1 --save_every_n_steps 50 \
```
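As a sanity check on the schedule above: with `--gradient_accumulation_steps 32` on two GPUs, the effective batch size is the per-device batch size multiplied by 32 and by 2. The per-device value lives in the dataset `.toml` (not shown here), so the figure below is an assumption:

```python
# Effective batch size for the LoRA training command above.
num_gpus = 2           # 2x RTX 3090
grad_accum_steps = 32  # --gradient_accumulation_steps 32
per_device_batch = 4   # assumed; set via batch_size in the dataset .toml

effective_batch = per_device_batch * grad_accum_steps * num_gpus
print(effective_batch)  # → 256
```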