---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
  - en
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
datasets:
  - pls2000/aiart_channel_nai3_geachu
pipeline_tag: text-to-image
---

## Training (arcaillous-xl.safetensors)

The configuration is based on KBlueLeaf/Kohaku-XL-Zeta; training was done on 2x RTX 3090 using sd-scripts.
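For orientation, the effective batch size of a run like this is the product of the GPU count, the per-device batch size (set in the dataset TOML, which is not included here), and the gradient accumulation steps from the command below. A toy calculation, with the per-device batch size of 4 assumed purely for illustration:

```python
def effective_batch_size(num_gpus: int, per_device_batch: int, grad_accum_steps: int) -> int:
    """Samples contributing to one optimizer step under DDP + gradient accumulation."""
    return num_gpus * per_device_batch * grad_accum_steps

# 2x 3090, hypothetical per-device batch of 4, accumulation 64 (full-model run)
print(effective_batch_size(2, 4, 64))  # -> 512
```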

### Command args

```bash
NCCL_P2P_DISABLE=1 NCCL_IB_DISABLE=1 accelerate launch --num_cpu_threads_per_process 4 sdxl_train.py \
        --pretrained_model_name_or_path="/ai/data/sd/models/Stable-diffusion/Illustrious-XL-v0.1.safetensors" \
        --dataset_config="arcaillous-xl.toml" \
        --output_dir="results/ckpt" --output_name="arcaillous-xl" \
        --save_model_as="safetensors" \
        --gradient_accumulation_steps 64 \
        --learning_rate=1e-5 --optimizer_type="Lion8bit" \
        --lr_scheduler="constant_with_warmup" --lr_warmup_steps 100 --optimizer_args "weight_decay=0.01" "betas=0.9,0.95" --min_snr_gamma 5 \
        --sdpa \
        --no_half_vae \
        --cache_latents --cache_latents_to_disk \
        --gradient_checkpointing \
        --full_bf16 --mixed_precision="bf16" --save_precision="bf16" \
        --ddp_timeout=10000000 \
        --max_train_epochs 4 --save_every_n_epochs 1 --save_every_n_steps 50
```
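The dataset definition lives in `arcaillous-xl.toml`, which is not included in this repo. Purely as orientation, a minimal file in sd-scripts' dataset config format could look like the following (the directory, resolution, repeats, and batch size are placeholders, not the actual training settings):

```toml
[general]
enable_bucket = true
caption_extension = ".txt"

[[datasets]]
resolution = 1024
batch_size = 4

  [[datasets.subsets]]
  image_dir = "/path/to/dataset"   # placeholder path
  num_repeats = 1
```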

## LoRA Training (lora_arcain.safetensors)

Results are similar to arcaillous-xl when applied on top of Illustrious-xl, but this LoRA can also be applied to other ILXL-based models such as NoobAI-XL.

The configuration is similar to that of arcaillous-xl.

```bash
NCCL_P2P_DISABLE=1 NCCL_IB_DISABLE=1 accelerate launch --num_cpu_threads_per_process 4 sdxl_train_network.py \
        --network_train_unet_only \
        --network_module="networks.lora" --network_dim 256 --network_alpha 128 \
        --pretrained_model_name_or_path="/ai/data/sd/models/Stable-diffusion/Illustrious-XL-v0.1.safetensors" \
        --dataset_config="arcain.lora.toml" \
        --output_dir="results/lora" --output_name="lora_arcain" \
        --save_model_as="safetensors" \
        --gradient_accumulation_steps 32 \
        --learning_rate=1e-5 --optimizer_type="Lion8bit" \
        --lr_scheduler="constant_with_warmup" --lr_warmup_steps 100 --optimizer_args "weight_decay=0.01" "betas=0.9,0.95" --min_snr_gamma 5 \
        --sdpa \
        --no_half_vae \
        --cache_latents --cache_latents_to_disk \
        --gradient_checkpointing \
        --full_bf16 --mixed_precision="bf16" --save_precision="bf16" \
        --ddp_timeout=10000000 \
        --max_train_epochs 4 --save_every_n_epochs 1 --save_every_n_steps 50
```
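The `--network_dim`/`--network_alpha` pair sets the scale at which the learned low-rank delta is merged into the base weights: LoRA computes `W' = W + (alpha/dim) * B @ A` (here `128/256 = 0.5`). Because the file stores only this delta relative to the base UNet, it can be layered onto any checkpoint sharing the same architecture, which is why it transfers to other ILXL-derived models. A minimal NumPy sketch of the merge, with toy matrix sizes rather than real UNet dimensions:

```python
import numpy as np

def apply_lora(W, A, B, alpha, dim):
    """Merge a LoRA delta into a base weight matrix: W' = W + (alpha/dim) * B @ A."""
    scale = alpha / dim
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, dim, alpha = 8, 8, 4, 2      # toy sizes; this run used dim=256, alpha=128
W = rng.standard_normal((d_out, d_in))    # base weight
A = rng.standard_normal((dim, d_in))      # "down" projection
B = np.zeros((d_out, dim))                # "up" projection (initialized to zero in training)

# With B still zero, the delta is zero and the base model is unchanged.
assert np.allclose(apply_lora(W, A, B, alpha, dim), W)
```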