06/06/2023 08:33:12 AM Seed: 555

06/06/2023 08:33:12 AM unet attention_head_dim: 8

06/06/2023 08:33:12 AM Inferred yaml: v1-inference.yaml, attn: sd1, prediction_type: epsilon

06/06/2023 08:33:27 AM Enabled xformers

06/06/2023 08:33:28 AM Successfully compiled models

06/06/2023 08:33:28 AM * DLMA resolution 512, buckets: [[512, 512], [576, 448], [448, 576], [640, 384], [384, 640], [768, 320], [320, 768], [896, 256], [256, 896], [1024, 256], [256, 1024]]
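[Note: every bucket above fits within the 512×512 pixel budget, with both dimensions a multiple of 64. A minimal sketch of one plausible area-capped bucketing scheme is below — `make_buckets` is a hypothetical helper, and it does not reproduce the logged list exactly (the trainer skips some widths such as 704 and 832), so the actual DLMA generator differs.]

```python
# Hypothetical sketch of area-capped aspect-ratio bucketing (illustrative only;
# the trainer's DLMA class uses its own, slightly different bucket list).
def make_buckets(base=512, step=64, max_width=1024):
    max_area = base * base          # cap pixel count at the square resolution
    buckets = [(base, base)]
    w, h = base, base
    while w + step <= max_width:
        w += step
        while w * h > max_area:     # shrink height until the area fits again
            h -= step
        buckets.append((w, h))      # landscape bucket
        buckets.append((h, w))      # mirrored portrait bucket
    return buckets
```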
|
06/06/2023 08:33:28 AM Preloading images...

06/06/2023 08:38:21 AM * Removed 1628 images from the training set to use for validation

06/06/2023 08:38:21 AM * DLMA initialized with 1628 images.

06/06/2023 08:38:22 AM ** Dataset 'val': 411 batches, num_images: 1644, batch_size: 4
|
06/06/2023 08:38:22 AM * Aspect ratio bucket (256, 896) has only 1 image. At batch size 4 this makes for an effective multiplier of 4.0, which may cause problems. Consider adding 3 or more images for aspect ratio 2:7, or reducing your batch_size.
|
06/06/2023 08:38:22 AM * DLMA initialized with 9223 images.

06/06/2023 08:38:22 AM ** Dataset 'train': 2310 batches, num_images: 9240, batch_size: 4
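[Note: the image counts grow slightly (1628 → 1644 val, 9223 → 9240 train) because each aspect-ratio bucket is padded up to a full batch, per bucket, before batching — 2310 × 4 = 9240 and 411 × 4 = 1644 exactly. The warning's "effective multiplier of 4.0" for a one-image bucket follows from the same ceiling arithmetic; a minimal sketch (`bucket_padding` is a hypothetical helper, not the trainer's code):]

```python
import math

def bucket_padding(num_images, batch_size):
    """Pad one bucket up to a multiple of batch_size; return the padded count
    and the effective repeat multiplier for images in that bucket."""
    batches = math.ceil(num_images / batch_size)
    padded = batches * batch_size
    return padded, padded / num_images

# The (256, 896) bucket holds a single image at batch_size 4:
bucket_padding(1, 4)    # -> (4, 4.0): that one image is sampled 4x as often
```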
|
06/06/2023 08:38:22 AM * text encoder optimizer: AdamW (196 parameters) *

06/06/2023 08:38:22 AM    lr: 1.5e-07, betas: [0.9, 0.999], epsilon: 1e-08, weight_decay: 0.01 *

06/06/2023 08:38:22 AM * unet optimizer: AdamW (686 parameters) *

06/06/2023 08:38:22 AM    lr: 5e-08, betas: [0.9, 0.999], epsilon: 1e-08, weight_decay: 0.01 *
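[Note: both optimizers are AdamW with decoupled weight decay, differing only in learning rate (1.5e-7 for the text encoder, 5e-8 for the unet). A scalar sketch of the update rule those hyperparameters feed into — illustrative only, not the trainer's implementation:]

```python
import math

def adamw_step(theta, grad, m, v, t, lr, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One scalar AdamW update with decoupled weight decay,
    using the betas/epsilon/weight_decay from this run's config."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad * grad     # second-moment EMA
    m_hat = m / (1 - beta1 ** t)                  # bias correction, step t >= 1
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * theta)
    return theta, m, v
```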
|
06/06/2023 08:38:22 AM Grad scaler enabled: True (amp mode)
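[Note: with AMP, the grad scaler multiplies the loss by a large scale so fp16 gradients don't underflow, skips steps whose gradients overflow, and adapts the scale over time. A minimal bookkeeping sketch in the style of `torch.cuda.amp.GradScaler` (its defaults assumed: init scale 65536, halve on overflow, double after 2000 clean steps):]

```python
class LossScaler:
    """Minimal dynamic loss-scale bookkeeping, sketching what the enabled
    grad scaler does: shrink on overflow, grow after a run of clean steps."""
    def __init__(self, init_scale=65536.0, growth=2.0, backoff=0.5, interval=2000):
        self.scale = init_scale
        self.growth, self.backoff, self.interval = growth, backoff, interval
        self._good_steps = 0

    def update(self, found_inf):
        if found_inf:                    # overflowed grads: step is skipped, scale shrinks
            self.scale *= self.backoff
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.interval:
                self.scale *= self.growth
                self._good_steps = 0
```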
|
06/06/2023 08:38:22 AM Pretraining GPU Memory: 7007 / 24576 MB

06/06/2023 08:38:22 AM saving ckpts every 1000000000.0 minutes

06/06/2023 08:38:22 AM saving ckpts every 25 epochs
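[Note: with epoch_len 2310 batches and a save every 25 epochs, periodic checkpoints land at global steps 25 × 2310 = 57750, 115500, 173250, and a final save at 231000 — exactly the steps in the save lines later in this log (the final checkpoint is tagged ep99 because epochs are zero-indexed). A quick arithmetic sketch, with `save_steps` a hypothetical helper:]

```python
def save_steps(epoch_len, save_every, total_epochs):
    """Global steps at which periodic checkpoints land."""
    return [ep * epoch_len for ep in range(save_every, total_epochs + 1, save_every)]

save_steps(2310, 25, 100)   # -> [57750, 115500, 173250, 231000]
```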
|
06/06/2023 08:38:22 AM unet device: cuda:0, precision: torch.float32, training: True

06/06/2023 08:38:22 AM text_encoder device: cuda:0, precision: torch.float32, training: True

06/06/2023 08:38:22 AM vae device: cuda:0, precision: torch.float16, training: False

06/06/2023 08:38:22 AM scheduler: <class 'diffusers.schedulers.scheduling_ddpm.DDPMScheduler'>
|
06/06/2023 08:38:22 AM Project name: vodka_v4_2

06/06/2023 08:38:22 AM grad_accum: 1

06/06/2023 08:38:22 AM batch_size: 4

06/06/2023 08:38:22 AM epoch_len: 2310
|
06/07/2023 02:35:05 AM Saving model, 25 epochs at step 57750

06/07/2023 02:35:05 AM * Saving diffusers model to logs/vodka_v4_2_20230606-083312/ckpts/vodka_v4_2-ep25-gs57750

06/07/2023 02:35:10 AM * Saving SD model to ./vodka_v4_2-ep25-gs57750.ckpt

06/07/2023 06:18:16 PM Saving model, 25 epochs at step 115500

06/07/2023 06:18:16 PM * Saving diffusers model to logs/vodka_v4_2_20230606-083312/ckpts/vodka_v4_2-ep50-gs115500

06/07/2023 06:18:31 PM * Saving SD model to ./vodka_v4_2-ep50-gs115500.ckpt

06/08/2023 11:19:53 AM Saving model, 25 epochs at step 173250

06/08/2023 11:19:53 AM * Saving diffusers model to logs/vodka_v4_2_20230606-083312/ckpts/vodka_v4_2-ep75-gs173250

06/08/2023 11:20:17 AM * Saving SD model to ./vodka_v4_2-ep75-gs173250.ckpt

06/09/2023 03:45:11 AM * Saving diffusers model to logs/vodka_v4_2_20230606-083312/ckpts/last-vodka_v4_2-ep99-gs231000

06/09/2023 03:45:15 AM * Saving SD model to ./last-vodka_v4_2-ep99-gs231000.ckpt
|
06/09/2023 03:45:33 AM Training complete
|
06/09/2023 03:45:33 AM Total training time: 4027.18 minutes, total steps: 231000

06/09/2023 03:45:33 AM Average epoch time: 40.23 minutes
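[Note: a quick consistency check on the totals. 4027.18 minutes over 100 epochs gives ~40.27 min/epoch, slightly above the logged 40.23, so the logged average presumably excludes non-training overhead such as the ~5-minute image preload; 231000 steps / 100 epochs recovers the epoch_len of 2310.]

```python
total_minutes, epochs, total_steps = 4027.18, 100, 231000
avg_epoch = total_minutes / epochs          # ~40.27 min/epoch vs. 40.23 logged
steps_per_epoch = total_steps // epochs     # 2310, matching epoch_len above
```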
|
06/09/2023 03:45:33 AM  ***************************

06/09/2023 03:45:33 AM  **** Finished training ****

06/09/2023 03:45:33 AM  ***************************
|
|