ujin-song committed on
Commit
4fba92e
1 Parent(s): 8e12b4e

upload experiments folder


Subdirectories: 'multi-concept' (empty) and 'single-concept', which contains the 'elsa' and 'moana' folders

experiments/.DS_Store ADDED
Binary file (6.15 kB).
 
experiments/single-concept/.DS_Store ADDED
Binary file (6.15 kB).
 
experiments/single-concept/elsa/.DS_Store ADDED
Binary file (6.15 kB).
 
experiments/single-concept/elsa/0022_elsa_ortho.yml ADDED
@@ -0,0 +1,96 @@
+ # GENERATE TIME: Fri May 24 08:49:55 2024
+ # CMD:
+ # train_edlora.py -opt ortho_datasets/train_configs/ortho/0022_elsa_ortho.yml
+
+ name: 0022_elsa_ortho
+ manual_seed: 1022
+ mixed_precision: fp16
+ gradient_accumulation_steps: 1
+
+ # dataset and data loader settings
+ datasets:
+   train:
+     name: LoraDataset
+     concept_list: ortho_datasets/data_configs/elsa.json
+     use_caption: true
+     use_mask: true
+     instance_transform:
+       - { type: HumanResizeCropFinalV3, size: 512, crop_p: 0.5 }
+       - { type: ToTensor }
+       - { type: Normalize, mean: [ 0.5 ], std: [ 0.5 ] }
+       - { type: ShuffleCaption, keep_token_num: 1 }
+       - { type: EnhanceText, enhance_type: human }
+     replace_mapping:
+       <TOK>: <elsa1> <elsa2>
+     batch_size_per_gpu: 2
+     dataset_enlarge_ratio: 500
+
+   val_vis:
+     name: PromptDataset
+     prompts: datasets/validation_prompts/single-concept/characters/test_girl.txt
+     num_samples_per_prompt: 8
+     latent_size: [ 4, 64, 64 ]
+     replace_mapping:
+       <TOK>: <elsa1> <elsa2>
+     batch_size_per_gpu: 4
+
+ models:
+   pretrained_path: nitrosocke/mo-di-diffusion
+   enable_edlora: true  # true means ED-LoRA, false means vanilla LoRA
+   finetune_cfg:
+     text_embedding:
+       enable_tuning: true
+       lr: !!float 1e-3
+     text_encoder:
+       enable_tuning: true
+       lora_cfg:
+         rank: 5
+         alpha: 1.0
+         where: CLIPAttention
+       lr: !!float 1e-5
+     unet:
+       enable_tuning: true
+       lora_cfg:
+         rank: 5
+         alpha: 1.0
+         where: Attention
+       lr: !!float 1e-4
+   new_concept_token: <elsa1>+<elsa2>
+   initializer_token: <rand-0.013>+man
+   noise_offset: 0.01
+   attn_reg_weight: 0.01
+   reg_full_identity: false
+   use_mask_loss: true
+   gradient_checkpoint: false
+   enable_xformers: true
+
+ # path
+ path:
+   pretrain_network: ~
+
+ # training settings
+ train:
+   optim_g:
+     type: AdamW
+     lr: !!float 0.0  # no use since we define different component lr in model
+     weight_decay: 0.01
+     betas: [ 0.9, 0.999 ]  # align with taming
+
+   # dropkv
+   unet_kv_drop_rate: 0
+   scheduler: linear
+   emb_norm_threshold: !!float 5.5e-1
+
+ # validation settings
+ val:
+   val_during_save: true
+   compose_visualize: true
+   alpha_list: [0, 0.7, 1.0]  # 0 means only visualize embedding (without lora weight)
+   sample:
+     num_inference_steps: 50
+     guidance_scale: 7.5
+
+ # logging settings
+ logger:
+   print_freq: 10
+   save_checkpoint_freq: !!float 10000
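The `new_concept_token` and `initializer_token` fields in the config above are `+`-separated parallel lists. A minimal Python sketch (not the repository's actual code, just an illustration of the convention) of how such pairs line up, consistent with the initialization messages in the training log:

```python
# Hypothetical illustration: pair each new concept token with its initializer.
# "<rand-N>" entries denote random initialization with std N; a plain word
# (here "man") reuses an existing vocabulary token's embedding.
new_concept_token = "<elsa1>+<elsa2>"
initializer_token = "<rand-0.013>+man"

pairs = list(zip(new_concept_token.split("+"), initializer_token.split("+")))
print(pairs)  # [('<elsa1>', '<rand-0.013>'), ('<elsa2>', 'man')]
```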
experiments/single-concept/elsa/models/edlora_model-latest.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8881600a7977e5baf6b5cbabf4dbe83b5f2d2767b26d780ada566bbc4259092
+ size 35173046
experiments/single-concept/elsa/train_0022_elsa_ortho_20240524_084955.log ADDED
@@ -0,0 +1,193 @@
+ 2024-05-24 08:49:55,814 INFO: Distributed environment: MULTI_GPU Backend: nccl
+ Num processes: 2
+ Process index: 0
+ Local process index: 0
+ Device: cuda:0
+
+ Mixed precision type: fp16
+
+ 2024-05-24 08:49:55,814 INFO:
+   name: 0022_elsa_ortho
+   manual_seed: 1022
+   mixed_precision: fp16
+   gradient_accumulation_steps: 1
+   datasets:[
+     train:[
+       name: LoraDataset
+       concept_list: ortho_datasets/data_configs/elsa.json
+       use_caption: True
+       use_mask: True
+       instance_transform: [{'type': 'HumanResizeCropFinalV3', 'size': 512, 'crop_p': 0.5}, {'type': 'ToTensor'}, {'type': 'Normalize', 'mean': [0.5], 'std': [0.5]}, {'type': 'ShuffleCaption', 'keep_token_num': 1}, {'type': 'EnhanceText', 'enhance_type': 'human'}]
+       replace_mapping:[
+         <TOK>: <elsa1> <elsa2>
+       ]
+       batch_size_per_gpu: 2
+       dataset_enlarge_ratio: 500
+     ]
+     val_vis:[
+       name: PromptDataset
+       prompts: datasets/validation_prompts/single-concept/characters/test_girl.txt
+       num_samples_per_prompt: 8
+       latent_size: [4, 64, 64]
+       replace_mapping:[
+         <TOK>: <elsa1> <elsa2>
+       ]
+       batch_size_per_gpu: 4
+     ]
+   ]
+   models:[
+     pretrained_path: nitrosocke/mo-di-diffusion
+     enable_edlora: True
+     finetune_cfg:[
+       text_embedding:[
+         enable_tuning: True
+         lr: 0.001
+       ]
+       text_encoder:[
+         enable_tuning: True
+         lora_cfg:[
+           rank: 5
+           alpha: 1.0
+           where: CLIPAttention
+         ]
+         lr: 1e-05
+       ]
+       unet:[
+         enable_tuning: True
+         lora_cfg:[
+           rank: 5
+           alpha: 1.0
+           where: Attention
+         ]
+         lr: 0.0001
+       ]
+     ]
+     new_concept_token: <elsa1>+<elsa2>
+     initializer_token: <rand-0.013>+man
+     noise_offset: 0.01
+     attn_reg_weight: 0.01
+     reg_full_identity: False
+     use_mask_loss: True
+     gradient_checkpoint: False
+     enable_xformers: True
+   ]
+   path:[
+     pretrain_network: None
+     experiments_root: /home/data_guest/orthogonal_adaptation/experiments/0022_elsa_ortho
+     models: /home/data_guest/orthogonal_adaptation/experiments/0022_elsa_ortho/models
+     log: /home/data_guest/orthogonal_adaptation/experiments/0022_elsa_ortho
+     visualization: /home/data_guest/orthogonal_adaptation/experiments/0022_elsa_ortho/visualization
+   ]
+   train:[
+     optim_g:[
+       type: AdamW
+       lr: 0.0
+       weight_decay: 0.01
+       betas: [0.9, 0.999]
+     ]
+     unet_kv_drop_rate: 0
+     scheduler: linear
+     emb_norm_threshold: 0.55
+   ]
+   val:[
+     val_during_save: True
+     compose_visualize: True
+     alpha_list: [0, 0.7, 1.0]
+     sample:[
+       num_inference_steps: 50
+       guidance_scale: 7.5
+     ]
+   ]
+   logger:[
+     print_freq: 10
+     save_checkpoint_freq: 10000.0
+   ]
+   is_train: True
+
+ 2024-05-24 08:50:00,274 INFO: <elsa1> (49408-49423) is random initialized by: <rand-0.013>
+ 2024-05-24 08:50:00,591 INFO: <elsa2> (49424-49439) is random initialized by existing token (man): 786
+ 2024-05-24 08:50:00,596 INFO: optimizing embedding using lr: 0.001
+ 2024-05-24 08:50:00,677 INFO: optimizing text_encoder (48 LoRAs), using lr: 1e-05
+ 2024-05-24 08:50:00,954 INFO: optimizing unet (128 LoRAs), using lr: 0.0001
+ 2024-05-24 08:50:02,404 INFO: ***** Running training *****
+ 2024-05-24 08:50:02,404 INFO: Num examples = 3000
+ 2024-05-24 08:50:02,404 INFO: Instantaneous batch size per device = 2
+ 2024-05-24 08:50:02,404 INFO: Total train batch size (w. parallel, distributed & accumulation) = 4
+ 2024-05-24 08:50:02,404 INFO: Total optimization steps = 750.0
+ 2024-05-24 08:50:12,424 INFO: [0022_..][Iter: 10, lr:(9.867e-04,9.867e-06,9.867e-05,)] [eta: 0:11:13] loss: 1.2587e+00 Norm_mean: 3.5817e-01
+ 2024-05-24 08:50:20,263 INFO: [0022_..][Iter: 20, lr:(9.733e-04,9.733e-06,9.733e-05,)] [eta: 0:10:19] loss: 8.7049e-01 Norm_mean: 3.7253e-01
+ 2024-05-24 08:50:27,816 INFO: [0022_..][Iter: 30, lr:(9.600e-04,9.600e-06,9.600e-05,)] [eta: 0:09:49] loss: 1.8845e+00 Norm_mean: 3.8294e-01
+ 2024-05-24 08:50:35,431 INFO: [0022_..][Iter: 40, lr:(9.467e-04,9.467e-06,9.467e-05,)] [eta: 0:09:31] loss: 9.8640e-02 Norm_mean: 3.9094e-01
+ 2024-05-24 08:50:42,913 INFO: [0022_..][Iter: 50, lr:(9.333e-04,9.333e-06,9.333e-05,)] [eta: 0:09:15] loss: 2.4037e-01 Norm_mean: 3.9714e-01
+ 2024-05-24 08:50:50,343 INFO: [0022_..][Iter: 60, lr:(9.200e-04,9.200e-06,9.200e-05,)] [eta: 0:09:01] loss: 4.7384e-01 Norm_mean: 4.0230e-01
+ 2024-05-24 08:50:58,098 INFO: [0022_..][Iter: 70, lr:(9.067e-04,9.067e-06,9.067e-05,)] [eta: 0:08:52] loss: 3.2908e-01 Norm_mean: 4.0708e-01
+ 2024-05-24 08:51:05,707 INFO: [0022_..][Iter: 80, lr:(8.933e-04,8.933e-06,8.933e-05,)] [eta: 0:08:42] loss: 7.9968e-01 Norm_mean: 4.1264e-01
+ 2024-05-24 08:51:13,056 INFO: [0022_..][Iter: 90, lr:(8.800e-04,8.800e-06,8.800e-05,)] [eta: 0:08:31] loss: 8.3418e-02 Norm_mean: 4.1884e-01
+ 2024-05-24 08:51:20,886 INFO: [0022_..][Iter: 100, lr:(8.667e-04,8.667e-06,8.667e-05,)] [eta: 0:08:24] loss: 2.8333e-01 Norm_mean: 4.2442e-01
+ 2024-05-24 08:51:28,637 INFO: [0022_..][Iter: 110, lr:(8.533e-04,8.533e-06,8.533e-05,)] [eta: 0:08:16] loss: 1.3183e+00 Norm_mean: 4.2854e-01
+ 2024-05-24 08:51:35,538 INFO: [0022_..][Iter: 120, lr:(8.400e-04,8.400e-06,8.400e-05,)] [eta: 0:08:04] loss: 4.3494e-01 Norm_mean: 4.3216e-01
+ 2024-05-24 08:51:43,352 INFO: [0022_..][Iter: 130, lr:(8.267e-04,8.267e-06,8.267e-05,)] [eta: 0:07:56] loss: 5.4238e-01 Norm_mean: 4.3511e-01
+ 2024-05-24 08:51:51,530 INFO: [0022_..][Iter: 140, lr:(8.133e-04,8.133e-06,8.133e-05,)] [eta: 0:07:51] loss: 4.1099e-01 Norm_mean: 4.3870e-01
+ 2024-05-24 08:51:59,472 INFO: [0022_..][Iter: 150, lr:(8.000e-04,8.000e-06,8.000e-05,)] [eta: 0:07:44] loss: 1.0146e+00 Norm_mean: 4.4381e-01
+ 2024-05-24 08:52:07,090 INFO: [0022_..][Iter: 160, lr:(7.867e-04,7.867e-06,7.867e-05,)] [eta: 0:07:36] loss: 2.4690e-01 Norm_mean: 4.4945e-01
+ 2024-05-24 08:52:14,838 INFO: [0022_..][Iter: 170, lr:(7.733e-04,7.733e-06,7.733e-05,)] [eta: 0:07:28] loss: 2.7580e-01 Norm_mean: 4.5369e-01
+ 2024-05-24 08:52:22,332 INFO: [0022_..][Iter: 180, lr:(7.600e-04,7.600e-06,7.600e-05,)] [eta: 0:07:19] loss: 1.9310e-01 Norm_mean: 4.5661e-01
+ 2024-05-24 08:52:29,932 INFO: [0022_..][Iter: 190, lr:(7.467e-04,7.467e-06,7.467e-05,)] [eta: 0:07:11] loss: 1.1666e+00 Norm_mean: 4.6024e-01
+ 2024-05-24 08:52:37,632 INFO: [0022_..][Iter: 200, lr:(7.333e-04,7.333e-06,7.333e-05,)] [eta: 0:07:03] loss: 5.9502e-02 Norm_mean: 4.6344e-01
+ 2024-05-24 08:52:45,449 INFO: [0022_..][Iter: 210, lr:(7.200e-04,7.200e-06,7.200e-05,)] [eta: 0:06:56] loss: 3.9958e-01 Norm_mean: 4.6653e-01
+ 2024-05-24 08:52:53,375 INFO: [0022_..][Iter: 220, lr:(7.067e-04,7.067e-06,7.067e-05,)] [eta: 0:06:49] loss: 5.6507e-01 Norm_mean: 4.6919e-01
+ 2024-05-24 08:53:00,317 INFO: [0022_..][Iter: 230, lr:(6.933e-04,6.933e-06,6.933e-05,)] [eta: 0:06:39] loss: 7.1709e-01 Norm_mean: 4.7232e-01
+ 2024-05-24 08:53:07,303 INFO: [0022_..][Iter: 240, lr:(6.800e-04,6.800e-06,6.800e-05,)] [eta: 0:06:30] loss: 5.2934e-01 Norm_mean: 4.7492e-01
+ 2024-05-24 08:53:15,167 INFO: [0022_..][Iter: 250, lr:(6.667e-04,6.667e-06,6.667e-05,)] [eta: 0:06:23] loss: 2.4139e-01 Norm_mean: 4.7698e-01
+ 2024-05-24 08:53:22,671 INFO: [0022_..][Iter: 260, lr:(6.533e-04,6.533e-06,6.533e-05,)] [eta: 0:06:15] loss: 9.6894e-01 Norm_mean: 4.7931e-01
+ 2024-05-24 08:53:30,109 INFO: [0022_..][Iter: 270, lr:(6.400e-04,6.400e-06,6.400e-05,)] [eta: 0:06:07] loss: 1.0858e-01 Norm_mean: 4.8147e-01
+ 2024-05-24 08:53:36,928 INFO: [0022_..][Iter: 280, lr:(6.267e-04,6.267e-06,6.267e-05,)] [eta: 0:05:58] loss: 9.4769e-02 Norm_mean: 4.8353e-01
+ 2024-05-24 08:53:44,626 INFO: [0022_..][Iter: 290, lr:(6.133e-04,6.133e-06,6.133e-05,)] [eta: 0:05:50] loss: 7.1380e-01 Norm_mean: 4.8565e-01
+ 2024-05-24 08:53:51,587 INFO: [0022_..][Iter: 300, lr:(6.000e-04,6.000e-06,6.000e-05,)] [eta: 0:05:41] loss: 7.5040e-01 Norm_mean: 4.8758e-01
+ 2024-05-24 08:53:58,569 INFO: [0022_..][Iter: 310, lr:(5.867e-04,5.867e-06,5.867e-05,)] [eta: 0:05:33] loss: 3.6233e-01 Norm_mean: 4.8935e-01
+ 2024-05-24 08:54:06,352 INFO: [0022_..][Iter: 320, lr:(5.733e-04,5.733e-06,5.733e-05,)] [eta: 0:05:26] loss: 4.8190e-01 Norm_mean: 4.9100e-01
+ 2024-05-24 08:54:13,366 INFO: [0022_..][Iter: 330, lr:(5.600e-04,5.600e-06,5.600e-05,)] [eta: 0:05:17] loss: 6.9878e-02 Norm_mean: 4.9276e-01
+ 2024-05-24 08:54:20,574 INFO: [0022_..][Iter: 340, lr:(5.467e-04,5.467e-06,5.467e-05,)] [eta: 0:05:09] loss: 6.4842e-01 Norm_mean: 4.9464e-01
+ 2024-05-24 08:54:27,821 INFO: [0022_..][Iter: 350, lr:(5.333e-04,5.333e-06,5.333e-05,)] [eta: 0:05:01] loss: 6.4891e-01 Norm_mean: 4.9647e-01
+ 2024-05-24 08:54:34,138 INFO: [0022_..][Iter: 360, lr:(5.200e-04,5.200e-06,5.200e-05,)] [eta: 0:04:52] loss: 2.6867e-01 Norm_mean: 4.9825e-01
+ 2024-05-24 08:54:42,063 INFO: [0022_..][Iter: 370, lr:(5.067e-04,5.067e-06,5.067e-05,)] [eta: 0:04:45] loss: 8.6926e-01 Norm_mean: 4.9982e-01
+ 2024-05-24 08:54:49,908 INFO: [0022_..][Iter: 380, lr:(4.933e-04,4.933e-06,4.933e-05,)] [eta: 0:04:38] loss: 4.8145e-01 Norm_mean: 5.0119e-01
+ 2024-05-24 08:54:57,804 INFO: [0022_..][Iter: 390, lr:(4.800e-04,4.800e-06,4.800e-05,)] [eta: 0:04:31] loss: 9.5628e-01 Norm_mean: 5.0271e-01
+ 2024-05-24 08:55:05,054 INFO: [0022_..][Iter: 400, lr:(4.667e-04,4.667e-06,4.667e-05,)] [eta: 0:04:23] loss: 6.7564e-02 Norm_mean: 5.0433e-01
+ 2024-05-24 08:55:12,055 INFO: [0022_..][Iter: 410, lr:(4.533e-04,4.533e-06,4.533e-05,)] [eta: 0:04:15] loss: 7.3947e-01 Norm_mean: 5.0588e-01
+ 2024-05-24 08:55:19,188 INFO: [0022_..][Iter: 420, lr:(4.400e-04,4.400e-06,4.400e-05,)] [eta: 0:04:07] loss: 5.3082e-01 Norm_mean: 5.0701e-01
+ 2024-05-24 08:55:26,681 INFO: [0022_..][Iter: 430, lr:(4.267e-04,4.267e-06,4.267e-05,)] [eta: 0:04:00] loss: 3.8347e-01 Norm_mean: 5.0823e-01
+ 2024-05-24 08:55:34,282 INFO: [0022_..][Iter: 440, lr:(4.133e-04,4.133e-06,4.133e-05,)] [eta: 0:03:52] loss: 2.2813e-02 Norm_mean: 5.0962e-01
+ 2024-05-24 08:55:41,965 INFO: [0022_..][Iter: 450, lr:(4.000e-04,4.000e-06,4.000e-05,)] [eta: 0:03:45] loss: 1.1481e+00 Norm_mean: 5.1106e-01
+ 2024-05-24 08:55:49,691 INFO: [0022_..][Iter: 460, lr:(3.867e-04,3.867e-06,3.867e-05,)] [eta: 0:03:37] loss: 8.9931e-02 Norm_mean: 5.1241e-01
+ 2024-05-24 08:55:57,556 INFO: [0022_..][Iter: 470, lr:(3.733e-04,3.733e-06,3.733e-05,)] [eta: 0:03:30] loss: 2.2061e-01 Norm_mean: 5.1351e-01
+ 2024-05-24 08:56:05,384 INFO: [0022_..][Iter: 480, lr:(3.600e-04,3.600e-06,3.600e-05,)] [eta: 0:03:22] loss: 1.0475e+00 Norm_mean: 5.1457e-01
+ 2024-05-24 08:56:12,447 INFO: [0022_..][Iter: 490, lr:(3.467e-04,3.467e-06,3.467e-05,)] [eta: 0:03:15] loss: 6.0158e-01 Norm_mean: 5.1546e-01
+ 2024-05-24 08:56:19,867 INFO: [0022_..][Iter: 500, lr:(3.333e-04,3.333e-06,3.333e-05,)] [eta: 0:03:07] loss: 1.8953e-01 Norm_mean: 5.1649e-01
+ 2024-05-24 08:56:27,398 INFO: [0022_..][Iter: 510, lr:(3.200e-04,3.200e-06,3.200e-05,)] [eta: 0:03:00] loss: 8.9298e-01 Norm_mean: 5.1761e-01
+ 2024-05-24 08:56:35,097 INFO: [0022_..][Iter: 520, lr:(3.067e-04,3.067e-06,3.067e-05,)] [eta: 0:02:52] loss: 8.7824e-01 Norm_mean: 5.1848e-01
+ 2024-05-24 08:56:42,783 INFO: [0022_..][Iter: 530, lr:(2.933e-04,2.933e-06,2.933e-05,)] [eta: 0:02:45] loss: 4.0614e-01 Norm_mean: 5.1931e-01
+ 2024-05-24 08:56:50,708 INFO: [0022_..][Iter: 540, lr:(2.800e-04,2.800e-06,2.800e-05,)] [eta: 0:02:37] loss: 1.1193e+00 Norm_mean: 5.2018e-01
+ 2024-05-24 08:56:58,195 INFO: [0022_..][Iter: 550, lr:(2.667e-04,2.667e-06,2.667e-05,)] [eta: 0:02:30] loss: 4.4657e-01 Norm_mean: 5.2078e-01
+ 2024-05-24 08:57:05,716 INFO: [0022_..][Iter: 560, lr:(2.533e-04,2.533e-06,2.533e-05,)] [eta: 0:02:22] loss: 1.7082e+00 Norm_mean: 5.2121e-01
+ 2024-05-24 08:57:13,378 INFO: [0022_..][Iter: 570, lr:(2.400e-04,2.400e-06,2.400e-05,)] [eta: 0:02:15] loss: 5.3018e-01 Norm_mean: 5.2166e-01
+ 2024-05-24 08:57:21,360 INFO: [0022_..][Iter: 580, lr:(2.267e-04,2.267e-06,2.267e-05,)] [eta: 0:02:07] loss: 1.8932e+00 Norm_mean: 5.2210e-01
+ 2024-05-24 08:57:28,783 INFO: [0022_..][Iter: 590, lr:(2.133e-04,2.133e-06,2.133e-05,)] [eta: 0:02:00] loss: 3.8593e-01 Norm_mean: 5.2250e-01
+ 2024-05-24 08:57:36,530 INFO: [0022_..][Iter: 600, lr:(2.000e-04,2.000e-06,2.000e-05,)] [eta: 0:01:52] loss: 1.0122e+00 Norm_mean: 5.2296e-01
+ 2024-05-24 08:57:43,785 INFO: [0022_..][Iter: 610, lr:(1.867e-04,1.867e-06,1.867e-05,)] [eta: 0:01:44] loss: 6.0211e-01 Norm_mean: 5.2335e-01
+ 2024-05-24 08:57:51,030 INFO: [0022_..][Iter: 620, lr:(1.733e-04,1.733e-06,1.733e-05,)] [eta: 0:01:37] loss: 4.2980e-02 Norm_mean: 5.2367e-01
+ 2024-05-24 08:57:57,850 INFO: [0022_..][Iter: 630, lr:(1.600e-04,1.600e-06,1.600e-05,)] [eta: 0:01:29] loss: 9.1195e-01 Norm_mean: 5.2391e-01
+ 2024-05-24 08:58:05,690 INFO: [0022_..][Iter: 640, lr:(1.467e-04,1.467e-06,1.467e-05,)] [eta: 0:01:22] loss: 1.2563e-01 Norm_mean: 5.2410e-01
+ 2024-05-24 08:58:12,981 INFO: [0022_..][Iter: 650, lr:(1.333e-04,1.333e-06,1.333e-05,)] [eta: 0:01:14] loss: 3.1517e-01 Norm_mean: 5.2424e-01
+ 2024-05-24 08:58:19,360 INFO: [0022_..][Iter: 660, lr:(1.200e-04,1.200e-06,1.200e-05,)] [eta: 0:01:06] loss: 1.3256e+00 Norm_mean: 5.2438e-01
+ 2024-05-24 08:58:27,015 INFO: [0022_..][Iter: 670, lr:(1.067e-04,1.067e-06,1.067e-05,)] [eta: 0:00:59] loss: 3.6326e-01 Norm_mean: 5.2453e-01
+ 2024-05-24 08:58:34,256 INFO: [0022_..][Iter: 680, lr:(9.333e-05,9.333e-07,9.333e-06,)] [eta: 0:00:51] loss: 8.1322e-02 Norm_mean: 5.2466e-01
+ 2024-05-24 08:58:41,265 INFO: [0022_..][Iter: 690, lr:(8.000e-05,8.000e-07,8.000e-06,)] [eta: 0:00:44] loss: 2.1985e+00 Norm_mean: 5.2479e-01
+ 2024-05-24 08:58:47,287 INFO: [0022_..][Iter: 700, lr:(6.667e-05,6.667e-07,6.667e-06,)] [eta: 0:00:36] loss: 8.5051e-02 Norm_mean: 5.2488e-01
+ 2024-05-24 08:58:54,372 INFO: [0022_..][Iter: 710, lr:(5.333e-05,5.333e-07,5.333e-06,)] [eta: 0:00:29] loss: 3.7023e-01 Norm_mean: 5.2493e-01
+ 2024-05-24 08:59:02,323 INFO: [0022_..][Iter: 720, lr:(4.000e-05,4.000e-07,4.000e-06,)] [eta: 0:00:21] loss: 1.6481e-02 Norm_mean: 5.2497e-01
+ 2024-05-24 08:59:10,000 INFO: [0022_..][Iter: 730, lr:(2.667e-05,2.667e-07,2.667e-06,)] [eta: 0:00:14] loss: 2.0079e-01 Norm_mean: 5.2499e-01
+ 2024-05-24 08:59:17,197 INFO: [0022_..][Iter: 740, lr:(1.333e-05,1.333e-07,1.333e-06,)] [eta: 0:00:06] loss: 1.7066e-01 Norm_mean: 5.2500e-01
+ 2024-05-24 08:59:24,228 INFO: [0022_..][Iter: 750, lr:(0.000e+00,0.000e+00,0.000e+00,)] [eta: 0:00:00] loss: 2.8842e-01 Norm_mean: 5.2500e-01
+ 2024-05-24 08:59:24,267 INFO: Save state to /home/data_guest/orthogonal_adaptation/experiments/0022_elsa_ortho/models/edlora_model-latest.pth
+ 2024-05-24 08:59:24,268 INFO: Start validation /home/data_guest/orthogonal_adaptation/experiments/0022_elsa_ortho/models/edlora_model-latest.pth:
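The step count reported in this log follows directly from the config and the accelerate environment. A quick arithmetic sketch (note the 3000-example figure would correspond to 6 source images at `dataset_enlarge_ratio: 500`, which is an inference from the numbers, not stated in the log):

```python
# Reproduce "Total optimization steps = 750.0" from the logged quantities.
num_examples = 3000        # "Num examples" in the log
batch_size_per_gpu = 2     # datasets.train.batch_size_per_gpu in the config
num_processes = 2          # accelerate "Num processes"
grad_accum = 1             # gradient_accumulation_steps

total_batch = batch_size_per_gpu * num_processes * grad_accum
steps = num_examples / total_batch
print(total_batch, steps)  # 4 750.0
```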
experiments/single-concept/moana/.DS_Store ADDED
Binary file (6.15 kB).
 
experiments/single-concept/moana/0023_moana_ortho.yml ADDED
@@ -0,0 +1,96 @@
+ # GENERATE TIME: Fri May 24 09:19:37 2024
+ # CMD:
+ # train_edlora.py -opt ortho_datasets/train_configs/ortho/0023_moana_ortho.yml
+
+ name: 0023_moana_ortho
+ manual_seed: 1023
+ mixed_precision: fp16
+ gradient_accumulation_steps: 1
+
+ # dataset and data loader settings
+ datasets:
+   train:
+     name: LoraDataset
+     concept_list: ortho_datasets/data_configs/moana.json
+     use_caption: true
+     use_mask: true
+     instance_transform:
+       - { type: HumanResizeCropFinalV3, size: 512, crop_p: 0.5 }
+       - { type: ToTensor }
+       - { type: Normalize, mean: [ 0.5 ], std: [ 0.5 ] }
+       - { type: ShuffleCaption, keep_token_num: 1 }
+       - { type: EnhanceText, enhance_type: human }
+     replace_mapping:
+       <TOK>: <moana1> <moana2>
+     batch_size_per_gpu: 2
+     dataset_enlarge_ratio: 500
+
+   val_vis:
+     name: PromptDataset
+     prompts: datasets/validation_prompts/single-concept/characters/test_girl.txt
+     num_samples_per_prompt: 8
+     latent_size: [ 4, 64, 64 ]
+     replace_mapping:
+       <TOK>: <moana1> <moana2>
+     batch_size_per_gpu: 4
+
+ models:
+   pretrained_path: nitrosocke/mo-di-diffusion
+   enable_edlora: true  # true means ED-LoRA, false means vanilla LoRA
+   finetune_cfg:
+     text_embedding:
+       enable_tuning: true
+       lr: !!float 1e-3
+     text_encoder:
+       enable_tuning: true
+       lora_cfg:
+         rank: 5
+         alpha: 1.0
+         where: CLIPAttention
+       lr: !!float 1e-5
+     unet:
+       enable_tuning: true
+       lora_cfg:
+         rank: 5
+         alpha: 1.0
+         where: Attention
+       lr: !!float 1e-4
+   new_concept_token: <moana1>+<moana2>
+   initializer_token: <rand-0.013>+man
+   noise_offset: 0.01
+   attn_reg_weight: 0.01
+   reg_full_identity: false
+   use_mask_loss: true
+   gradient_checkpoint: false
+   enable_xformers: true
+
+ # path
+ path:
+   pretrain_network: ~
+
+ # training settings
+ train:
+   optim_g:
+     type: AdamW
+     lr: !!float 0.0  # no use since we define different component lr in model
+     weight_decay: 0.01
+     betas: [ 0.9, 0.999 ]  # align with taming
+
+   # dropkv
+   unet_kv_drop_rate: 0
+   scheduler: linear
+   emb_norm_threshold: !!float 5.5e-1
+
+ # validation settings
+ val:
+   val_during_save: true
+   compose_visualize: true
+   alpha_list: [0, 0.7, 1.0]  # 0 means only visualize embedding (without lora weight)
+   sample:
+     num_inference_steps: 50
+     guidance_scale: 7.5
+
+ # logging settings
+ logger:
+   print_freq: 10
+   save_checkpoint_freq: !!float 10000
experiments/single-concept/moana/models/edlora_model-latest.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c773dc4f259039eef528a6ede49287baea1dfe62856fe50cf6caaec737fdf84d
+ size 35173046
experiments/single-concept/moana/train_0023_moana_ortho_20240524_091937.log ADDED
@@ -0,0 +1,193 @@
+ 2024-05-24 09:19:37,081 INFO: Distributed environment: MULTI_GPU Backend: nccl
+ Num processes: 2
+ Process index: 0
+ Local process index: 0
+ Device: cuda:0
+
+ Mixed precision type: fp16
+
+ 2024-05-24 09:19:37,081 INFO:
+   name: 0023_moana_ortho
+   manual_seed: 1023
+   mixed_precision: fp16
+   gradient_accumulation_steps: 1
+   datasets:[
+     train:[
+       name: LoraDataset
+       concept_list: ortho_datasets/data_configs/moana.json
+       use_caption: True
+       use_mask: True
+       instance_transform: [{'type': 'HumanResizeCropFinalV3', 'size': 512, 'crop_p': 0.5}, {'type': 'ToTensor'}, {'type': 'Normalize', 'mean': [0.5], 'std': [0.5]}, {'type': 'ShuffleCaption', 'keep_token_num': 1}, {'type': 'EnhanceText', 'enhance_type': 'human'}]
+       replace_mapping:[
+         <TOK>: <moana1> <moana2>
+       ]
+       batch_size_per_gpu: 2
+       dataset_enlarge_ratio: 500
+     ]
+     val_vis:[
+       name: PromptDataset
+       prompts: datasets/validation_prompts/single-concept/characters/test_girl.txt
+       num_samples_per_prompt: 8
+       latent_size: [4, 64, 64]
+       replace_mapping:[
+         <TOK>: <moana1> <moana2>
+       ]
+       batch_size_per_gpu: 4
+     ]
+   ]
+   models:[
+     pretrained_path: nitrosocke/mo-di-diffusion
+     enable_edlora: True
+     finetune_cfg:[
+       text_embedding:[
+         enable_tuning: True
+         lr: 0.001
+       ]
+       text_encoder:[
+         enable_tuning: True
+         lora_cfg:[
+           rank: 5
+           alpha: 1.0
+           where: CLIPAttention
+         ]
+         lr: 1e-05
+       ]
+       unet:[
+         enable_tuning: True
+         lora_cfg:[
+           rank: 5
+           alpha: 1.0
+           where: Attention
+         ]
+         lr: 0.0001
+       ]
+     ]
+     new_concept_token: <moana1>+<moana2>
+     initializer_token: <rand-0.013>+man
+     noise_offset: 0.01
+     attn_reg_weight: 0.01
+     reg_full_identity: False
+     use_mask_loss: True
+     gradient_checkpoint: False
+     enable_xformers: True
+   ]
+   path:[
+     pretrain_network: None
+     experiments_root: /home/data_guest/orthogonal_adaptation/experiments/0023_moana_ortho
+     models: /home/data_guest/orthogonal_adaptation/experiments/0023_moana_ortho/models
+     log: /home/data_guest/orthogonal_adaptation/experiments/0023_moana_ortho
+     visualization: /home/data_guest/orthogonal_adaptation/experiments/0023_moana_ortho/visualization
+   ]
+   train:[
+     optim_g:[
+       type: AdamW
+       lr: 0.0
+       weight_decay: 0.01
+       betas: [0.9, 0.999]
+     ]
+     unet_kv_drop_rate: 0
+     scheduler: linear
+     emb_norm_threshold: 0.55
+   ]
+   val:[
+     val_during_save: True
+     compose_visualize: True
+     alpha_list: [0, 0.7, 1.0]
+     sample:[
+       num_inference_steps: 50
+       guidance_scale: 7.5
+     ]
+   ]
+   logger:[
+     print_freq: 10
+     save_checkpoint_freq: 10000.0
+   ]
+   is_train: True
+
+ 2024-05-24 09:19:41,541 INFO: <moana1> (49408-49423) is random initialized by: <rand-0.013>
+ 2024-05-24 09:19:41,717 INFO: <moana2> (49424-49439) is random initialized by existing token (man): 786
+ 2024-05-24 09:19:41,720 INFO: optimizing embedding using lr: 0.001
+ 2024-05-24 09:19:41,768 INFO: optimizing text_encoder (48 LoRAs), using lr: 1e-05
+ 2024-05-24 09:19:41,983 INFO: optimizing unet (128 LoRAs), using lr: 0.0001
+ 2024-05-24 09:19:44,928 INFO: ***** Running training *****
+ 2024-05-24 09:19:44,928 INFO: Num examples = 3000
+ 2024-05-24 09:19:44,928 INFO: Instantaneous batch size per device = 2
+ 2024-05-24 09:19:44,928 INFO: Total train batch size (w. parallel, distributed & accumulation) = 4
+ 2024-05-24 09:19:44,928 INFO: Total optimization steps = 750.0
+ 2024-05-24 09:20:23,702 INFO: [0023_..][Iter: 10, lr:(9.867e-04,9.867e-06,9.867e-05,)] [eta: 0:43:24] loss: 2.7444e+00 Norm_mean: 3.7058e-01
+ 2024-05-24 09:20:30,728 INFO: [0023_..][Iter: 20, lr:(9.733e-04,9.733e-06,9.733e-05,)] [eta: 0:26:29] loss: 4.9992e-01 Norm_mean: 3.8656e-01
+ 2024-05-24 09:20:37,744 INFO: [0023_..][Iter: 30, lr:(9.600e-04,9.600e-06,9.600e-05,)] [eta: 0:20:24] loss: 4.6753e-01 Norm_mean: 3.9790e-01
+ 2024-05-24 09:20:44,680 INFO: [0023_..][Iter: 40, lr:(9.467e-04,9.467e-06,9.467e-05,)] [eta: 0:17:13] loss: 3.3917e-01 Norm_mean: 4.0640e-01
+ 2024-05-24 09:20:51,688 INFO: [0023_..][Iter: 50, lr:(9.333e-04,9.333e-06,9.333e-05,)] [eta: 0:15:15] loss: 7.7060e-01 Norm_mean: 4.1344e-01
+ 2024-05-24 09:20:57,899 INFO: [0023_..][Iter: 60, lr:(9.200e-04,9.200e-06,9.200e-05,)] [eta: 0:13:44] loss: 3.5283e-01 Norm_mean: 4.2051e-01
+ 2024-05-24 09:21:04,239 INFO: [0023_..][Iter: 70, lr:(9.067e-04,9.067e-06,9.067e-05,)] [eta: 0:12:38] loss: 1.4238e-01 Norm_mean: 4.2707e-01
+ 2024-05-24 09:21:10,581 INFO: [0023_..][Iter: 80, lr:(8.933e-04,8.933e-06,8.933e-05,)] [eta: 0:11:47] loss: 5.7928e-01 Norm_mean: 4.3243e-01
+ 2024-05-24 09:21:16,894 INFO: [0023_..][Iter: 90, lr:(8.800e-04,8.800e-06,8.800e-05,)] [eta: 0:11:05] loss: 4.8595e-01 Norm_mean: 4.3749e-01
+ 2024-05-24 09:21:23,243 INFO: [0023_..][Iter: 100, lr:(8.667e-04,8.667e-06,8.667e-05,)] [eta: 0:10:31] loss: 1.7987e-01 Norm_mean: 4.4195e-01
+ 2024-05-24 09:21:29,656 INFO: [0023_..][Iter: 110, lr:(8.533e-04,8.533e-06,8.533e-05,)] [eta: 0:10:02] loss: 8.3704e-01 Norm_mean: 4.4614e-01
+ 2024-05-24 09:21:36,571 INFO: [0023_..][Iter: 120, lr:(8.400e-04,8.400e-06,8.400e-05,)] [eta: 0:09:40] loss: 5.8279e-02 Norm_mean: 4.4991e-01
+ 2024-05-24 09:21:43,459 INFO: [0023_..][Iter: 130, lr:(8.267e-04,8.267e-06,8.267e-05,)] [eta: 0:09:20] loss: 5.7320e-01 Norm_mean: 4.5361e-01
+ 2024-05-24 09:21:50,420 INFO: [0023_..][Iter: 140, lr:(8.133e-04,8.133e-06,8.133e-05,)] [eta: 0:09:02] loss: 2.5813e-01 Norm_mean: 4.5746e-01
+ 2024-05-24 09:21:57,410 INFO: [0023_..][Iter: 150, lr:(8.000e-04,8.000e-06,8.000e-05,)] [eta: 0:08:45] loss: 8.5538e-01 Norm_mean: 4.6133e-01
+ 2024-05-24 09:22:04,341 INFO: [0023_..][Iter: 160, lr:(7.867e-04,7.867e-06,7.867e-05,)] [eta: 0:08:30] loss: 3.8647e-01 Norm_mean: 4.6461e-01
+ 2024-05-24 09:22:11,262 INFO: [0023_..][Iter: 170, lr:(7.733e-04,7.733e-06,7.733e-05,)] [eta: 0:08:15] loss: 2.6119e+00 Norm_mean: 4.6779e-01
+ 2024-05-24 09:22:16,832 INFO: [0023_..][Iter: 180, lr:(7.600e-04,7.600e-06,7.600e-05,)] [eta: 0:07:57] loss: 1.3671e+00 Norm_mean: 4.7065e-01
+ 2024-05-24 09:22:23,258 INFO: [0023_..][Iter: 190, lr:(7.467e-04,7.467e-06,7.467e-05,)] [eta: 0:07:43] loss: 8.4631e-01 Norm_mean: 4.7365e-01
+ 2024-05-24 09:22:29,803 INFO: [0023_..][Iter: 200, lr:(7.333e-04,7.333e-06,7.333e-05,)] [eta: 0:07:30] loss: 1.6186e+00 Norm_mean: 4.7753e-01
+ 2024-05-24 09:22:36,804 INFO: [0023_..][Iter: 210, lr:(7.200e-04,7.200e-06,7.200e-05,)] [eta: 0:07:19] loss: 1.7380e-01 Norm_mean: 4.8150e-01
+ 2024-05-24 09:22:43,713 INFO: [0023_..][Iter: 220, lr:(7.067e-04,7.067e-06,7.067e-05,)] [eta: 0:07:07] loss: 1.1360e+00 Norm_mean: 4.8552e-01
+ 2024-05-24 09:22:49,170 INFO: [0023_..][Iter: 230, lr:(6.933e-04,6.933e-06,6.933e-05,)] [eta: 0:06:53] loss: 3.3080e-01 Norm_mean: 4.8927e-01
+ 2024-05-24 09:22:55,042 INFO: [0023_..][Iter: 240, lr:(6.800e-04,6.800e-06,6.800e-05,)] [eta: 0:06:41] loss: 4.9993e-01 Norm_mean: 4.9236e-01
+ 2024-05-24 09:23:02,053 INFO: [0023_..][Iter: 250, lr:(6.667e-04,6.667e-06,6.667e-05,)] [eta: 0:06:31] loss: 6.0387e-01 Norm_mean: 4.9509e-01
+ 2024-05-24 09:23:08,981 INFO: [0023_..][Iter: 260, lr:(6.533e-04,6.533e-06,6.533e-05,)] [eta: 0:06:22] loss: 5.1055e-01 Norm_mean: 4.9863e-01
+ 2024-05-24 09:23:15,374 INFO: [0023_..][Iter: 270, lr:(6.400e-04,6.400e-06,6.400e-05,)] [eta: 0:06:11] loss: 1.0547e-01 Norm_mean: 5.0310e-01
+ 2024-05-24 09:23:21,849 INFO: [0023_..][Iter: 280, lr:(6.267e-04,6.267e-06,6.267e-05,)] [eta: 0:06:02] loss: 4.0255e-02 Norm_mean: 5.0702e-01
+ 2024-05-24 09:23:28,306 INFO: [0023_..][Iter: 290, lr:(6.133e-04,6.133e-06,6.133e-05,)] [eta: 0:05:52] loss: 1.0906e-01 Norm_mean: 5.1034e-01
+ 2024-05-24 09:23:35,170 INFO: [0023_..][Iter: 300, lr:(6.000e-04,6.000e-06,6.000e-05,)] [eta: 0:05:43] loss: 1.4252e+00 Norm_mean: 5.1348e-01
+ 2024-05-24 09:23:41,924 INFO: [0023_..][Iter: 310, lr:(5.867e-04,5.867e-06,5.867e-05,)] [eta: 0:05:34] loss: 7.0278e-02 Norm_mean: 5.1641e-01
+ 2024-05-24 09:23:48,274 INFO: [0023_..][Iter: 320, lr:(5.733e-04,5.733e-06,5.733e-05,)] [eta: 0:05:25] loss: 3.8147e-01 Norm_mean: 5.1858e-01
+ 2024-05-24 09:23:54,702 INFO: [0023_..][Iter: 330, lr:(5.600e-04,5.600e-06,5.600e-05,)] [eta: 0:05:16] loss: 1.7608e-01 Norm_mean: 5.2044e-01
+ 2024-05-24 09:24:01,424 INFO: [0023_..][Iter: 340, lr:(5.467e-04,5.467e-06,5.467e-05,)] [eta: 0:05:07] loss: 1.1170e+00 Norm_mean: 5.2224e-01
+ 2024-05-24 09:24:07,505 INFO: [0023_..][Iter: 350, lr:(5.333e-04,5.333e-06,5.333e-05,)] [eta: 0:04:58] loss: 7.1001e-01 Norm_mean: 5.2444e-01
+ 2024-05-24 09:24:13,023 INFO: [0023_..][Iter: 360, lr:(5.200e-04,5.200e-06,5.200e-05,)] [eta: 0:04:48] loss: 1.5879e+00 Norm_mean: 5.2653e-01
+ 2024-05-24 09:24:18,528 INFO: [0023_..][Iter: 370, lr:(5.067e-04,5.067e-06,5.067e-05,)] [eta: 0:04:39] loss: 2.5563e-01 Norm_mean: 5.2846e-01
+ 2024-05-24 09:24:24,972 INFO: [0023_..][Iter: 380, lr:(4.933e-04,4.933e-06,4.933e-05,)] [eta: 0:04:31] loss: 9.0899e-01 Norm_mean: 5.3051e-01
+ 2024-05-24 09:24:32,003 INFO: [0023_..][Iter: 390, lr:(4.800e-04,4.800e-06,4.800e-05,)] [eta: 0:04:23] loss: 1.2670e+00 Norm_mean: 5.3236e-01
+ 2024-05-24 09:24:38,936 INFO: [0023_..][Iter: 400, lr:(4.667e-04,4.667e-06,4.667e-05,)] [eta: 0:04:15] loss: 1.6520e+00 Norm_mean: 5.3406e-01
+ 2024-05-24 09:24:45,952 INFO: [0023_..][Iter: 410, lr:(4.533e-04,4.533e-06,4.533e-05,)] [eta: 0:04:08] loss: 3.7375e-01 Norm_mean: 5.3559e-01
+ 2024-05-24 09:24:52,748 INFO: [0023_..][Iter: 420, lr:(4.400e-04,4.400e-06,4.400e-05,)] [eta: 0:04:00] loss: 1.1693e+00 Norm_mean: 5.3698e-01
+ 2024-05-24 09:24:59,669 INFO: [0023_..][Iter: 430, lr:(4.267e-04,4.267e-06,4.267e-05,)] [eta: 0:03:52] loss: 1.1808e+00 Norm_mean: 5.3815e-01
+ 2024-05-24 09:25:06,638 INFO: [0023_..][Iter: 440, lr:(4.133e-04,4.133e-06,4.133e-05,)] [eta: 0:03:45] loss: 1.0359e-01 Norm_mean: 5.3950e-01
+ 2024-05-24 09:25:13,567 INFO: [0023_..][Iter: 450, lr:(4.000e-04,4.000e-06,4.000e-05,)] [eta: 0:03:37] loss: 8.0667e-02 Norm_mean: 5.4091e-01
+ 2024-05-24 09:25:20,520 INFO: [0023_..][Iter: 460, lr:(3.867e-04,3.867e-06,3.867e-05,)] [eta: 0:03:30] loss: 9.5960e-01 Norm_mean: 5.4276e-01
+ 2024-05-24 09:25:27,072 INFO: [0023_..][Iter: 470, lr:(3.733e-04,3.733e-06,3.733e-05,)] [eta: 0:03:22] loss: 6.8269e-02 Norm_mean: 5.4442e-01
+ 2024-05-24 09:25:34,060 INFO: [0023_..][Iter: 480, lr:(3.600e-04,3.600e-06,3.600e-05,)] [eta: 0:03:15] loss: 2.2930e-01 Norm_mean: 5.4567e-01
+ 2024-05-24 09:25:40,966 INFO: [0023_..][Iter: 490, lr:(3.467e-04,3.467e-06,3.467e-05,)] [eta: 0:03:07] loss: 2.2810e-01 Norm_mean: 5.4676e-01
+ 2024-05-24 09:25:47,481 INFO: [0023_..][Iter: 500, lr:(3.333e-04,3.333e-06,3.333e-05,)] [eta: 0:03:00] loss: 4.7360e-01 Norm_mean: 5.4776e-01
+ 2024-05-24 09:25:54,410 INFO: [0023_..][Iter: 510, lr:(3.200e-04,3.200e-06,3.200e-05,)] [eta: 0:02:52] loss: 8.2354e-01 Norm_mean: 5.4879e-01
+ 2024-05-24 09:26:00,504 INFO: [0023_..][Iter: 520, lr:(3.067e-04,3.067e-06,3.067e-05,)] [eta: 0:02:45] loss: 1.7603e-01 Norm_mean: 5.4975e-01
+ 2024-05-24 09:26:07,561 INFO: [0023_..][Iter: 530, lr:(2.933e-04,2.933e-06,2.933e-05,)] [eta: 0:02:37] loss: 2.8653e-01 Norm_mean: 5.5008e-01
+ 2024-05-24 09:26:14,605 INFO: [0023_..][Iter: 540, lr:(2.800e-04,2.800e-06,2.800e-05,)] [eta: 0:02:30] loss: 2.7579e-01 Norm_mean: 5.5008e-01
171
+ 2024-05-24 09:26:21,434 INFO: [0023_..][Iter: 550, lr:(2.667e-04,2.667e-06,2.667e-05,)] [eta: 0:02:23] loss: 9.1648e-02 Norm_mean: 5.5008e-01
172
+ 2024-05-24 09:26:28,427 INFO: [0023_..][Iter: 560, lr:(2.533e-04,2.533e-06,2.533e-05,)] [eta: 0:02:15] loss: 3.2573e-01 Norm_mean: 5.5008e-01
173
+ 2024-05-24 09:26:34,906 INFO: [0023_..][Iter: 570, lr:(2.400e-04,2.400e-06,2.400e-05,)] [eta: 0:02:08] loss: 2.1821e-01 Norm_mean: 5.5008e-01
174
+ 2024-05-24 09:26:41,312 INFO: [0023_..][Iter: 580, lr:(2.267e-04,2.267e-06,2.267e-05,)] [eta: 0:02:01] loss: 1.2252e+00 Norm_mean: 5.5008e-01
175
+ 2024-05-24 09:26:48,146 INFO: [0023_..][Iter: 590, lr:(2.133e-04,2.133e-06,2.133e-05,)] [eta: 0:01:53] loss: 9.0433e-01 Norm_mean: 5.5008e-01
176
+ 2024-05-24 09:26:55,184 INFO: [0023_..][Iter: 600, lr:(2.000e-04,2.000e-06,2.000e-05,)] [eta: 0:01:46] loss: 9.4442e-02 Norm_mean: 5.5008e-01
177
+ 2024-05-24 09:27:02,262 INFO: [0023_..][Iter: 610, lr:(1.867e-04,1.867e-06,1.867e-05,)] [eta: 0:01:39] loss: 2.2219e-01 Norm_mean: 5.5008e-01
178
+ 2024-05-24 09:27:09,306 INFO: [0023_..][Iter: 620, lr:(1.733e-04,1.733e-06,1.733e-05,)] [eta: 0:01:32] loss: 8.3554e-01 Norm_mean: 5.5008e-01
179
+ 2024-05-24 09:27:16,299 INFO: [0023_..][Iter: 630, lr:(1.600e-04,1.600e-06,1.600e-05,)] [eta: 0:01:25] loss: 5.3108e-01 Norm_mean: 5.5008e-01
180
+ 2024-05-24 09:27:23,348 INFO: [0023_..][Iter: 640, lr:(1.467e-04,1.467e-06,1.467e-05,)] [eta: 0:01:17] loss: 1.7645e-01 Norm_mean: 5.5008e-01
181
+ 2024-05-24 09:27:30,430 INFO: [0023_..][Iter: 650, lr:(1.333e-04,1.333e-06,1.333e-05,)] [eta: 0:01:10] loss: 3.1456e-01 Norm_mean: 5.5008e-01
182
+ 2024-05-24 09:27:37,475 INFO: [0023_..][Iter: 660, lr:(1.200e-04,1.200e-06,1.200e-05,)] [eta: 0:01:03] loss: 1.1743e+00 Norm_mean: 5.5008e-01
183
+ 2024-05-24 09:27:44,129 INFO: [0023_..][Iter: 670, lr:(1.067e-04,1.067e-06,1.067e-05,)] [eta: 0:00:56] loss: 6.5870e-01 Norm_mean: 5.5008e-01
184
+ 2024-05-24 09:27:50,855 INFO: [0023_..][Iter: 680, lr:(9.333e-05,9.333e-07,9.333e-06,)] [eta: 0:00:49] loss: 5.7135e-01 Norm_mean: 5.5008e-01
185
+ 2024-05-24 09:27:57,902 INFO: [0023_..][Iter: 690, lr:(8.000e-05,8.000e-07,8.000e-06,)] [eta: 0:00:42] loss: 2.4428e-01 Norm_mean: 5.5008e-01
186
+ 2024-05-24 09:28:04,974 INFO: [0023_..][Iter: 700, lr:(6.667e-05,6.667e-07,6.667e-06,)] [eta: 0:00:34] loss: 6.4063e-01 Norm_mean: 5.5008e-01
187
+ 2024-05-24 09:28:11,538 INFO: [0023_..][Iter: 710, lr:(5.333e-05,5.333e-07,5.333e-06,)] [eta: 0:00:27] loss: 3.8671e-01 Norm_mean: 5.5008e-01
188
+ 2024-05-24 09:28:18,071 INFO: [0023_..][Iter: 720, lr:(4.000e-05,4.000e-07,4.000e-06,)] [eta: 0:00:20] loss: 4.6350e-01 Norm_mean: 5.5008e-01
189
+ 2024-05-24 09:28:24,068 INFO: [0023_..][Iter: 730, lr:(2.667e-05,2.667e-07,2.667e-06,)] [eta: 0:00:13] loss: 1.2668e+00 Norm_mean: 5.5008e-01
190
+ 2024-05-24 09:28:29,706 INFO: [0023_..][Iter: 740, lr:(1.333e-05,1.333e-07,1.333e-06,)] [eta: 0:00:06] loss: 8.1621e-01 Norm_mean: 5.5008e-01
191
+ 2024-05-24 09:28:36,604 INFO: [0023_..][Iter: 750, lr:(0.000e+00,0.000e+00,0.000e+00,)] [eta: 0:00:00] loss: 5.5807e-01 Norm_mean: 5.5008e-01
192
+ 2024-05-24 09:28:36,645 INFO: Save state to /home/data_guest/orthogonal_adaptation/experiments/0023_moana_ortho/models/edlora_model-latest.pth
193
+ 2024-05-24 09:28:36,645 INFO: Start validation /home/data_guest/orthogonal_adaptation/experiments/0023_moana_ortho/models/edlora_model-latest.pth: