|
/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_chat gen_config /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo --quantization q4f32_1 --conv-template codellama_instruct --output /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2 --context-window-size 16384
|
[2024-01-29 22:29:34] INFO auto_config.py:115: Found model configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/config.json

[2024-01-29 22:29:34] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.

[2024-01-29 22:29:34] INFO llama_model.py:51: context_window_size not found in config.json. Falling back to max_position_embeddings (16384)

[2024-01-29 22:29:34] INFO llama_model.py:71: prefill_chunk_size defaults to context_window_size (16384)

[2024-01-29 22:29:34] INFO config.py:106: Overriding context_window_size from 16384 to 16384

[2024-01-29 22:29:34] INFO config.py:106: Overriding max_batch_size from 1 to 80

[2024-01-29 22:29:34] INFO gen_config.py:116: [generation_config.json] Setting bos_token_id: 1

[2024-01-29 22:29:34] INFO gen_config.py:116: [generation_config.json] Setting eos_token_id: 2

[2024-01-29 22:29:34] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/tokenizer.model. Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2/tokenizer.model

[2024-01-29 22:29:34] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/tokenizer.json. Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2/tokenizer.json

[2024-01-29 22:29:34] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/vocab.json

[2024-01-29 22:29:34] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/merges.txt

[2024-01-29 22:29:34] INFO gen_config.py:130: Not found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/added_tokens.json

[2024-01-29 22:29:34] INFO gen_config.py:128: Found tokenizer config: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/tokenizer_config.json. Copying to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2/tokenizer_config.json

[2024-01-29 22:29:34] INFO gen_config.py:69: [System default] Setting pad_token_id: 0

[2024-01-29 22:29:34] INFO gen_config.py:69: [System default] Setting temperature: 0.7

[2024-01-29 22:29:34] INFO gen_config.py:69: [System default] Setting repetition_penalty: 1.0

[2024-01-29 22:29:34] INFO gen_config.py:69: [System default] Setting top_p: 0.95

[2024-01-29 22:29:34] INFO gen_config.py:69: [System default] Setting mean_gen_len: 128

[2024-01-29 22:29:34] INFO gen_config.py:69: [System default] Setting max_gen_len: 512

[2024-01-29 22:29:34] INFO gen_config.py:69: [System default] Setting shift_fill_factor: 0.3

[2024-01-29 22:29:34] INFO gen_config.py:158: Dumping configuration file to: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2/mlc-chat-config.json
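
For reference, a hedged reconstruction of the key fields this run writes into mlc-chat-config.json, assembled only from the log lines above; the field names mirror the logged setting names and are assumptions, not a dump of the actual file:

```python
# Hypothetical sketch of the dumped mlc-chat-config.json, as a Python dict.
# Every value is taken from the gen_config log above; the real file's exact
# field names and layout may differ.
mlc_chat_config = {
    "model_type": "llama",                  # detected from config.json
    "quantization": "q4f32_1",              # --quantization
    "conv_template": "codellama_instruct",  # --conv-template
    "context_window_size": 16384,           # fallback to max_position_embeddings
    "prefill_chunk_size": 16384,            # defaults to context_window_size
    "max_batch_size": 80,                   # overridden from 1
    "bos_token_id": 1,                      # from generation_config.json
    "eos_token_id": 2,                      # from generation_config.json
    "pad_token_id": 0,                      # system default
    "temperature": 0.7,
    "repetition_penalty": 1.0,
    "top_p": 0.95,
    "mean_gen_len": 128,
    "max_gen_len": 512,
    "shift_fill_factor": 0.3,
}
```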
|
/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_chat convert_weight /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo --quantization q4f32_1 --source-format auto --output /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2
|
[2024-01-29 22:29:35] INFO auto_config.py:115: Found model configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/config.json

[2024-01-29 22:29:35] INFO auto_device.py:85: Not found device: cuda:0

[2024-01-29 22:29:35] INFO auto_device.py:85: Not found device: rocm:0

[2024-01-29 22:29:36] INFO auto_device.py:76: Found device: metal:0

[2024-01-29 22:29:36] INFO auto_device.py:85: Not found device: vulkan:0

[2024-01-29 22:29:36] INFO auto_device.py:85: Not found device: opencl:0

[2024-01-29 22:29:36] INFO auto_device.py:33: Using device: metal:0
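
The device probe above tries backends in a fixed order and picks the first one that is present. A minimal sketch of that probe using TVM's public device API (this mirrors the order visible in the log; mlc_chat's actual auto_device logic may differ in detail):

```python
import tvm

# Probe order as seen in the log: cuda -> rocm -> metal -> vulkan -> opencl.
def detect_device() -> tvm.runtime.Device:
    for backend in ("cuda", "rocm", "metal", "vulkan", "opencl"):
        dev = tvm.device(backend, 0)
        if dev.exist:  # True only when the runtime can actually reach the device
            return dev
    raise RuntimeError("no supported GPU backend found")
```

On this machine only metal:0 exists, so Metal is selected.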
|
[2024-01-29 22:29:36] INFO auto_weight.py:70: Finding weights in: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo

[2024-01-29 22:29:36] INFO auto_weight.py:120: Found source weight format: huggingface-torch. Source configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model.bin.index.json

[2024-01-29 22:29:36] INFO auto_weight.py:143: Found source weight format: huggingface-safetensor. Source configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/model.safetensors.index.json

[2024-01-29 22:29:36] INFO auto_weight.py:106: Using source weight configuration: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model.bin.index.json. Use `--source` to override.

[2024-01-29 22:29:36] INFO auto_weight.py:110: Using source weight format: huggingface-torch. Use `--source-format` to override.

[2024-01-29 22:29:36] INFO auto_config.py:153: Found model type: llama. Use `--model-type` to override.

[2024-01-29 22:29:36] INFO llama_model.py:51: context_window_size not found in config.json. Falling back to max_position_embeddings (16384)

[2024-01-29 22:29:36] INFO llama_model.py:71: prefill_chunk_size defaults to context_window_size (16384)
|
Weight conversion with arguments:

--config /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/config.json

--quantization GroupQuantize(name='q4f32_1', kind='group-quant', group_size=32, quantize_dtype='int4', storage_dtype='uint32', model_dtype='float32', linear_weight_layout='NK', num_elem_per_storage=8, num_storage_per_group=4, max_int_value=7)

--model-type llama

--device metal:0

--source /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model.bin.index.json

--source-format huggingface-torch

--output /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2
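
The GroupQuantize settings above fully determine the quantized shapes logged below: with group_size=32 and num_elem_per_storage=8 (eight int4 values packed into one uint32 word), an (n, k) float32 weight becomes an (n, k/8) uint32 q_weight plus an (n, k/32) float32 q_scale. A small sketch of just that shape arithmetic (not MLC's implementation):

```python
GROUP_SIZE = 32      # GroupQuantize(group_size=32, ...)
ELEMS_PER_WORD = 8   # num_elem_per_storage: int4 values per uint32 word

def q4f32_1_shapes(n: int, k: int):
    """Return (q_weight shape, q_scale shape) for an (n, k) float32 weight."""
    assert k % GROUP_SIZE == 0, "each row must split evenly into groups"
    return (n, k // ELEMS_PER_WORD), (n, k // GROUP_SIZE)

# Matches the log: lm_head (32016, 4096) and mlp.down_proj (4096, 11008).
assert q4f32_1_shapes(32016, 4096) == ((32016, 512), (32016, 128))
assert q4f32_1_shapes(4096, 11008) == ((4096, 1376), (4096, 344))
```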
|
[2024-01-29 22:29:40] INFO huggingface_loader.py:169: Loading HF parameters from: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model-00003-of-00003.bin

[2024-01-29 22:29:42] INFO group_quantization.py:227: Compiling quantize function for key: ((32016, 4096), float32, metal, axis=1, output_transpose=False)

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "lm_head.q_weight", shape: (32016, 512), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "lm_head.q_scale", shape: (32016, 128), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.23.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:43] INFO group_quantization.py:227: Compiling quantize function for key: ((4096, 11008), float32, metal, axis=1, output_transpose=False)

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.23.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.23.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.23.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.24.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.24.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.24.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:43] INFO group_quantization.py:227: Compiling quantize function for key: ((22016, 4096), float32, metal, axis=1, output_transpose=False)

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.24.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.24.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.24.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:43] INFO group_quantization.py:227: Compiling quantize function for key: ((12288, 4096), float32, metal, axis=1, output_transpose=False)

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.24.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.24.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:43] INFO group_quantization.py:227: Compiling quantize function for key: ((4096, 4096), float32, metal, axis=1, output_transpose=False)

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.24.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.24.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.25.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.25.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.25.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.25.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.25.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.25.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.25.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.25.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.25.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.25.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.26.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.26.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.26.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.26.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:43] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.26.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:43] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.26.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.26.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.26.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.26.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.26.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.27.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.27.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.27.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.27.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.27.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.27.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.27.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.27.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.27.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.27.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.28.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.28.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.28.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.28.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.28.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.28.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.28.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.28.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.28.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.28.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.29.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.29.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.29.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.29.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.29.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.29.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.29.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.29.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.29.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.29.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.30.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.30.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.30.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.30.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.30.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.30.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.30.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.30.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.30.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.30.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.31.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.31.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:44] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.31.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:45] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.31.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:45] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.31.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:45] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.31.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:45] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.31.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:45] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.31.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:45] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.31.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:45] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.31.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:45] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.norm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:45] INFO huggingface_loader.py:181: Unloading HF weight file: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model-00003-of-00003.bin
|
[2024-01-29 22:29:45] INFO huggingface_loader.py:169: Loading HF parameters from: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model-00001-of-00003.bin

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.embed_tokens.q_weight", shape: (32016, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.embed_tokens.q_scale", shape: (32016, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.0.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.0.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.0.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.0.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.0.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.0.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.0.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.0.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.0.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.1.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.1.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.1.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.1.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.1.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.1.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.1.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.1.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.1.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.1.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.10.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.10.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.10.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.10.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.10.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.10.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.10.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.10.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.10.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:47] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.10.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:47] INFO huggingface_loader.py:169: Loading HF parameters from: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model-00002-of-00003.bin

[2024-01-29 22:29:50] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.11.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:50] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.11.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:50] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.11.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:50] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.11.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:50] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.11.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:50] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.11.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:50] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.2.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.2.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.2.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.2.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.2.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.2.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.2.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.2.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.2.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.3.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.3.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.3.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.3.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.3.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.3.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.3.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.3.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.3.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.4.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.4.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.4.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.4.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.4.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.4.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.4.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.4.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.4.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.5.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.5.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.5.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.5.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.5.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.5.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.5.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.5.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.5.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.6.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.6.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:51] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.6.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.6.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.6.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.6.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.6.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.6.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.6.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.6.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.7.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.7.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.7.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.7.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.7.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.7.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.7.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.7.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.7.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.7.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.8.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.8.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.8.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.8.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.8.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.8.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.8.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.8.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.8.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.8.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.9.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.9.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.9.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.9.mlp.gate_up_proj.q_weight", shape: (22016, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.9.mlp.gate_up_proj.q_scale", shape: (22016, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.9.post_attention_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.9.self_attn.qkv_proj.q_weight", shape: (12288, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.9.self_attn.qkv_proj.q_scale", shape: (12288, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.9.self_attn.o_proj.q_weight", shape: (4096, 512), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.9.self_attn.o_proj.q_scale", shape: (4096, 128), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.11.input_layernorm.weight", shape: (4096,), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.11.mlp.down_proj.q_weight", shape: (4096, 1376), dtype: uint32

[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "model.layers.11.mlp.down_proj.q_scale", shape: (4096, 344), dtype: float32

[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "model.layers.11.post_attention_layernorm.weight", shape: (4096,), dtype: float32
[2024-01-29 22:29:52] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.12.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
63%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 122/195 [00:12<00:03, 22.18it/s]
[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.12.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
63%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 122/195 [00:12<00:03, 22.18it/s]
[2024-01-29 22:29:52] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.12.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
63%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 122/195 [00:12<00:03, 22.18it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.12.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
63%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 122/195 [00:12<00:03, 22.18it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.12.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
63%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 122/195 [00:12<00:03, 22.18it/s]
66%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 129/195 [00:12<00:02, 27.71it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.12.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
66%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 129/195 [00:12<00:02, 27.71it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.12.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
66%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 129/195 [00:12<00:02, 27.71it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.12.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
66%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 129/195 [00:12<00:02, 27.71it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.12.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
66%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 129/195 [00:13<00:02, 27.71it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.12.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
66%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 129/195 [00:13<00:02, 27.71it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.13.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
66%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 129/195 [00:13<00:02, 27.71it/s]
68%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 133/195 [00:13<00:02, 30.14it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.13.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
68%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 133/195 [00:13<00:02, 30.14it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.13.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
68%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 133/195 [00:13<00:02, 30.14it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.13.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
68%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 133/195 [00:13<00:02, 30.14it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.13.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
68%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 133/195 [00:13<00:02, 30.14it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.13.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
68%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 133/195 [00:13<00:02, 30.14it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.13.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
68%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 133/195 [00:13<00:02, 30.14it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.13.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
68%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 133/195 [00:13<00:02, 30.14it/s]
70%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 137/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.13.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
70%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 137/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.13.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
70%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 137/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.14.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
70%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 137/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.14.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
70%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 137/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.14.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
70%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 137/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.14.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
70%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 137/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.14.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
70%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 137/195 [00:13<00:02, 25.69it/s]
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.14.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.14.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.14.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.14.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.14.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.15.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.15.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.15.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 141/195 [00:13<00:02, 25.69it/s]
75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 146/195 [00:13<00:01, 28.83it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.15.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 146/195 [00:13<00:01, 28.83it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.15.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 146/195 [00:13<00:01, 28.83it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.15.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 146/195 [00:13<00:01, 28.83it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.15.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 146/195 [00:13<00:01, 28.83it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.15.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 146/195 [00:13<00:01, 28.83it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.15.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 146/195 [00:13<00:01, 28.83it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.15.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 146/195 [00:13<00:01, 28.83it/s]
77%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 150/195 [00:13<00:01, 26.07it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.16.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
77%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 150/195 [00:13<00:01, 26.07it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.16.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
77%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 150/195 [00:13<00:01, 26.07it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.16.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
77%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 150/195 [00:13<00:01, 26.07it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.16.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
77%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 150/195 [00:13<00:01, 26.07it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.16.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
77%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 150/195 [00:13<00:01, 26.07it/s]
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
[2024-01-29 22:29:53] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.16.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.16.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.16.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.16.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.16.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.17.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.17.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.17.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 153/195 [00:13<00:01, 24.81it/s]
81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 158/195 [00:13<00:01, 28.17it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.17.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 158/195 [00:14<00:01, 28.17it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.17.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 158/195 [00:14<00:01, 28.17it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.17.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 158/195 [00:14<00:01, 28.17it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.17.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 158/195 [00:14<00:01, 28.17it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.17.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
81%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 158/195 [00:14<00:01, 28.17it/s]
83%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 161/195 [00:14<00:01, 24.63it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.17.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
83%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 161/195 [00:14<00:01, 24.63it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.17.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
83%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 161/195 [00:14<00:01, 24.63it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.18.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
83%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 161/195 [00:14<00:01, 24.63it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.18.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
83%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 161/195 [00:14<00:01, 24.63it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.18.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
83%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 161/195 [00:14<00:01, 24.63it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.18.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
83%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 161/195 [00:14<00:01, 24.63it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.18.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
83%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 161/195 [00:14<00:01, 24.63it/s]
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.18.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.18.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.18.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.18.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.18.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.19.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.19.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.19.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
85%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 165/195 [00:14<00:01, 25.21it/s]
87%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 170/195 [00:14<00:00, 28.85it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.19.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
87%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 170/195 [00:14<00:00, 28.85it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.19.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
87%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 170/195 [00:14<00:00, 28.85it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.19.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
87%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 170/195 [00:14<00:00, 28.85it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.19.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
87%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 170/195 [00:14<00:00, 28.85it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.19.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
87%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 170/195 [00:14<00:00, 28.85it/s]
89%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 173/195 [00:14<00:00, 24.92it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.19.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
89%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 173/195 [00:14<00:00, 24.92it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.19.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
89%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 173/195 [00:14<00:00, 24.92it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.20.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
89%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 173/195 [00:14<00:00, 24.92it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.20.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
89%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 173/195 [00:14<00:00, 24.92it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.20.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
89%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 173/195 [00:14<00:00, 24.92it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.20.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
89%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 173/195 [00:14<00:00, 24.92it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.20.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
89%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 173/195 [00:14<00:00, 24.92it/s]
91%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 177/195 [00:14<00:00, 25.31it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.20.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
91%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 177/195 [00:14<00:00, 25.31it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.20.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
91%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 177/195 [00:14<00:00, 25.31it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.20.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
91%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 177/195 [00:14<00:00, 25.31it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.20.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
91%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 177/195 [00:14<00:00, 25.31it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.20.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
91%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 177/195 [00:14<00:00, 25.31it/s]
92%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 180/195 [00:14<00:00, 26.25it/s]
[2024-01-29 22:29:54] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.21.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
92%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 180/195 [00:14<00:00, 26.25it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.21.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
92%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 180/195 [00:14<00:00, 26.25it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.21.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
92%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 180/195 [00:14<00:00, 26.25it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.21.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
92%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 180/195 [00:14<00:00, 26.25it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.21.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
92%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 180/195 [00:14<00:00, 26.25it/s]
94%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 183/195 [00:14<00:00, 24.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.21.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
94%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 183/195 [00:14<00:00, 24.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.21.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
94%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 183/195 [00:15<00:00, 24.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.21.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
94%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 183/195 [00:15<00:00, 24.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.21.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
94%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 183/195 [00:15<00:00, 24.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.21.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
94%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 183/195 [00:15<00:00, 24.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.22.input_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
94%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 183/195 [00:15<00:00, 24.89it/s]
96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 187/195 [00:15<00:00, 28.32it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.22.mlp.down_proj.q_weight[0m", shape: (4096, 1376), dtype: uint32 |
|
96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 187/195 [00:15<00:00, 28.32it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.22.mlp.down_proj.q_scale[0m", shape: (4096, 344), dtype: float32 |
|
96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 187/195 [00:15<00:00, 28.32it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.22.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 187/195 [00:15<00:00, 28.32it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.22.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 187/195 [00:15<00:00, 28.32it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:129: [Not quantized] Parameter: "[1mmodel.layers.22.post_attention_layernorm.weight[0m", shape: (4096,), dtype: float32 |
|
96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 187/195 [00:15<00:00, 28.32it/s]
97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 190/195 [00:15<00:00, 25.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.22.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 190/195 [00:15<00:00, 25.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.22.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 190/195 [00:15<00:00, 25.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.22.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 190/195 [00:15<00:00, 25.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.22.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 190/195 [00:15<00:00, 25.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.23.mlp.gate_up_proj.q_weight[0m", shape: (22016, 512), dtype: uint32 |
|
97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 190/195 [00:15<00:00, 25.89it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.23.mlp.gate_up_proj.q_scale[0m", shape: (22016, 128), dtype: float32 |
|
97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 190/195 [00:15<00:00, 25.89it/s]
99%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 193/195 [00:15<00:00, 22.21it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.23.self_attn.qkv_proj.q_weight[0m", shape: (12288, 512), dtype: uint32 |
|
99%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 193/195 [00:15<00:00, 22.21it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.23.self_attn.qkv_proj.q_scale[0m", shape: (12288, 128), dtype: float32 |
|
99%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 193/195 [00:15<00:00, 22.21it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.23.self_attn.o_proj.q_weight[0m", shape: (4096, 512), dtype: uint32 |
|
99%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 193/195 [00:15<00:00, 22.21it/s]
[2024-01-29 22:29:55] INFO huggingface_loader.py:121: [Quantized] Parameter: "[1mmodel.layers.23.self_attn.o_proj.q_scale[0m", shape: (4096, 128), dtype: float32 |
|
99%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 193/195 [00:15<00:00, 22.21it/s]
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 195/195 [00:15<00:00, 12.58it/s] |
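The shapes above follow directly from 4-bit group quantization: each uint32 in a q_weight row packs eight 4-bit codes (4096 / 8 = 512 words for o_proj), and q_scale holds one float32 per 32-value group (4096 / 32 = 128 per row; likewise 11008 / 8 = 1376 and 11008 / 344 = 32 for down_proj). The NumPy sketch below reproduces that layout; the symmetric rounding rule and nibble ordering here are illustrative assumptions, not MLC's exact q4f32_1 kernel.

```python
import numpy as np

def pack_q4_groupwise(w: np.ndarray, group_size: int = 32):
    """Toy 4-bit group quantization matching the logged shapes
    (rounding and packing order are assumptions, not MLC's kernel)."""
    rows, cols = w.shape
    groups = w.reshape(rows, cols // group_size, group_size)
    q_scale = np.abs(groups).max(axis=-1) / 7.0           # one fp32 scale per group
    q = np.clip(np.round(groups / q_scale[..., None]) + 8, 0, 15)
    q = q.astype(np.uint32).reshape(rows, cols)
    q_weight = np.zeros((rows, cols // 8), dtype=np.uint32)
    for i in range(8):                                    # 8 nibbles per uint32 word
        q_weight |= q[:, i::8] << np.uint32(4 * i)
    return q_weight, q_scale.astype(np.float32)

w = np.random.default_rng(0).standard_normal((4096, 4096)).astype(np.float32)
qw, qs = pack_q4_groupwise(w)
print(qw.shape, qw.dtype)  # (4096, 512) uint32  -- cf. self_attn.o_proj.q_weight
print(qs.shape, qs.dtype)  # (4096, 128) float32 -- cf. self_attn.o_proj.q_scale
```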
[2024-01-29 22:29:55] INFO huggingface_loader.py:181: Unloading HF weight file: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model-00001-of-00003.bin
[2024-01-29 22:29:55] INFO huggingface_loader.py:181: Unloading HF weight file: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpkodjm9df/repo/pytorch_model-00002-of-00003.bin
[2024-01-29 22:29:55] INFO stats.py:76: Time usage: HF loading: 7.831 sec; Pre-quantization mapping: 3.599 sec; Quantization: 0.595 sec
[2024-01-29 22:29:55] INFO stats.py:90: RAM usage: Peak RAM: 18.415 GB. Total bytes loaded from disk: 25.103 GB
[2024-01-29 22:29:55] INFO convert_weight.py:121: Parameter size after quantization: 3.923 GB
[2024-01-29 22:29:55] INFO convert_weight.py:126: Total parameters: 6,738,546,688
[2024-01-29 22:29:55] INFO convert_weight.py:127: Bits per parameter: 5.001
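The last three numbers cross-check: 4 bits of q_weight per value plus one 32-bit float32 scale shared by each 32-value group gives 4 + 32/32 = 5 bits per parameter, and the reported size yields the same figure if the "GB" above is read as GiB (an assumption, but the only reading that reproduces 5.001):

```python
params = 6_738_546_688
size_bits = 3.923 * 2**30 * 8   # "3.923 GB", read as GiB -- assumption
print(size_bits / params)       # ~5.001, matching the logged value
print(4 + 32 / 32)              # q4f32_1: 4-bit codes + fp32 scale per 32 values
```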
Start storing to cache /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2
[0001/0325] saving lm_head.q_weight
[0002/0325] saving lm_head.q_scale
[0003/0325] saving model.layers.23.input_layernorm.weight
[0004/0325] saving model.layers.23.mlp.down_proj.q_weight
[0005/0325] saving model.layers.23.mlp.down_proj.q_scale
[0006/0325] saving model.layers.23.post_attention_layernorm.weight
[0007/0325] saving model.layers.24.input_layernorm.weight
[0008/0325] saving model.layers.24.mlp.down_proj.q_weight
[0009/0325] saving model.layers.24.mlp.down_proj.q_scale
[0010/0325] saving model.layers.24.mlp.gate_up_proj.q_weight
[0011/0325] saving model.layers.24.mlp.gate_up_proj.q_scale
[0012/0325] saving model.layers.24.post_attention_layernorm.weight
[0013/0325] saving model.layers.24.self_attn.qkv_proj.q_weight
[0014/0325] saving model.layers.24.self_attn.qkv_proj.q_scale
[0015/0325] saving model.layers.24.self_attn.o_proj.q_weight
[0016/0325] saving model.layers.24.self_attn.o_proj.q_scale
[0017/0325] saving model.layers.25.input_layernorm.weight
[0018/0325] saving model.layers.25.mlp.down_proj.q_weight
[0019/0325] saving model.layers.25.mlp.down_proj.q_scale
[0020/0325] saving model.layers.25.mlp.gate_up_proj.q_weight
[0021/0325] saving model.layers.25.mlp.gate_up_proj.q_scale
[0022/0325] saving model.layers.25.post_attention_layernorm.weight
[0023/0325] saving model.layers.25.self_attn.qkv_proj.q_weight
[0024/0325] saving model.layers.25.self_attn.qkv_proj.q_scale
[0025/0325] saving model.layers.25.self_attn.o_proj.q_weight
[0026/0325] saving model.layers.25.self_attn.o_proj.q_scale
[0027/0325] saving model.layers.26.input_layernorm.weight
[0028/0325] saving model.layers.26.mlp.down_proj.q_weight
[0029/0325] saving model.layers.26.mlp.down_proj.q_scale
[0030/0325] saving model.layers.26.mlp.gate_up_proj.q_weight
[0031/0325] saving model.layers.26.mlp.gate_up_proj.q_scale
[0032/0325] saving model.layers.26.post_attention_layernorm.weight
[0033/0325] saving model.layers.26.self_attn.qkv_proj.q_weight
[0034/0325] saving model.layers.26.self_attn.qkv_proj.q_scale
[0035/0325] saving model.layers.26.self_attn.o_proj.q_weight
[0036/0325] saving model.layers.26.self_attn.o_proj.q_scale
[0037/0325] saving model.layers.27.input_layernorm.weight
[0038/0325] saving model.layers.27.mlp.down_proj.q_weight
[0039/0325] saving model.layers.27.mlp.down_proj.q_scale
[0040/0325] saving model.layers.27.mlp.gate_up_proj.q_weight
[0041/0325] saving model.layers.27.mlp.gate_up_proj.q_scale
[0042/0325] saving model.layers.27.post_attention_layernorm.weight
[0043/0325] saving model.layers.27.self_attn.qkv_proj.q_weight
[0044/0325] saving model.layers.27.self_attn.qkv_proj.q_scale
[0045/0325] saving model.layers.27.self_attn.o_proj.q_weight
[0046/0325] saving model.layers.27.self_attn.o_proj.q_scale
[0047/0325] saving model.layers.28.input_layernorm.weight
[0048/0325] saving model.layers.28.mlp.down_proj.q_weight
[0049/0325] saving model.layers.28.mlp.down_proj.q_scale
[0050/0325] saving model.layers.28.mlp.gate_up_proj.q_weight
[0051/0325] saving model.layers.28.mlp.gate_up_proj.q_scale
[0052/0325] saving model.layers.28.post_attention_layernorm.weight
[0053/0325] saving model.layers.28.self_attn.qkv_proj.q_weight
[0054/0325] saving model.layers.28.self_attn.qkv_proj.q_scale
[0055/0325] saving model.layers.28.self_attn.o_proj.q_weight
[0056/0325] saving model.layers.28.self_attn.o_proj.q_scale
[0057/0325] saving model.layers.29.input_layernorm.weight
[0058/0325] saving model.layers.29.mlp.down_proj.q_weight
[0059/0325] saving model.layers.29.mlp.down_proj.q_scale
[0060/0325] saving model.layers.29.mlp.gate_up_proj.q_weight
[0061/0325] saving model.layers.29.mlp.gate_up_proj.q_scale
[0062/0325] saving model.layers.29.post_attention_layernorm.weight
[0063/0325] saving model.layers.29.self_attn.qkv_proj.q_weight
[0064/0325] saving model.layers.29.self_attn.qkv_proj.q_scale
[0065/0325] saving model.layers.29.self_attn.o_proj.q_weight
[0066/0325] saving model.layers.29.self_attn.o_proj.q_scale
[0067/0325] saving model.layers.30.input_layernorm.weight
[0068/0325] saving model.layers.30.mlp.down_proj.q_weight
[0069/0325] saving model.layers.30.mlp.down_proj.q_scale
[0070/0325] saving model.layers.30.mlp.gate_up_proj.q_weight
[0071/0325] saving model.layers.30.mlp.gate_up_proj.q_scale
[0072/0325] saving model.layers.30.post_attention_layernorm.weight
[0073/0325] saving model.layers.30.self_attn.qkv_proj.q_weight
[0074/0325] saving model.layers.30.self_attn.qkv_proj.q_scale
[0075/0325] saving model.layers.30.self_attn.o_proj.q_weight
[0076/0325] saving model.layers.30.self_attn.o_proj.q_scale
[0077/0325] saving model.layers.31.input_layernorm.weight
[0078/0325] saving model.layers.31.mlp.down_proj.q_weight
[0079/0325] saving model.layers.31.mlp.down_proj.q_scale
[0080/0325] saving model.layers.31.mlp.gate_up_proj.q_weight
[0081/0325] saving model.layers.31.mlp.gate_up_proj.q_scale
[0082/0325] saving model.layers.31.post_attention_layernorm.weight
[0083/0325] saving model.layers.31.self_attn.qkv_proj.q_weight
[0084/0325] saving model.layers.31.self_attn.qkv_proj.q_scale
[0085/0325] saving model.layers.31.self_attn.o_proj.q_weight
[0086/0325] saving model.layers.31.self_attn.o_proj.q_scale
[0087/0325] saving model.norm.weight
[0088/0325] saving model.embed_tokens.q_weight
[0089/0325] saving model.embed_tokens.q_scale
[0090/0325] saving model.layers.0.input_layernorm.weight
[0091/0325] saving model.layers.0.mlp.down_proj.q_weight
[0092/0325] saving model.layers.0.mlp.down_proj.q_scale
[0093/0325] saving model.layers.0.mlp.gate_up_proj.q_weight
[0094/0325] saving model.layers.0.mlp.gate_up_proj.q_scale
[0095/0325] saving model.layers.0.post_attention_layernorm.weight
[0096/0325] saving model.layers.0.self_attn.qkv_proj.q_weight
[0097/0325] saving model.layers.0.self_attn.qkv_proj.q_scale
[0098/0325] saving model.layers.0.self_attn.o_proj.q_weight
[0099/0325] saving model.layers.0.self_attn.o_proj.q_scale
[0100/0325] saving model.layers.1.input_layernorm.weight
[0101/0325] saving model.layers.1.mlp.down_proj.q_weight
[0102/0325] saving model.layers.1.mlp.down_proj.q_scale
[0103/0325] saving model.layers.1.mlp.gate_up_proj.q_weight
[0104/0325] saving model.layers.1.mlp.gate_up_proj.q_scale
[0105/0325] saving model.layers.1.post_attention_layernorm.weight
[0106/0325] saving model.layers.1.self_attn.qkv_proj.q_weight
[0107/0325] saving model.layers.1.self_attn.qkv_proj.q_scale
[0108/0325] saving model.layers.1.self_attn.o_proj.q_weight
[0109/0325] saving model.layers.1.self_attn.o_proj.q_scale
[0110/0325] saving model.layers.10.input_layernorm.weight
[0111/0325] saving model.layers.10.mlp.down_proj.q_weight
[0112/0325] saving model.layers.10.mlp.down_proj.q_scale
[0113/0325] saving model.layers.10.mlp.gate_up_proj.q_weight
[0114/0325] saving model.layers.10.mlp.gate_up_proj.q_scale
[0115/0325] saving model.layers.10.post_attention_layernorm.weight
[0116/0325] saving model.layers.10.self_attn.qkv_proj.q_weight
[0117/0325] saving model.layers.10.self_attn.qkv_proj.q_scale
[0118/0325] saving model.layers.10.self_attn.o_proj.q_weight
[0119/0325] saving model.layers.10.self_attn.o_proj.q_scale
[0120/0325] saving model.layers.11.mlp.gate_up_proj.q_weight
[0121/0325] saving model.layers.11.mlp.gate_up_proj.q_scale
[0122/0325] saving model.layers.11.self_attn.qkv_proj.q_weight
[0123/0325] saving model.layers.11.self_attn.qkv_proj.q_scale
[0124/0325] saving model.layers.11.self_attn.o_proj.q_weight
[0125/0325] saving model.layers.11.self_attn.o_proj.q_scale
[0126/0325] saving model.layers.2.input_layernorm.weight
[0127/0325] saving model.layers.2.mlp.down_proj.q_weight
[0128/0325] saving model.layers.2.mlp.down_proj.q_scale
[0129/0325] saving model.layers.2.mlp.gate_up_proj.q_weight
[0130/0325] saving model.layers.2.mlp.gate_up_proj.q_scale
[0131/0325] saving model.layers.2.post_attention_layernorm.weight
[0132/0325] saving model.layers.2.self_attn.qkv_proj.q_weight
[0133/0325] saving model.layers.2.self_attn.qkv_proj.q_scale
[0134/0325] saving model.layers.2.self_attn.o_proj.q_weight
[0135/0325] saving model.layers.2.self_attn.o_proj.q_scale
[0136/0325] saving model.layers.3.input_layernorm.weight
[0137/0325] saving model.layers.3.mlp.down_proj.q_weight
[0138/0325] saving model.layers.3.mlp.down_proj.q_scale
[0139/0325] saving model.layers.3.mlp.gate_up_proj.q_weight
[0140/0325] saving model.layers.3.mlp.gate_up_proj.q_scale
[0141/0325] saving model.layers.3.post_attention_layernorm.weight
[0142/0325] saving model.layers.3.self_attn.qkv_proj.q_weight
[0143/0325] saving model.layers.3.self_attn.qkv_proj.q_scale
[0144/0325] saving model.layers.3.self_attn.o_proj.q_weight
[0145/0325] saving model.layers.3.self_attn.o_proj.q_scale
[0146/0325] saving model.layers.4.input_layernorm.weight
[0147/0325] saving model.layers.4.mlp.down_proj.q_weight
[0148/0325] saving model.layers.4.mlp.down_proj.q_scale
[0149/0325] saving model.layers.4.mlp.gate_up_proj.q_weight
[0150/0325] saving model.layers.4.mlp.gate_up_proj.q_scale
[0151/0325] saving model.layers.4.post_attention_layernorm.weight
[0152/0325] saving model.layers.4.self_attn.qkv_proj.q_weight
[0153/0325] saving model.layers.4.self_attn.qkv_proj.q_scale
[0154/0325] saving model.layers.4.self_attn.o_proj.q_weight
[0155/0325] saving model.layers.4.self_attn.o_proj.q_scale
[0156/0325] saving model.layers.5.input_layernorm.weight
[0157/0325] saving model.layers.5.mlp.down_proj.q_weight
[0158/0325] saving model.layers.5.mlp.down_proj.q_scale
[0159/0325] saving model.layers.5.mlp.gate_up_proj.q_weight
[0160/0325] saving model.layers.5.mlp.gate_up_proj.q_scale
[0161/0325] saving model.layers.5.post_attention_layernorm.weight
[0162/0325] saving model.layers.5.self_attn.qkv_proj.q_weight
[0163/0325] saving model.layers.5.self_attn.qkv_proj.q_scale
[0164/0325] saving model.layers.5.self_attn.o_proj.q_weight
[0165/0325] saving model.layers.5.self_attn.o_proj.q_scale
[0166/0325] saving model.layers.6.input_layernorm.weight
[0167/0325] saving model.layers.6.mlp.down_proj.q_weight
[0168/0325] saving model.layers.6.mlp.down_proj.q_scale
[0169/0325] saving model.layers.6.mlp.gate_up_proj.q_weight
[0170/0325] saving model.layers.6.mlp.gate_up_proj.q_scale
[0171/0325] saving model.layers.6.post_attention_layernorm.weight
[0172/0325] saving model.layers.6.self_attn.qkv_proj.q_weight
[0173/0325] saving model.layers.6.self_attn.qkv_proj.q_scale
[0174/0325] saving model.layers.6.self_attn.o_proj.q_weight
[0175/0325] saving model.layers.6.self_attn.o_proj.q_scale
[0176/0325] saving model.layers.7.input_layernorm.weight
[0177/0325] saving model.layers.7.mlp.down_proj.q_weight
[0178/0325] saving model.layers.7.mlp.down_proj.q_scale
[0179/0325] saving model.layers.7.mlp.gate_up_proj.q_weight
[0180/0325] saving model.layers.7.mlp.gate_up_proj.q_scale
[0181/0325] saving model.layers.7.post_attention_layernorm.weight
[0182/0325] saving model.layers.7.self_attn.qkv_proj.q_weight
[0183/0325] saving model.layers.7.self_attn.qkv_proj.q_scale
[0184/0325] saving model.layers.7.self_attn.o_proj.q_weight
[0185/0325] saving model.layers.7.self_attn.o_proj.q_scale
[0186/0325] saving model.layers.8.input_layernorm.weight
[0187/0325] saving model.layers.8.mlp.down_proj.q_weight
[0188/0325] saving model.layers.8.mlp.down_proj.q_scale
[0189/0325] saving model.layers.8.mlp.gate_up_proj.q_weight
[0190/0325] saving model.layers.8.mlp.gate_up_proj.q_scale
[0191/0325] saving model.layers.8.post_attention_layernorm.weight
[0192/0325] saving model.layers.8.self_attn.qkv_proj.q_weight
[0193/0325] saving model.layers.8.self_attn.qkv_proj.q_scale
[0194/0325] saving model.layers.8.self_attn.o_proj.q_weight
[0195/0325] saving model.layers.8.self_attn.o_proj.q_scale
[0196/0325] saving model.layers.9.input_layernorm.weight
[0197/0325] saving model.layers.9.mlp.down_proj.q_weight
[0198/0325] saving model.layers.9.mlp.down_proj.q_scale
[0199/0325] saving model.layers.9.mlp.gate_up_proj.q_weight
[0200/0325] saving model.layers.9.mlp.gate_up_proj.q_scale
[0201/0325] saving model.layers.9.post_attention_layernorm.weight
[0202/0325] saving model.layers.9.self_attn.qkv_proj.q_weight
[0203/0325] saving model.layers.9.self_attn.qkv_proj.q_scale
[0204/0325] saving model.layers.9.self_attn.o_proj.q_weight
[0205/0325] saving model.layers.9.self_attn.o_proj.q_scale
[0206/0325] saving model.layers.11.input_layernorm.weight
[0207/0325] saving model.layers.11.mlp.down_proj.q_weight
[0208/0325] saving model.layers.11.mlp.down_proj.q_scale
[0209/0325] saving model.layers.11.post_attention_layernorm.weight
[0210/0325] saving model.layers.12.input_layernorm.weight
[0211/0325] saving model.layers.12.mlp.down_proj.q_weight
[0212/0325] saving model.layers.12.mlp.down_proj.q_scale
[0213/0325] saving model.layers.12.mlp.gate_up_proj.q_weight
[0214/0325] saving model.layers.12.mlp.gate_up_proj.q_scale
[0215/0325] saving model.layers.12.post_attention_layernorm.weight
[0216/0325] saving model.layers.12.self_attn.qkv_proj.q_weight
[0217/0325] saving model.layers.12.self_attn.qkv_proj.q_scale
[0218/0325] saving model.layers.12.self_attn.o_proj.q_weight
[0219/0325] saving model.layers.12.self_attn.o_proj.q_scale
[0220/0325] saving model.layers.13.input_layernorm.weight
[0221/0325] saving model.layers.13.mlp.down_proj.q_weight
[0222/0325] saving model.layers.13.mlp.down_proj.q_scale
[0223/0325] saving model.layers.13.mlp.gate_up_proj.q_weight
[0224/0325] saving model.layers.13.mlp.gate_up_proj.q_scale
[0225/0325] saving model.layers.13.post_attention_layernorm.weight
[0226/0325] saving model.layers.13.self_attn.qkv_proj.q_weight
[0227/0325] saving model.layers.13.self_attn.qkv_proj.q_scale
[0228/0325] saving model.layers.13.self_attn.o_proj.q_weight
[0229/0325] saving model.layers.13.self_attn.o_proj.q_scale
[0230/0325] saving model.layers.14.input_layernorm.weight
[0231/0325] saving model.layers.14.mlp.down_proj.q_weight
[0232/0325] saving model.layers.14.mlp.down_proj.q_scale
[0233/0325] saving model.layers.14.mlp.gate_up_proj.q_weight
[0234/0325] saving model.layers.14.mlp.gate_up_proj.q_scale
[0235/0325] saving model.layers.14.post_attention_layernorm.weight
[0236/0325] saving model.layers.14.self_attn.qkv_proj.q_weight
[0237/0325] saving model.layers.14.self_attn.qkv_proj.q_scale
[0238/0325] saving model.layers.14.self_attn.o_proj.q_weight
[0239/0325] saving model.layers.14.self_attn.o_proj.q_scale
[0240/0325] saving model.layers.15.input_layernorm.weight
[0241/0325] saving model.layers.15.mlp.down_proj.q_weight
[0242/0325] saving model.layers.15.mlp.down_proj.q_scale
[0243/0325] saving model.layers.15.mlp.gate_up_proj.q_weight
[0244/0325] saving model.layers.15.mlp.gate_up_proj.q_scale
[0245/0325] saving model.layers.15.post_attention_layernorm.weight
[0246/0325] saving model.layers.15.self_attn.qkv_proj.q_weight
[0247/0325] saving model.layers.15.self_attn.qkv_proj.q_scale
[0248/0325] saving model.layers.15.self_attn.o_proj.q_weight
[0249/0325] saving model.layers.15.self_attn.o_proj.q_scale
[0250/0325] saving model.layers.16.input_layernorm.weight
[0251/0325] saving model.layers.16.mlp.down_proj.q_weight
[0252/0325] saving model.layers.16.mlp.down_proj.q_scale
[0253/0325] saving model.layers.16.mlp.gate_up_proj.q_weight
[0254/0325] saving model.layers.16.mlp.gate_up_proj.q_scale
[0255/0325] saving model.layers.16.post_attention_layernorm.weight
[0256/0325] saving model.layers.16.self_attn.qkv_proj.q_weight
[0257/0325] saving model.layers.16.self_attn.qkv_proj.q_scale
[0258/0325] saving model.layers.16.self_attn.o_proj.q_weight
[0259/0325] saving model.layers.16.self_attn.o_proj.q_scale
[0260/0325] saving model.layers.17.input_layernorm.weight
[0261/0325] saving model.layers.17.mlp.down_proj.q_weight
[0262/0325] saving model.layers.17.mlp.down_proj.q_scale
[0263/0325] saving model.layers.17.mlp.gate_up_proj.q_weight
[0264/0325] saving model.layers.17.mlp.gate_up_proj.q_scale
[0265/0325] saving model.layers.17.post_attention_layernorm.weight
[0266/0325] saving model.layers.17.self_attn.qkv_proj.q_weight
[0267/0325] saving model.layers.17.self_attn.qkv_proj.q_scale
[0268/0325] saving model.layers.17.self_attn.o_proj.q_weight
[0269/0325] saving model.layers.17.self_attn.o_proj.q_scale
[0270/0325] saving model.layers.18.input_layernorm.weight
[0271/0325] saving model.layers.18.mlp.down_proj.q_weight
[0272/0325] saving model.layers.18.mlp.down_proj.q_scale
[0273/0325] saving model.layers.18.mlp.gate_up_proj.q_weight
[0274/0325] saving model.layers.18.mlp.gate_up_proj.q_scale
[0275/0325] saving model.layers.18.post_attention_layernorm.weight
[0276/0325] saving model.layers.18.self_attn.qkv_proj.q_weight
[0277/0325] saving model.layers.18.self_attn.qkv_proj.q_scale
[0278/0325] saving model.layers.18.self_attn.o_proj.q_weight
[0279/0325] saving model.layers.18.self_attn.o_proj.q_scale
[0280/0325] saving model.layers.19.input_layernorm.weight
[0281/0325] saving model.layers.19.mlp.down_proj.q_weight
[0282/0325] saving model.layers.19.mlp.down_proj.q_scale
[0283/0325] saving model.layers.19.mlp.gate_up_proj.q_weight
[0284/0325] saving model.layers.19.mlp.gate_up_proj.q_scale
[0285/0325] saving model.layers.19.post_attention_layernorm.weight
[0286/0325] saving model.layers.19.self_attn.qkv_proj.q_weight
[0287/0325] saving model.layers.19.self_attn.qkv_proj.q_scale
[0288/0325] saving model.layers.19.self_attn.o_proj.q_weight
[0289/0325] saving model.layers.19.self_attn.o_proj.q_scale
[0290/0325] saving model.layers.20.input_layernorm.weight
[0291/0325] saving model.layers.20.mlp.down_proj.q_weight
[0292/0325] saving model.layers.20.mlp.down_proj.q_scale
[0293/0325] saving model.layers.20.mlp.gate_up_proj.q_weight
[0294/0325] saving model.layers.20.mlp.gate_up_proj.q_scale
[0295/0325] saving model.layers.20.post_attention_layernorm.weight
[0296/0325] saving model.layers.20.self_attn.qkv_proj.q_weight
[0297/0325] saving model.layers.20.self_attn.qkv_proj.q_scale
[0298/0325] saving model.layers.20.self_attn.o_proj.q_weight
[0299/0325] saving model.layers.20.self_attn.o_proj.q_scale
[0300/0325] saving model.layers.21.input_layernorm.weight
[0301/0325] saving model.layers.21.mlp.down_proj.q_weight
[0302/0325] saving model.layers.21.mlp.down_proj.q_scale
[0303/0325] saving model.layers.21.mlp.gate_up_proj.q_weight
[0304/0325] saving model.layers.21.mlp.gate_up_proj.q_scale
[0305/0325] saving model.layers.21.post_attention_layernorm.weight
[0306/0325] saving model.layers.21.self_attn.qkv_proj.q_weight
[0307/0325] saving model.layers.21.self_attn.qkv_proj.q_scale
[0308/0325] saving model.layers.21.self_attn.o_proj.q_weight
[0309/0325] saving model.layers.21.self_attn.o_proj.q_scale
[0310/0325] saving model.layers.22.input_layernorm.weight
[0311/0325] saving model.layers.22.mlp.down_proj.q_weight
[0312/0325] saving model.layers.22.mlp.down_proj.q_scale
[0313/0325] saving model.layers.22.mlp.gate_up_proj.q_weight
[0314/0325] saving model.layers.22.mlp.gate_up_proj.q_scale
[0315/0325] saving model.layers.22.post_attention_layernorm.weight
[0316/0325] saving model.layers.22.self_attn.qkv_proj.q_weight
[0317/0325] saving model.layers.22.self_attn.qkv_proj.q_scale
[0318/0325] saving model.layers.22.self_attn.o_proj.q_weight
[0319/0325] saving model.layers.22.self_attn.o_proj.q_scale
[0320/0325] saving model.layers.23.mlp.gate_up_proj.q_weight
[0321/0325] saving model.layers.23.mlp.gate_up_proj.q_scale
[2024-01-29 22:30:03] INFO convert_weight.py:143: Saved to directory: /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2
[0322/0325] saving model.layers.23.self_attn.qkv_proj.q_weight
[0323/0325] saving model.layers.23.self_attn.qkv_proj.q_scale
[0324/0325] saving model.layers.23.self_attn.o_proj.q_weight
[0325/0325] saving model.layers.23.self_attn.o_proj.q_scale
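A note on the q_weight/q_scale pairs saved above: under the q4f32_1 scheme each quantized projection is stored as a packed 4-bit integer tensor (q_weight) plus per-group float32 scales (q_scale), which is why the two always appear together in the log. The sketch below is a minimal, hypothetical NumPy dequantizer, assuming the common MLC-LLM q4 layout of eight 4-bit values packed per uint32 word, a symmetric zero point of 7, and one scale per group of 32 elements; the function name and these layout details are assumptions for illustration, not taken from the log.

```python
import numpy as np

def dequantize_q4(q_weight: np.ndarray, q_scale: np.ndarray,
                  group_size: int = 32) -> np.ndarray:
    """Hypothetical dequantizer for one q4f32_1 (q_weight, q_scale) pair.

    Assumes: q_weight is uint32 with eight 4-bit values packed per word
    (low nibble first), zero point 7, and q_scale holds one float32 scale
    per group of `group_size` elements along the last axis.
    """
    n, k_packed = q_weight.shape
    # Unpack eight 4-bit nibbles from each uint32 word.
    shifts = np.arange(8, dtype=np.uint32) * 4
    nibbles = (q_weight[:, :, None] >> shifts) & 0xF        # (n, k_packed, 8)
    q = nibbles.reshape(n, k_packed * 8).astype(np.float32)
    # Symmetric dequantization: value = (q - 7) * per-group scale.
    scales = np.repeat(q_scale.astype(np.float32), group_size, axis=1)
    return (q - 7.0) * scales[:, : q.shape[1]]

# Example: one uint32 word holding nibbles 0..7, one scale of 0.5.
w = np.array([[0x76543210]], dtype=np.uint32)
s = np.array([[0.5]], dtype=np.float32)
print(dequantize_q4(w, s, group_size=8))   # [-3.5, -3.0, ..., 0.0]
```

Storing scales separately keeps the packed weights compact while letting each group of 32 weights recover its own dynamic range at load time.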
All finished, 114 total shards committed, record saved to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2/ndarray-cache.json |
Also saved a bf16 record to /var/folders/50/mzqbqxqj5fddcby2mg3h334c0000gp/T/tmpicoo1ta2/ndarray-cache-b16.json |
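The ndarray-cache.json mentioned in the final lines is the manifest that maps each committed shard file to the tensors it contains. Below is a minimal inspection sketch, assuming the usual schema of a top-level "records" list with one entry per shard file and a nested per-tensor "records" list; the path is a placeholder to be replaced with the output directory from the log, and the key names are assumptions based on the common format rather than anything stated here.

```python
import json
from pathlib import Path

# Placeholder path; substitute the output directory shown in the log.
cache_path = Path("dist/ndarray-cache.json")

with cache_path.open() as f:
    cache = json.load(f)

# Assumed schema: cache["records"] lists shard files, each of which
# carries its own nested "records" list describing the tensors inside.
shards = cache["records"]
tensors = [t for shard in shards for t in shard["records"]]

print(f"{len(shards)} shards, {len(tensors)} tensors")
print(f"total bytes: {sum(s['nbytes'] for s in shards):,}")
for t in tensors[:3]:
    print(t["name"], t["dtype"], t["shape"])
```

A quick check that the shard count printed here matches the "114 total shards committed" line above is a cheap way to confirm the conversion completed cleanly.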