/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_llm gen_config /Users/Shared/models/Qwen2-Math-72B-Instruct --quantization q0f16 --conv-template chatml --output local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC
[2024-08-08 16:29:55] INFO auto_config.py:116: Found model configuration: /Users/Shared/models/Qwen2-Math-72B-Instruct/config.json
[2024-08-08 16:29:55] INFO auto_config.py:154: Found model type: qwen2. Use `--model-type` to override.
[2024-08-08 16:29:55] INFO qwen2_model.py:50: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-08-08 16:29:55] INFO qwen2_model.py:67: prefill_chunk_size defaults to 2048
[2024-08-08 16:29:55] INFO config.py:107: Overriding max_batch_size from 1 to 80
[2024-08-08 16:29:55] INFO gen_config.py:147: [generation_config.json] Setting bos_token_id: 151643
[2024-08-08 16:29:55] INFO gen_config.py:147: [generation_config.json] Setting pad_token_id: 151643
[2024-08-08 16:29:55] INFO gen_config.py:147: [generation_config.json] Setting eos_token_id: [151645, 151643]
[2024-08-08 16:29:55] INFO gen_config.py:147: [generation_config.json] Setting repetition_penalty: 1.05
[2024-08-08 16:29:55] INFO gen_config.py:147: [generation_config.json] Setting temperature: 0.7
[2024-08-08 16:29:55] INFO gen_config.py:147: [generation_config.json] Setting top_p: 0.8
[2024-08-08 16:29:55] INFO gen_config.py:161: Not found tokenizer config: /Users/Shared/models/Qwen2-Math-72B-Instruct/tokenizer.model
[2024-08-08 16:29:55] INFO gen_config.py:159: Found tokenizer config: /Users/Shared/models/Qwen2-Math-72B-Instruct/tokenizer.json. Copying to local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC/tokenizer.json
[2024-08-08 16:29:55] INFO gen_config.py:159: Found tokenizer config: /Users/Shared/models/Qwen2-Math-72B-Instruct/vocab.json. Copying to local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC/vocab.json
[2024-08-08 16:29:55] INFO gen_config.py:159: Found tokenizer config: /Users/Shared/models/Qwen2-Math-72B-Instruct/merges.txt. Copying to local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC/merges.txt
[2024-08-08 16:29:55] INFO gen_config.py:161: Not found tokenizer config: /Users/Shared/models/Qwen2-Math-72B-Instruct/added_tokens.json
[2024-08-08 16:29:55] INFO gen_config.py:159: Found tokenizer config: /Users/Shared/models/Qwen2-Math-72B-Instruct/tokenizer_config.json. Copying to local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC/tokenizer_config.json
[2024-08-08 16:29:55] INFO gen_config.py:220: Detected tokenizer info: {'token_postproc_method': 'byte_level', 'prepend_space_in_encode': False, 'strip_space_in_decode': False}
[2024-08-08 16:29:55] INFO gen_config.py:32: [System default] Setting presence_penalty: 0.0
[2024-08-08 16:29:55] INFO gen_config.py:32: [System default] Setting frequency_penalty: 0.0
[2024-08-08 16:29:55] INFO gen_config.py:248: Dumping configuration file to: local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC/mlc-chat-config.json
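The gen_config step above only writes metadata: it copies the tokenizer files and dumps the detected chat template and sampling defaults into mlc-chat-config.json. A minimal sketch for spot-checking that file (only the output path is taken verbatim from the log; the JSON key names are an assumption based on the values logged above):

```python
import json
from pathlib import Path

# Output path taken verbatim from the gen_config log above.
cfg_path = Path("local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC/mlc-chat-config.json")
cfg = json.loads(cfg_path.read_text())

# Key names are assumptions; the expected values come from the log lines above.
for key, expected in [
    ("temperature", 0.7),
    ("top_p", 0.8),
    ("repetition_penalty", 1.05),
    ("frequency_penalty", 0.0),
    ("presence_penalty", 0.0),
]:
    print(key, cfg.get(key), "expected", expected)
```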
/Users/cfruan/miniconda3/envs/mlc-chat-venv/bin/python -m mlc_llm convert_weight /Users/Shared/models/Qwen2-Math-72B-Instruct --quantization q0f16 --output local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC
[2024-08-08 16:29:56] INFO auto_config.py:116: Found model configuration: /Users/Shared/models/Qwen2-Math-72B-Instruct/config.json
[2024-08-08 16:29:56] INFO auto_device.py:88: Not found device: cuda:0
[2024-08-08 16:29:57] INFO auto_device.py:88: Not found device: rocm:0
[2024-08-08 16:29:58] INFO auto_device.py:79: Found device: metal:0
[2024-08-08 16:29:59] INFO auto_device.py:88: Not found device: vulkan:0
[2024-08-08 16:29:59] INFO auto_device.py:88: Not found device: opencl:0
[2024-08-08 16:29:59] INFO auto_device.py:35: Using device: metal:0
[2024-08-08 16:29:59] INFO auto_weight.py:71: Finding weights in: /Users/Shared/models/Qwen2-Math-72B-Instruct
[2024-08-08 16:29:59] INFO auto_weight.py:137: Not found Huggingface PyTorch
[2024-08-08 16:29:59] INFO auto_weight.py:144: Found source weight format: huggingface-safetensor. Source configuration: /Users/Shared/models/Qwen2-Math-72B-Instruct/model.safetensors.index.json
[2024-08-08 16:29:59] INFO auto_weight.py:107: Using source weight configuration: /Users/Shared/models/Qwen2-Math-72B-Instruct/model.safetensors.index.json. Use `--source` to override.
[2024-08-08 16:29:59] INFO auto_weight.py:111: Using source weight format: huggingface-safetensor. Use `--source-format` to override.
[2024-08-08 16:29:59] INFO auto_config.py:154: Found model type: qwen2. Use `--model-type` to override.
[2024-08-08 16:29:59] INFO qwen2_model.py:50: context_window_size not found in config.json. Falling back to max_position_embeddings (4096)
[2024-08-08 16:29:59] INFO qwen2_model.py:67: prefill_chunk_size defaults to 2048
Weight conversion with arguments:
--config /Users/Shared/models/Qwen2-Math-72B-Instruct/config.json
--quantization NoQuantize(name='q0f16', kind='no-quant', model_dtype='float16')
--model-type qwen2
--device metal:0
--source /Users/Shared/models/Qwen2-Math-72B-Instruct/model.safetensors.index.json
--source-format huggingface-safetensor
--output local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC
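Since q0f16 is NoQuantize with float16 storage, every parameter costs two bytes on disk, and the tensor shapes in the log below pin down the total size. A back-of-envelope sketch (shapes copied from the log; the 80-layer count is inferred from model.layers.0 through model.layers.79):

```python
# Rough fp16 footprint of Qwen2-Math-72B-Instruct under q0f16 (no quantization).
hidden, intermediate, vocab, qkv, layers = 8192, 29568, 152064, 10240, 80

per_layer = (
    qkv * hidden + qkv            # self_attn.c_attn weight (10240, 8192) + bias (10240,)
    + hidden * hidden             # self_attn.o_proj (8192, 8192)
    + 2 * intermediate * hidden   # mlp.gate_up_proj (59136, 8192): gate and up fused
    + hidden * intermediate       # mlp.down_proj (8192, 29568)
    + 2 * hidden                  # input_layernorm + post_attention_layernorm
)
total_params = layers * per_layer + 2 * vocab * hidden + hidden  # embed_tokens, lm_head, final norm

print(f"{total_params / 1e9:.1f} B parameters")                # ~72.7 B
print(f"~{total_params * 2 / 1e9:.0f} GB of float16 weights")  # ~145 GB
```

That roughly 145 GB figure is also why the loader below streams one safetensors shard at a time and unloads it as soon as its tensors have been stored.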
Start storing to cache local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC
0%| | 0/563 [00:00<?, ?it/s] [2024-08-08 16:30:02] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00037-of-00037.safetensors
0%| | 0/563 [00:00<?, ?it/s] [2024-08-08 16:30:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "lm_head.weight", shape: (152064, 8192), dtype: float16
0%| | 0/563 [00:05<?, ?it/s] 0%| | 1/563 [00:12<1:52:28, 12.01s/it] [2024-08-08 16:30:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.79.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
0%| | 1/563 [00:12<1:52:28, 12.01s/it] 0%| | 2/563 [00:14<57:22, 6.14s/it] [2024-08-08 16:30:16] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00036-of-00037.safetensors
0%| | 2/563 [00:14<57:22, 6.14s/it] [2024-08-08 16:30:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.79.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
0%| | 2/563 [00:16<57:22, 6.14s/it] 1%| | 3/563 [00:19<52:55, 5.67s/it] [2024-08-08 16:30:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.norm.weight", shape: (8192,), dtype: float16
1%| | 3/563 [00:19<52:55, 5.67s/it] 1%| | 4/563 [00:19<32:25, 3.48s/it] [2024-08-08 16:30:21] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00036-of-00037.safetensors
1%| | 4/563 [00:19<32:25, 3.48s/it] [2024-08-08 16:30:21] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00037-of-00037.safetensors
1%| | 4/563 [00:19<32:25, 3.48s/it] [2024-08-08 16:30:22] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00001-of-00037.safetensors
1%| | 4/563 [00:19<32:25, 3.48s/it] [2024-08-08 16:30:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.embed_tokens.weight", shape: (152064, 8192), dtype: float16
1%| | 4/563 [00:24<32:25, 3.48s/it] 1%| | 5/563 [00:30<56:53, 6.12s/it] [2024-08-08 16:30:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.input_layernorm.weight", shape: (8192,), dtype: float16
1%| | 5/563 [00:30<56:53, 6.12s/it] 1%| | 6/563 [00:30<38:13, 4.12s/it] [2024-08-08 16:30:34] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
1%| | 6/563 [00:31<38:13, 4.12s/it] 1%| | 7/563 [00:34<37:23, 4.04s/it] [2024-08-08 16:30:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.post_attention_layernorm.weight", shape: (8192,), dtype: float16
1%| | 7/563 [00:34<37:23, 4.04s/it] 1%|▏ | 8/563 [00:34<25:47, 2.79s/it] [2024-08-08 16:30:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.self_attn.c_attn.bias", shape: (10240,), dtype: float16
1%|▏ | 8/563 [00:34<25:47, 2.79s/it] [2024-08-08 16:30:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
1%|▏ | 8/563 [00:34<25:47, 2.79s/it] 2%|▏ | 10/563 [00:34<14:46, 1.60s/it] [2024-08-08 16:30:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
2%|▏ | 10/563 [00:35<14:46, 1.60s/it] 2%|▏ | 11/563 [00:35<12:04, 1.31s/it] [2024-08-08 16:30:37] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00001-of-00037.safetensors
2%|▏ | 11/563 [00:35<12:04, 1.31s/it] [2024-08-08 16:30:37] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00002-of-00037.safetensors
2%|▏ | 11/563 [00:35<12:04, 1.31s/it] [2024-08-08 16:30:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.0.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
2%|▏ | 11/563 [00:37<12:04, 1.31s/it] 2%|▏ | 12/563 [00:38<17:15, 1.88s/it] [2024-08-08 16:30:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.input_layernorm.weight", shape: (8192,), dtype: float16
2%|▏ | 12/563 [00:38<17:15, 1.88s/it] [2024-08-08 16:30:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
2%|▏ | 12/563 [00:39<17:15, 1.88s/it] 2%|▏ | 14/563 [00:40<13:07, 1.43s/it] [2024-08-08 16:30:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
2%|▏ | 14/563 [00:42<13:07, 1.43s/it] 3%|β–Ž | 15/563 [00:44<18:24, 2.02s/it] [2024-08-08 16:30:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.post_attention_layernorm.weight", shape: (8192,), dtype: float16
3%|β–Ž | 15/563 [00:44<18:24, 2.02s/it] 3%|β–Ž | 16/563 [00:44<14:01, 1.54s/it] [2024-08-08 16:30:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.self_attn.c_attn.bias", shape: (10240,), dtype: float16
3%|β–Ž | 16/563 [00:44<14:01, 1.54s/it] [2024-08-08 16:30:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
3%|β–Ž | 16/563 [00:44<14:01, 1.54s/it] 3%|β–Ž | 18/563 [00:45<09:20, 1.03s/it] [2024-08-08 16:30:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.1.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
3%|β–Ž | 18/563 [00:45<09:20, 1.03s/it] 3%|β–Ž | 19/563 [00:45<08:03, 1.12it/s] [2024-08-08 16:30:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.input_layernorm.weight", shape: (8192,), dtype: float16
3%|β–Ž | 19/563 [00:45<08:03, 1.12it/s] [2024-08-08 16:30:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
3%|β–Ž | 19/563 [00:46<08:03, 1.12it/s] 4%|β–Ž | 21/563 [00:47<07:49, 1.15it/s] [2024-08-08 16:30:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
4%|β–Ž | 21/563 [00:48<07:49, 1.15it/s] 4%|▍ | 22/563 [00:51<13:44, 1.52s/it] [2024-08-08 16:30:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.post_attention_layernorm.weight", shape: (8192,), dtype: float16
4%|▍ | 22/563 [00:51<13:44, 1.52s/it] 4%|▍ | 23/563 [00:51<10:40, 1.19s/it] [2024-08-08 16:30:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.self_attn.c_attn.bias", shape: (10240,), dtype: float16
4%|▍ | 23/563 [00:51<10:40, 1.19s/it] [2024-08-08 16:30:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
4%|▍ | 23/563 [00:51<10:40, 1.19s/it] 4%|▍ | 25/563 [00:51<07:24, 1.21it/s] [2024-08-08 16:30:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.2.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
4%|▍ | 25/563 [00:51<07:24, 1.21it/s] 5%|▍ | 26/563 [00:52<06:35, 1.36it/s] [2024-08-08 16:30:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.input_layernorm.weight", shape: (8192,), dtype: float16
5%|▍ | 26/563 [00:52<06:35, 1.36it/s] [2024-08-08 16:30:54] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00002-of-00037.safetensors
5%|▍ | 26/563 [00:52<06:35, 1.36it/s] [2024-08-08 16:30:54] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00006-of-00037.safetensors
5%|▍ | 26/563 [00:52<06:35, 1.36it/s] [2024-08-08 16:30:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.input_layernorm.weight", shape: (8192,), dtype: float16
5%|▍ | 26/563 [00:53<06:35, 1.36it/s] 5%|▍ | 28/563 [00:53<06:37, 1.35it/s] [2024-08-08 16:30:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
5%|▍ | 28/563 [00:54<06:37, 1.35it/s] 5%|β–Œ | 29/563 [00:55<08:25, 1.06it/s] [2024-08-08 16:30:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
5%|β–Œ | 29/563 [00:57<08:25, 1.06it/s] 5%|β–Œ | 30/563 [00:59<14:56, 1.68s/it] [2024-08-08 16:31:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.post_attention_layernorm.weight", shape: (8192,), dtype: float16
5%|β–Œ | 30/563 [00:59<14:56, 1.68s/it] 6%|β–Œ | 31/563 [00:59<11:24, 1.29s/it] [2024-08-08 16:31:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.self_attn.c_attn.bias", shape: (10240,), dtype: float16
6%|β–Œ | 31/563 [00:59<11:24, 1.29s/it] [2024-08-08 16:31:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
6%|β–Œ | 31/563 [00:59<11:24, 1.29s/it] 6%|β–Œ | 33/563 [01:00<07:40, 1.15it/s] [2024-08-08 16:31:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.10.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
6%|β–Œ | 33/563 [01:00<07:40, 1.15it/s] 6%|β–Œ | 34/563 [01:00<06:43, 1.31it/s] [2024-08-08 16:31:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.input_layernorm.weight", shape: (8192,), dtype: float16
6%|β–Œ | 34/563 [01:00<06:43, 1.31it/s] [2024-08-08 16:31:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
6%|β–Œ | 34/563 [01:01<06:43, 1.31it/s] 6%|β–‹ | 36/563 [01:02<06:58, 1.26it/s] [2024-08-08 16:31:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
6%|β–‹ | 36/563 [01:03<06:58, 1.26it/s] 7%|β–‹ | 37/563 [01:06<12:59, 1.48s/it] [2024-08-08 16:31:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.post_attention_layernorm.weight", shape: (8192,), dtype: float16
7%|β–‹ | 37/563 [01:06<12:59, 1.48s/it] 7%|β–‹ | 38/563 [01:06<10:06, 1.15s/it] [2024-08-08 16:31:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.self_attn.c_attn.bias", shape: (10240,), dtype: float16
7%|β–‹ | 38/563 [01:06<10:06, 1.15s/it] [2024-08-08 16:31:08] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
7%|β–‹ | 38/563 [01:06<10:06, 1.15s/it] 7%|β–‹ | 40/563 [01:06<06:59, 1.25it/s] [2024-08-08 16:31:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.11.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
7%|β–‹ | 40/563 [01:06<06:59, 1.25it/s] 7%|β–‹ | 41/563 [01:07<06:15, 1.39it/s] [2024-08-08 16:31:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.input_layernorm.weight", shape: (8192,), dtype: float16
7%|β–‹ | 41/563 [01:07<06:15, 1.39it/s] [2024-08-08 16:31:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.9.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
7%|β–‹ | 41/563 [01:07<06:15, 1.39it/s] 8%|β–Š | 43/563 [01:08<06:38, 1.30it/s] [2024-08-08 16:31:11] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00006-of-00037.safetensors
8%|β–Š | 43/563 [01:08<06:38, 1.30it/s] [2024-08-08 16:31:11] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00007-of-00037.safetensors
8%|β–Š | 43/563 [01:09<06:38, 1.30it/s] [2024-08-08 16:31:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
8%|β–Š | 43/563 [01:11<06:38, 1.30it/s] 8%|β–Š | 44/563 [01:12<12:23, 1.43s/it] [2024-08-08 16:31:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
8%|β–Š | 44/563 [01:14<12:23, 1.43s/it] 8%|β–Š | 45/563 [01:16<17:46, 2.06s/it] [2024-08-08 16:31:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.post_attention_layernorm.weight", shape: (8192,), dtype: float16
8%|β–Š | 45/563 [01:16<17:46, 2.06s/it] 8%|β–Š | 46/563 [01:16<13:29, 1.57s/it] [2024-08-08 16:31:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.self_attn.c_attn.bias", shape: (10240,), dtype: float16
8%|β–Š | 46/563 [01:16<13:29, 1.57s/it] [2024-08-08 16:31:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
8%|β–Š | 46/563 [01:17<13:29, 1.57s/it] 9%|β–Š | 48/563 [01:17<08:49, 1.03s/it] [2024-08-08 16:31:20] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.12.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
9%|β–Š | 48/563 [01:17<08:49, 1.03s/it] 9%|β–Š | 49/563 [01:17<07:38, 1.12it/s] [2024-08-08 16:31:20] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.input_layernorm.weight", shape: (8192,), dtype: float16
9%|β–Š | 49/563 [01:17<07:38, 1.12it/s] [2024-08-08 16:31:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
9%|β–Š | 49/563 [01:18<07:38, 1.12it/s] 9%|β–‰ | 51/563 [01:19<07:27, 1.15it/s] [2024-08-08 16:31:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
9%|β–‰ | 51/563 [01:21<07:27, 1.15it/s] 9%|β–‰ | 52/563 [01:23<12:53, 1.51s/it] [2024-08-08 16:31:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.post_attention_layernorm.weight", shape: (8192,), dtype: float16
9%|β–‰ | 52/563 [01:23<12:53, 1.51s/it] 9%|β–‰ | 53/563 [01:23<10:01, 1.18s/it] [2024-08-08 16:31:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.self_attn.c_attn.bias", shape: (10240,), dtype: float16
9%|β–‰ | 53/563 [01:23<10:01, 1.18s/it] [2024-08-08 16:31:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
9%|β–‰ | 53/563 [01:23<10:01, 1.18s/it] 10%|β–‰ | 55/563 [01:24<06:54, 1.23it/s] [2024-08-08 16:31:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.13.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
10%|β–‰ | 55/563 [01:24<06:54, 1.23it/s] 10%|β–‰ | 56/563 [01:24<06:10, 1.37it/s] [2024-08-08 16:31:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.input_layernorm.weight", shape: (8192,), dtype: float16
10%|β–‰ | 56/563 [01:24<06:10, 1.37it/s] [2024-08-08 16:31:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.post_attention_layernorm.weight", shape: (8192,), dtype: float16
10%|β–‰ | 56/563 [01:24<06:10, 1.37it/s] [2024-08-08 16:31:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.self_attn.c_attn.bias", shape: (10240,), dtype: float16
10%|β–‰ | 56/563 [01:24<06:10, 1.37it/s] [2024-08-08 16:31:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
10%|β–‰ | 56/563 [01:24<06:10, 1.37it/s] 11%|β–ˆ | 60/563 [01:25<03:23, 2.48it/s] [2024-08-08 16:31:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
11%|β–ˆ | 60/563 [01:25<03:23, 2.48it/s] 11%|β–ˆ | 61/563 [01:25<03:26, 2.43it/s] [2024-08-08 16:31:28] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00007-of-00037.safetensors
11%|β–ˆ | 61/563 [01:25<03:26, 2.43it/s] [2024-08-08 16:31:28] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00008-of-00037.safetensors
11%|β–ˆ | 61/563 [01:25<03:26, 2.43it/s] [2024-08-08 16:31:30] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
11%|β–ˆ | 61/563 [01:28<03:26, 2.43it/s] 11%|β–ˆ | 62/563 [01:29<08:57, 1.07s/it] [2024-08-08 16:31:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.14.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
11%|β–ˆ | 62/563 [01:30<08:57, 1.07s/it] 11%|β–ˆ | 63/563 [01:33<13:32, 1.62s/it] [2024-08-08 16:31:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.input_layernorm.weight", shape: (8192,), dtype: float16
11%|β–ˆ | 63/563 [01:33<13:32, 1.62s/it] 11%|β–ˆβ– | 64/563 [01:33<10:32, 1.27s/it] [2024-08-08 16:31:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
11%|β–ˆβ– | 64/563 [01:33<10:32, 1.27s/it] 12%|β–ˆβ– | 65/563 [01:34<11:16, 1.36s/it] [2024-08-08 16:31:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
12%|β–ˆβ– | 65/563 [01:36<11:16, 1.36s/it] 12%|β–ˆβ– | 66/563 [01:38<16:15, 1.96s/it] [2024-08-08 16:31:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.post_attention_layernorm.weight", shape: (8192,), dtype: float16
12%|β–ˆβ– | 66/563 [01:38<16:15, 1.96s/it] 12%|β–ˆβ– | 67/563 [01:38<12:02, 1.46s/it] [2024-08-08 16:31:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.self_attn.c_attn.bias", shape: (10240,), dtype: float16
12%|β–ˆβ– | 67/563 [01:38<12:02, 1.46s/it] [2024-08-08 16:31:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
12%|β–ˆβ– | 67/563 [01:38<12:02, 1.46s/it] 12%|β–ˆβ– | 69/563 [01:39<07:46, 1.06it/s] [2024-08-08 16:31:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.15.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
12%|β–ˆβ– | 69/563 [01:39<07:46, 1.06it/s] 12%|β–ˆβ– | 70/563 [01:39<06:45, 1.21it/s] [2024-08-08 16:31:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.input_layernorm.weight", shape: (8192,), dtype: float16
12%|β–ˆβ– | 70/563 [01:39<06:45, 1.21it/s] [2024-08-08 16:31:42] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00009-of-00037.safetensors
12%|β–ˆβ– | 70/563 [01:39<06:45, 1.21it/s] [2024-08-08 16:31:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
12%|β–ˆβ– | 70/563 [01:44<06:45, 1.21it/s] 13%|β–ˆβ–Ž | 72/563 [01:46<15:41, 1.92s/it] [2024-08-08 16:31:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.post_attention_layernorm.weight", shape: (8192,), dtype: float16
13%|β–ˆβ–Ž | 72/563 [01:46<15:41, 1.92s/it] 13%|β–ˆβ–Ž | 73/563 [01:46<12:21, 1.51s/it] [2024-08-08 16:31:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.self_attn.c_attn.bias", shape: (10240,), dtype: float16
13%|β–ˆβ–Ž | 73/563 [01:46<12:21, 1.51s/it] [2024-08-08 16:31:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
13%|β–ˆβ–Ž | 73/563 [01:46<12:21, 1.51s/it] 13%|β–ˆβ–Ž | 75/563 [01:47<08:24, 1.03s/it] [2024-08-08 16:31:49] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
13%|β–ˆβ–Ž | 75/563 [01:47<08:24, 1.03s/it] 13%|β–ˆβ–Ž | 76/563 [01:47<07:17, 1.11it/s] [2024-08-08 16:31:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.16.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
13%|β–ˆβ–Ž | 76/563 [01:48<07:17, 1.11it/s] 14%|β–ˆβ–Ž | 77/563 [01:49<08:39, 1.07s/it] [2024-08-08 16:31:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.input_layernorm.weight", shape: (8192,), dtype: float16
14%|β–ˆβ–Ž | 77/563 [01:49<08:39, 1.07s/it] [2024-08-08 16:31:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
14%|β–ˆβ–Ž | 77/563 [01:49<08:39, 1.07s/it] 14%|β–ˆβ– | 79/563 [01:50<07:46, 1.04it/s] [2024-08-08 16:31:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
14%|β–ˆβ– | 79/563 [01:52<07:46, 1.04it/s] 14%|β–ˆβ– | 80/563 [01:54<12:22, 1.54s/it] [2024-08-08 16:31:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.post_attention_layernorm.weight", shape: (8192,), dtype: float16
14%|β–ˆβ– | 80/563 [01:54<12:22, 1.54s/it] 14%|β–ˆβ– | 81/563 [01:54<09:33, 1.19s/it] [2024-08-08 16:31:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.self_attn.c_attn.bias", shape: (10240,), dtype: float16
14%|β–ˆβ– | 81/563 [01:54<09:33, 1.19s/it] [2024-08-08 16:31:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
14%|β–ˆβ– | 81/563 [01:54<09:33, 1.19s/it] 15%|β–ˆβ– | 83/563 [01:55<06:33, 1.22it/s] [2024-08-08 16:31:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.17.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
15%|β–ˆβ– | 83/563 [01:55<06:33, 1.22it/s] 15%|β–ˆβ– | 84/563 [01:55<05:50, 1.37it/s] [2024-08-08 16:31:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.input_layernorm.weight", shape: (8192,), dtype: float16
15%|β–ˆβ– | 84/563 [01:55<05:50, 1.37it/s] [2024-08-08 16:31:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
15%|β–ˆβ– | 84/563 [01:56<05:50, 1.37it/s] 15%|β–ˆβ–Œ | 86/563 [01:59<08:54, 1.12s/it] [2024-08-08 16:32:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.post_attention_layernorm.weight", shape: (8192,), dtype: float16
15%|β–ˆβ–Œ | 86/563 [01:59<08:54, 1.12s/it] 15%|β–ˆβ–Œ | 87/563 [01:59<07:08, 1.11it/s] [2024-08-08 16:32:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.self_attn.c_attn.bias", shape: (10240,), dtype: float16
15%|β–ˆβ–Œ | 87/563 [01:59<07:08, 1.11it/s] [2024-08-08 16:32:01] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
15%|β–ˆβ–Œ | 87/563 [01:59<07:08, 1.11it/s] 16%|β–ˆβ–Œ | 89/563 [01:59<05:13, 1.51it/s] [2024-08-08 16:32:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
16%|β–ˆβ–Œ | 89/563 [01:59<05:13, 1.51it/s] 16%|β–ˆβ–Œ | 90/563 [02:00<04:48, 1.64it/s] [2024-08-08 16:32:02] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00009-of-00037.safetensors
16%|β–ˆβ–Œ | 90/563 [02:00<04:48, 1.64it/s] [2024-08-08 16:32:02] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00008-of-00037.safetensors
16%|β–ˆβ–Œ | 90/563 [02:00<04:48, 1.64it/s] [2024-08-08 16:32:02] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00010-of-00037.safetensors
16%|β–ˆβ–Œ | 90/563 [02:00<04:48, 1.64it/s] [2024-08-08 16:32:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.18.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
16%|β–ˆβ–Œ | 90/563 [02:02<04:48, 1.64it/s] 16%|β–ˆβ–Œ | 91/563 [02:03<10:13, 1.30s/it] [2024-08-08 16:32:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.input_layernorm.weight", shape: (8192,), dtype: float16
16%|β–ˆβ–Œ | 91/563 [02:03<10:13, 1.30s/it] [2024-08-08 16:32:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
16%|β–ˆβ–Œ | 91/563 [02:04<10:13, 1.30s/it] 17%|β–ˆβ–‹ | 93/563 [02:05<08:36, 1.10s/it] [2024-08-08 16:32:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
17%|β–ˆβ–‹ | 93/563 [02:06<08:36, 1.10s/it] 17%|β–ˆβ–‹ | 94/563 [02:08<12:40, 1.62s/it] [2024-08-08 16:32:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.post_attention_layernorm.weight", shape: (8192,), dtype: float16
17%|β–ˆβ–‹ | 94/563 [02:08<12:40, 1.62s/it] 17%|β–ˆβ–‹ | 95/563 [02:08<09:48, 1.26s/it] [2024-08-08 16:32:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.self_attn.c_attn.bias", shape: (10240,), dtype: float16
17%|β–ˆβ–‹ | 95/563 [02:08<09:48, 1.26s/it] [2024-08-08 16:32:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
17%|β–ˆβ–‹ | 95/563 [02:09<09:48, 1.26s/it] 17%|β–ˆβ–‹ | 97/563 [02:09<06:41, 1.16it/s] [2024-08-08 16:32:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.19.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
17%|β–ˆβ–‹ | 97/563 [02:09<06:41, 1.16it/s] 17%|β–ˆβ–‹ | 98/563 [02:09<05:53, 1.31it/s] [2024-08-08 16:32:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.input_layernorm.weight", shape: (8192,), dtype: float16
17%|β–ˆβ–‹ | 98/563 [02:09<05:53, 1.31it/s] [2024-08-08 16:32:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
17%|β–ˆβ–‹ | 98/563 [02:10<05:53, 1.31it/s] 18%|β–ˆβ–Š | 100/563 [02:11<05:59, 1.29it/s] [2024-08-08 16:32:15] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
18%|β–ˆβ–Š | 100/563 [02:12<05:59, 1.29it/s] 18%|β–ˆβ–Š | 101/563 [02:14<10:31, 1.37s/it] [2024-08-08 16:32:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.post_attention_layernorm.weight", shape: (8192,), dtype: float16
18%|β–ˆβ–Š | 101/563 [02:14<10:31, 1.37s/it] 18%|β–ˆβ–Š | 102/563 [02:15<08:13, 1.07s/it] [2024-08-08 16:32:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.self_attn.c_attn.bias", shape: (10240,), dtype: float16
18%|β–ˆβ–Š | 102/563 [02:15<08:13, 1.07s/it] [2024-08-08 16:32:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
18%|β–ˆβ–Š | 102/563 [02:15<08:13, 1.07s/it] 18%|β–ˆβ–Š | 104/563 [02:15<05:45, 1.33it/s] [2024-08-08 16:32:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.20.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
18%|β–ˆβ–Š | 104/563 [02:15<05:45, 1.33it/s] 19%|β–ˆβ–Š | 105/563 [02:16<05:11, 1.47it/s] [2024-08-08 16:32:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.input_layernorm.weight", shape: (8192,), dtype: float16
19%|β–ˆβ–Š | 105/563 [02:16<05:11, 1.47it/s] [2024-08-08 16:32:18] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00010-of-00037.safetensors
19%|β–ˆβ–Š | 105/563 [02:16<05:11, 1.47it/s] [2024-08-08 16:32:18] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00011-of-00037.safetensors
19%|β–ˆβ–Š | 105/563 [02:16<05:11, 1.47it/s] [2024-08-08 16:32:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
19%|β–ˆβ–Š | 105/563 [02:18<05:11, 1.47it/s] 19%|β–ˆβ–‰ | 107/563 [02:19<08:23, 1.10s/it] [2024-08-08 16:32:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
19%|β–ˆβ–‰ | 107/563 [02:20<08:23, 1.10s/it] 19%|β–ˆβ–‰ | 108/563 [02:23<12:17, 1.62s/it] [2024-08-08 16:32:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.post_attention_layernorm.weight", shape: (8192,), dtype: float16
19%|β–ˆβ–‰ | 108/563 [02:23<12:17, 1.62s/it] 19%|β–ˆβ–‰ | 109/563 [02:23<09:33, 1.26s/it] [2024-08-08 16:32:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.self_attn.c_attn.bias", shape: (10240,), dtype: float16
19%|β–ˆβ–‰ | 109/563 [02:23<09:33, 1.26s/it] [2024-08-08 16:32:25] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
19%|β–ˆβ–‰ | 109/563 [02:23<09:33, 1.26s/it] 20%|β–ˆβ–‰ | 111/563 [02:23<06:32, 1.15it/s] [2024-08-08 16:32:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.21.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
20%|β–ˆβ–‰ | 111/563 [02:23<06:32, 1.15it/s] 20%|β–ˆβ–‰ | 112/563 [02:24<05:46, 1.30it/s] [2024-08-08 16:32:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.input_layernorm.weight", shape: (8192,), dtype: float16
20%|β–ˆβ–‰ | 112/563 [02:24<05:46, 1.30it/s] [2024-08-08 16:32:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
20%|β–ˆβ–‰ | 112/563 [02:24<05:46, 1.30it/s] 20%|β–ˆβ–ˆ | 114/563 [02:25<05:50, 1.28it/s] [2024-08-08 16:32:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
20%|β–ˆβ–ˆ | 114/563 [02:27<05:50, 1.28it/s] 20%|β–ˆβ–ˆ | 115/563 [02:29<10:15, 1.37s/it] [2024-08-08 16:32:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.post_attention_layernorm.weight", shape: (8192,), dtype: float16
20%|β–ˆβ–ˆ | 115/563 [02:29<10:15, 1.37s/it] 21%|β–ˆβ–ˆ | 116/563 [02:29<08:01, 1.08s/it] [2024-08-08 16:32:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.self_attn.c_attn.bias", shape: (10240,), dtype: float16
21%|β–ˆβ–ˆ | 116/563 [02:29<08:01, 1.08s/it] [2024-08-08 16:32:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
21%|β–ˆβ–ˆ | 116/563 [02:29<08:01, 1.08s/it] 21%|β–ˆβ–ˆ | 118/563 [02:29<05:36, 1.32it/s] [2024-08-08 16:32:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.22.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
21%|β–ˆβ–ˆ | 118/563 [02:30<05:36, 1.32it/s] 21%|β–ˆβ–ˆ | 119/563 [02:30<05:02, 1.47it/s] [2024-08-08 16:32:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.input_layernorm.weight", shape: (8192,), dtype: float16
21%|β–ˆβ–ˆ | 119/563 [02:30<05:02, 1.47it/s] [2024-08-08 16:32:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.post_attention_layernorm.weight", shape: (8192,), dtype: float16
21%|β–ˆβ–ˆ | 119/563 [02:30<05:02, 1.47it/s] [2024-08-08 16:32:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.self_attn.c_attn.bias", shape: (10240,), dtype: float16
21%|β–ˆβ–ˆ | 119/563 [02:30<05:02, 1.47it/s] [2024-08-08 16:32:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
21%|β–ˆβ–ˆ | 119/563 [02:30<05:02, 1.47it/s] 22%|β–ˆβ–ˆβ– | 123/563 [02:30<02:47, 2.63it/s] [2024-08-08 16:32:33] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
22%|β–ˆβ–ˆβ– | 123/563 [02:31<02:47, 2.63it/s] 22%|β–ˆβ–ˆβ– | 124/563 [02:31<02:50, 2.58it/s] [2024-08-08 16:32:33] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00011-of-00037.safetensors
22%|β–ˆβ–ˆβ– | 124/563 [02:31<02:50, 2.58it/s] [2024-08-08 16:32:33] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00012-of-00037.safetensors
22%|β–ˆβ–ˆβ– | 124/563 [02:31<02:50, 2.58it/s] [2024-08-08 16:32:36] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
22%|β–ˆβ–ˆβ– | 124/563 [02:34<02:50, 2.58it/s] 22%|β–ˆβ–ˆβ– | 125/563 [02:35<07:27, 1.02s/it] [2024-08-08 16:32:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.23.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
22%|β–ˆβ–ˆβ– | 125/563 [02:36<07:27, 1.02s/it] 22%|β–ˆβ–ˆβ– | 126/563 [02:38<11:19, 1.56s/it] [2024-08-08 16:32:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.input_layernorm.weight", shape: (8192,), dtype: float16
22%|β–ˆβ–ˆβ– | 126/563 [02:38<11:19, 1.56s/it] 23%|β–ˆβ–ˆβ–Ž | 127/563 [02:38<08:49, 1.22s/it] [2024-08-08 16:32:41] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
23%|β–ˆβ–ˆβ–Ž | 127/563 [02:39<08:49, 1.22s/it] 23%|β–ˆβ–ˆβ–Ž | 128/563 [02:40<09:27, 1.30s/it] [2024-08-08 16:32:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
23%|β–ˆβ–ˆβ–Ž | 128/563 [02:41<09:27, 1.30s/it] 23%|β–ˆβ–ˆβ–Ž | 129/563 [02:43<13:28, 1.86s/it] [2024-08-08 16:32:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.post_attention_layernorm.weight", shape: (8192,), dtype: float16
23%|β–ˆβ–ˆβ–Ž | 129/563 [02:43<13:28, 1.86s/it] 23%|β–ˆβ–ˆβ–Ž | 130/563 [02:43<09:59, 1.39s/it] [2024-08-08 16:32:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.self_attn.c_attn.bias", shape: (10240,), dtype: float16
23%|β–ˆβ–ˆβ–Ž | 130/563 [02:43<09:59, 1.39s/it] [2024-08-08 16:32:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
23%|β–ˆβ–ˆβ–Ž | 130/563 [02:43<09:59, 1.39s/it] 23%|β–ˆβ–ˆβ–Ž | 132/563 [02:44<06:27, 1.11it/s] [2024-08-08 16:32:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.24.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
23%|β–ˆβ–ˆβ–Ž | 132/563 [02:44<06:27, 1.11it/s] 24%|β–ˆβ–ˆβ–Ž | 133/563 [02:44<05:38, 1.27it/s] [2024-08-08 16:32:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.input_layernorm.weight", shape: (8192,), dtype: float16
24%|β–ˆβ–ˆβ–Ž | 133/563 [02:44<05:38, 1.27it/s] [2024-08-08 16:32:47] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00013-of-00037.safetensors
24%|β–ˆβ–ˆβ–Ž | 133/563 [02:44<05:38, 1.27it/s] [2024-08-08 16:32:52] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
24%|β–ˆβ–ˆβ–Ž | 133/563 [02:49<05:38, 1.27it/s] 24%|β–ˆβ–ˆβ– | 135/563 [02:51<13:49, 1.94s/it] [2024-08-08 16:32:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.post_attention_layernorm.weight", shape: (8192,), dtype: float16
24%|β–ˆβ–ˆβ– | 135/563 [02:52<13:49, 1.94s/it] 24%|β–ˆβ–ˆβ– | 136/563 [02:52<10:52, 1.53s/it] [2024-08-08 16:32:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.self_attn.c_attn.bias", shape: (10240,), dtype: float16
24%|β–ˆβ–ˆβ– | 136/563 [02:52<10:52, 1.53s/it] [2024-08-08 16:32:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
24%|β–ˆβ–ˆβ– | 136/563 [02:52<10:52, 1.53s/it] 25%|β–ˆβ–ˆβ– | 138/563 [02:52<07:23, 1.04s/it] [2024-08-08 16:32:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
25%|β–ˆβ–ˆβ– | 138/563 [02:52<07:23, 1.04s/it] 25%|β–ˆβ–ˆβ– | 139/563 [02:53<06:25, 1.10it/s] [2024-08-08 16:32:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.25.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
25%|β–ˆβ–ˆβ– | 139/563 [02:53<06:25, 1.10it/s] 25%|β–ˆβ–ˆβ– | 140/563 [02:54<07:30, 1.07s/it] [2024-08-08 16:32:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.input_layernorm.weight", shape: (8192,), dtype: float16
25%|β–ˆβ–ˆβ– | 140/563 [02:54<07:30, 1.07s/it] [2024-08-08 16:32:57] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
25%|β–ˆβ–ˆβ– | 140/563 [02:55<07:30, 1.07s/it] 25%|β–ˆβ–ˆβ–Œ | 142/563 [02:56<06:43, 1.04it/s] [2024-08-08 16:33:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
25%|β–ˆβ–ˆβ–Œ | 142/563 [02:57<06:43, 1.04it/s] 25%|β–ˆβ–ˆβ–Œ | 143/563 [02:59<10:42, 1.53s/it] [2024-08-08 16:33:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.post_attention_layernorm.weight", shape: (8192,), dtype: float16
25%|β–ˆβ–ˆβ–Œ | 143/563 [02:59<10:42, 1.53s/it] 26%|β–ˆβ–ˆβ–Œ | 144/563 [02:59<08:17, 1.19s/it] [2024-08-08 16:33:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.self_attn.c_attn.bias", shape: (10240,), dtype: float16
26%|β–ˆβ–ˆβ–Œ | 144/563 [02:59<08:17, 1.19s/it] [2024-08-08 16:33:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
26%|β–ˆβ–ˆβ–Œ | 144/563 [03:00<08:17, 1.19s/it] 26%|β–ˆβ–ˆβ–Œ | 146/563 [03:00<05:47, 1.20it/s] [2024-08-08 16:33:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.26.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
26%|β–ˆβ–ˆβ–Œ | 146/563 [03:00<05:47, 1.20it/s] 26%|β–ˆβ–ˆβ–Œ | 147/563 [03:01<05:13, 1.33it/s] [2024-08-08 16:33:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.input_layernorm.weight", shape: (8192,), dtype: float16
26%|β–ˆβ–ˆβ–Œ | 147/563 [03:01<05:13, 1.33it/s] [2024-08-08 16:33:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
26%|β–ˆβ–ˆβ–Œ | 147/563 [03:02<05:13, 1.33it/s] 26%|β–ˆβ–ˆβ–‹ | 149/563 [03:04<07:55, 1.15s/it] [2024-08-08 16:33:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.post_attention_layernorm.weight", shape: (8192,), dtype: float16
26%|β–ˆβ–ˆβ–‹ | 149/563 [03:04<07:55, 1.15s/it] 27%|β–ˆβ–ˆβ–‹ | 150/563 [03:04<06:21, 1.08it/s] [2024-08-08 16:33:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.self_attn.c_attn.bias", shape: (10240,), dtype: float16
27%|β–ˆβ–ˆβ–‹ | 150/563 [03:04<06:21, 1.08it/s] [2024-08-08 16:33:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
27%|β–ˆβ–ˆβ–‹ | 150/563 [03:04<06:21, 1.08it/s] 27%|β–ˆβ–ˆβ–‹ | 152/563 [03:05<04:37, 1.48it/s] [2024-08-08 16:33:07] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
27%|β–ˆβ–ˆβ–‹ | 152/563 [03:05<04:37, 1.48it/s] 27%|β–ˆβ–ˆβ–‹ | 153/563 [03:05<04:14, 1.61it/s] [2024-08-08 16:33:08] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00012-of-00037.safetensors
27%|β–ˆβ–ˆβ–‹ | 153/563 [03:05<04:14, 1.61it/s] [2024-08-08 16:33:08] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00013-of-00037.safetensors
27%|β–ˆβ–ˆβ–‹ | 153/563 [03:05<04:14, 1.61it/s] [2024-08-08 16:33:08] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00014-of-00037.safetensors
27%|β–ˆβ–ˆβ–‹ | 153/563 [03:05<04:14, 1.61it/s] [2024-08-08 16:33:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.27.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
27%|β–ˆβ–ˆβ–‹ | 153/563 [03:08<04:14, 1.61it/s] 27%|β–ˆβ–ˆβ–‹ | 154/563 [03:09<09:15, 1.36s/it] [2024-08-08 16:33:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.input_layernorm.weight", shape: (8192,), dtype: float16
27%|β–ˆβ–ˆβ–‹ | 154/563 [03:09<09:15, 1.36s/it] [2024-08-08 16:33:12] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
27%|β–ˆβ–ˆβ–‹ | 154/563 [03:09<09:15, 1.36s/it] 28%|β–ˆβ–ˆβ–Š | 156/563 [03:10<07:41, 1.13s/it] [2024-08-08 16:33:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
28%|β–ˆβ–ˆβ–Š | 156/563 [03:12<07:41, 1.13s/it] 28%|β–ˆβ–ˆβ–Š | 157/563 [03:14<11:11, 1.65s/it] [2024-08-08 16:33:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.post_attention_layernorm.weight", shape: (8192,), dtype: float16
28%|β–ˆβ–ˆβ–Š | 157/563 [03:14<11:11, 1.65s/it] 28%|β–ˆβ–ˆβ–Š | 158/563 [03:14<08:38, 1.28s/it] [2024-08-08 16:33:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.self_attn.c_attn.bias", shape: (10240,), dtype: float16
28%|β–ˆβ–ˆβ–Š | 158/563 [03:14<08:38, 1.28s/it] [2024-08-08 16:33:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
28%|β–ˆβ–ˆβ–Š | 158/563 [03:14<08:38, 1.28s/it] 28%|β–ˆβ–ˆβ–Š | 160/563 [03:15<05:52, 1.14it/s] [2024-08-08 16:33:17] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.28.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
28%|β–ˆβ–ˆβ–Š | 160/563 [03:15<05:52, 1.14it/s] 29%|β–ˆβ–ˆβ–Š | 161/563 [03:15<05:12, 1.29it/s] [2024-08-08 16:33:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.input_layernorm.weight", shape: (8192,), dtype: float16
29%|β–ˆβ–ˆβ–Š | 161/563 [03:15<05:12, 1.29it/s] [2024-08-08 16:33:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
29%|β–ˆβ–ˆβ–Š | 161/563 [03:16<05:12, 1.29it/s] 29%|β–ˆβ–ˆβ–‰ | 163/563 [03:17<05:13, 1.27it/s] [2024-08-08 16:33:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
29%|β–ˆβ–ˆβ–‰ | 163/563 [03:18<05:13, 1.27it/s] 29%|β–ˆβ–ˆβ–‰ | 164/563 [03:20<09:14, 1.39s/it] [2024-08-08 16:33:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.post_attention_layernorm.weight", shape: (8192,), dtype: float16
29%|β–ˆβ–ˆβ–‰ | 164/563 [03:20<09:14, 1.39s/it] 29%|β–ˆβ–ˆβ–‰ | 165/563 [03:20<07:12, 1.09s/it] [2024-08-08 16:33:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.self_attn.c_attn.bias", shape: (10240,), dtype: float16
29%|β–ˆβ–ˆβ–‰ | 165/563 [03:20<07:12, 1.09s/it] [2024-08-08 16:33:23] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
29%|β–ˆβ–ˆβ–‰ | 165/563 [03:21<07:12, 1.09s/it] 30%|β–ˆβ–ˆβ–‰ | 167/563 [03:21<05:01, 1.31it/s] [2024-08-08 16:33:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.29.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
30%|β–ˆβ–ˆβ–‰ | 167/563 [03:21<05:01, 1.31it/s] 30%|β–ˆβ–ˆβ–‰ | 168/563 [03:21<04:30, 1.46it/s] [2024-08-08 16:33:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.input_layernorm.weight", shape: (8192,), dtype: float16
30%|β–ˆβ–ˆβ–‰ | 168/563 [03:21<04:30, 1.46it/s] [2024-08-08 16:33:24] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00014-of-00037.safetensors
30%|β–ˆβ–ˆβ–‰ | 168/563 [03:21<04:30, 1.46it/s] [2024-08-08 16:33:24] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00003-of-00037.safetensors
30%|β–ˆβ–ˆβ–‰ | 168/563 [03:21<04:30, 1.46it/s] [2024-08-08 16:33:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
30%|β–ˆβ–ˆβ–‰ | 168/563 [03:24<04:30, 1.46it/s] 30%|β–ˆβ–ˆβ–ˆ | 170/563 [03:25<07:05, 1.08s/it] [2024-08-08 16:33:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
30%|β–ˆβ–ˆβ–ˆ | 170/563 [03:26<07:05, 1.08s/it] 30%|β–ˆβ–ˆβ–ˆ | 171/563 [03:28<10:35, 1.62s/it] [2024-08-08 16:33:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.post_attention_layernorm.weight", shape: (8192,), dtype: float16
30%|β–ˆβ–ˆβ–ˆ | 171/563 [03:28<10:35, 1.62s/it] 31%|β–ˆβ–ˆβ–ˆ | 172/563 [03:28<08:13, 1.26s/it] [2024-08-08 16:33:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.self_attn.c_attn.bias", shape: (10240,), dtype: float16
31%|β–ˆβ–ˆβ–ˆ | 172/563 [03:28<08:13, 1.26s/it] [2024-08-08 16:33:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
31%|β–ˆβ–ˆβ–ˆ | 172/563 [03:29<08:13, 1.26s/it] 31%|β–ˆβ–ˆβ–ˆ | 174/563 [03:29<05:39, 1.15it/s] [2024-08-08 16:33:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.3.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
31%|β–ˆβ–ˆβ–ˆ | 174/563 [03:29<05:39, 1.15it/s] 31%|β–ˆβ–ˆβ–ˆ | 175/563 [03:29<05:00, 1.29it/s] [2024-08-08 16:33:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.input_layernorm.weight", shape: (8192,), dtype: float16
31%|β–ˆβ–ˆβ–ˆ | 175/563 [03:29<05:00, 1.29it/s] [2024-08-08 16:33:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
31%|β–ˆβ–ˆβ–ˆ | 175/563 [03:30<05:00, 1.29it/s] 31%|β–ˆβ–ˆβ–ˆβ– | 177/563 [03:31<05:04, 1.27it/s] [2024-08-08 16:33:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
31%|β–ˆβ–ˆβ–ˆβ– | 177/563 [03:32<05:04, 1.27it/s] 32%|β–ˆβ–ˆβ–ˆβ– | 178/563 [03:35<09:00, 1.40s/it] [2024-08-08 16:33:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.post_attention_layernorm.weight", shape: (8192,), dtype: float16
32%|β–ˆβ–ˆβ–ˆβ– | 178/563 [03:35<09:00, 1.40s/it] 32%|β–ˆβ–ˆβ–ˆβ– | 179/563 [03:35<07:01, 1.10s/it] [2024-08-08 16:33:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.self_attn.c_attn.bias", shape: (10240,), dtype: float16
32%|β–ˆβ–ˆβ–ˆβ– | 179/563 [03:35<07:01, 1.10s/it] [2024-08-08 16:33:37] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
32%|β–ˆβ–ˆβ–ˆβ– | 179/563 [03:35<07:01, 1.10s/it] 32%|β–ˆβ–ˆβ–ˆβ– | 181/563 [03:35<04:56, 1.29it/s] [2024-08-08 16:33:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.4.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
32%|β–ˆβ–ˆβ–ˆβ– | 181/563 [03:36<04:56, 1.29it/s] 32%|β–ˆβ–ˆβ–ˆβ– | 182/563 [03:36<04:28, 1.42it/s] [2024-08-08 16:33:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.input_layernorm.weight", shape: (8192,), dtype: float16
32%|β–ˆβ–ˆβ–ˆβ– | 182/563 [03:36<04:28, 1.42it/s] [2024-08-08 16:33:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.post_attention_layernorm.weight", shape: (8192,), dtype: float16
32%|β–ˆβ–ˆβ–ˆβ– | 182/563 [03:36<04:28, 1.42it/s] [2024-08-08 16:33:38] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.self_attn.c_attn.bias", shape: (10240,), dtype: float16
32%|β–ˆβ–ˆβ–ˆβ– | 182/563 [03:36<04:28, 1.42it/s] [2024-08-08 16:33:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
32%|β–ˆβ–ˆβ–ˆβ– | 182/563 [03:36<04:28, 1.42it/s] 33%|β–ˆβ–ˆβ–ˆβ–Ž | 186/563 [03:36<02:29, 2.53it/s] [2024-08-08 16:33:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.5.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
33%|β–ˆβ–ˆβ–ˆβ–Ž | 186/563 [03:37<02:29, 2.53it/s] 33%|β–ˆβ–ˆβ–ˆβ–Ž | 187/563 [03:37<02:31, 2.48it/s] [2024-08-08 16:33:39] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00003-of-00037.safetensors
33%|β–ˆβ–ˆβ–ˆβ–Ž | 187/563 [03:37<02:31, 2.48it/s] [2024-08-08 16:33:40] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00015-of-00037.safetensors
33%|β–ˆβ–ˆβ–ˆβ–Ž | 187/563 [03:37<02:31, 2.48it/s] [2024-08-08 16:33:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
33%|β–ˆβ–ˆβ–ˆβ–Ž | 187/563 [03:40<02:31, 2.48it/s] 33%|β–ˆβ–ˆβ–ˆβ–Ž | 188/563 [03:41<06:28, 1.04s/it] [2024-08-08 16:33:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
33%|β–ˆβ–ˆβ–ˆβ–Ž | 188/563 [03:42<06:28, 1.04s/it] 34%|β–ˆβ–ˆβ–ˆβ–Ž | 189/563 [03:44<09:50, 1.58s/it] [2024-08-08 16:33:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.post_attention_layernorm.weight", shape: (8192,), dtype: float16
34%|β–ˆβ–ˆβ–ˆβ–Ž | 189/563 [03:44<09:50, 1.58s/it] 34%|β–ˆβ–ˆβ–ˆβ–Ž | 190/563 [03:44<07:41, 1.24s/it] [2024-08-08 16:33:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.self_attn.c_attn.bias", shape: (10240,), dtype: float16
34%|β–ˆβ–ˆβ–ˆβ–Ž | 190/563 [03:44<07:41, 1.24s/it] [2024-08-08 16:33:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
34%|β–ˆβ–ˆβ–ˆβ–Ž | 190/563 [03:44<07:41, 1.24s/it] 34%|β–ˆβ–ˆβ–ˆβ– | 192/563 [03:45<05:17, 1.17it/s] [2024-08-08 16:33:47] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.30.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
34%|β–ˆβ–ˆβ–ˆβ– | 192/563 [03:45<05:17, 1.17it/s] 34%|β–ˆβ–ˆβ–ˆβ– | 193/563 [03:45<04:42, 1.31it/s] [2024-08-08 16:33:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.input_layernorm.weight", shape: (8192,), dtype: float16
34%|β–ˆβ–ˆβ–ˆβ– | 193/563 [03:45<04:42, 1.31it/s] [2024-08-08 16:33:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
34%|β–ˆβ–ˆβ–ˆβ– | 193/563 [03:46<04:42, 1.31it/s] 35%|β–ˆβ–ˆβ–ˆβ– | 195/563 [03:47<04:47, 1.28it/s] [2024-08-08 16:33:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
35%|β–ˆβ–ˆβ–ˆβ– | 195/563 [03:48<04:47, 1.28it/s] 35%|β–ˆβ–ˆβ–ˆβ– | 196/563 [03:50<08:26, 1.38s/it] [2024-08-08 16:33:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.post_attention_layernorm.weight", shape: (8192,), dtype: float16
35%|β–ˆβ–ˆβ–ˆβ– | 196/563 [03:51<08:26, 1.38s/it] 35%|β–ˆβ–ˆβ–ˆβ– | 197/563 [03:51<06:35, 1.08s/it] [2024-08-08 16:33:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.self_attn.c_attn.bias", shape: (10240,), dtype: float16
35%|β–ˆβ–ˆβ–ˆβ– | 197/563 [03:51<06:35, 1.08s/it] [2024-08-08 16:33:53] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
35%|β–ˆβ–ˆβ–ˆβ– | 197/563 [03:51<06:35, 1.08s/it] 35%|β–ˆβ–ˆβ–ˆβ–Œ | 199/563 [03:51<04:36, 1.32it/s] [2024-08-08 16:33:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.31.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
35%|β–ˆβ–ˆβ–ˆβ–Œ | 199/563 [03:51<04:36, 1.32it/s] 36%|β–ˆβ–ˆβ–ˆβ–Œ | 200/563 [03:52<04:09, 1.45it/s] [2024-08-08 16:33:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.32.input_layernorm.weight", shape: (8192,), dtype: float16
36%|β–ˆβ–ˆβ–ˆβ–Œ | 200/563 [03:52<04:09, 1.45it/s] [2024-08-08 16:33:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.32.post_attention_layernorm.weight", shape: (8192,), dtype: float16
36%|β–ˆβ–ˆβ–ˆβ–Œ | 200/563 [03:52<04:09, 1.45it/s] [2024-08-08 16:33:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.32.self_attn.c_attn.bias", shape: (10240,), dtype: float16
36%|β–ˆβ–ˆβ–ˆβ–Œ | 200/563 [03:52<04:09, 1.45it/s] [2024-08-08 16:33:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.32.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
36%|β–ˆβ–ˆβ–ˆβ–Œ | 200/563 [03:52<04:09, 1.45it/s] 36%|β–ˆβ–ˆβ–ˆβ–Œ | 204/563 [03:52<02:19, 2.58it/s] [2024-08-08 16:33:55] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.32.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
36%|β–ˆβ–ˆβ–ˆβ–Œ | 204/563 [03:52<02:19, 2.58it/s] 36%|β–ˆβ–ˆβ–ˆβ–‹ | 205/563 [03:53<02:21, 2.53it/s] [2024-08-08 16:33:55] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00015-of-00037.safetensors
36%|β–ˆβ–ˆβ–ˆβ–‹ | 205/563 [03:53<02:21, 2.53it/s] [2024-08-08 16:33:55] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00016-of-00037.safetensors
36%|β–ˆβ–ˆβ–ˆβ–‹ | 205/563 [03:53<02:21, 2.53it/s] [2024-08-08 16:33:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.32.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
36%|β–ˆβ–ˆβ–ˆβ–‹ | 205/563 [03:55<02:21, 2.53it/s] 37%|β–ˆβ–ˆβ–ˆβ–‹ | 206/563 [03:57<06:25, 1.08s/it] [2024-08-08 16:34:00] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.32.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
37%|β–ˆβ–ˆβ–ˆβ–‹ | 206/563 [03:58<06:25, 1.08s/it] 37%|β–ˆβ–ˆβ–ˆβ–‹ | 207/563 [04:00<09:45, 1.64s/it] [2024-08-08 16:34:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.33.input_layernorm.weight", shape: (8192,), dtype: float16
37%|β–ˆβ–ˆβ–ˆβ–‹ | 207/563 [04:00<09:45, 1.64s/it] 37%|β–ˆβ–ˆβ–ˆβ–‹ | 208/563 [04:00<07:36, 1.29s/it] [2024-08-08 16:34:03] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.33.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
37%|β–ˆβ–ˆβ–ˆβ–‹ | 208/563 [04:01<07:36, 1.29s/it] 37%|β–ˆβ–ˆβ–ˆβ–‹ | 209/563 [04:02<08:13, 1.39s/it] [2024-08-08 16:34:06] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.33.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
37%|β–ˆβ–ˆβ–ˆβ–‹ | 209/563 [04:04<08:13, 1.39s/it] 37%|β–ˆβ–ˆβ–ˆβ–‹ | 210/563 [04:06<12:15, 2.08s/it] [2024-08-08 16:34:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.33.post_attention_layernorm.weight", shape: (8192,), dtype: float16
37%|β–ˆβ–ˆβ–ˆβ–‹ | 210/563 [04:06<12:15, 2.08s/it] 37%|β–ˆβ–ˆβ–ˆβ–‹ | 211/563 [04:06<09:04, 1.55s/it] [2024-08-08 16:34:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.33.self_attn.c_attn.bias", shape: (10240,), dtype: float16
37%|β–ˆβ–ˆβ–ˆβ–‹ | 211/563 [04:06<09:04, 1.55s/it] [2024-08-08 16:34:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.33.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
37%|β–ˆβ–ˆβ–ˆβ–‹ | 211/563 [04:06<09:04, 1.55s/it] 38%|β–ˆβ–ˆβ–ˆβ–Š | 213/563 [04:07<05:53, 1.01s/it] [2024-08-08 16:34:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.33.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
38%|β–ˆβ–ˆβ–ˆβ–Š | 213/563 [04:07<05:53, 1.01s/it] 38%|β–ˆβ–ˆβ–ˆβ–Š | 214/563 [04:07<05:09, 1.13it/s] [2024-08-08 16:34:10] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.34.input_layernorm.weight", shape: (8192,), dtype: float16
38%|β–ˆβ–ˆβ–ˆβ–Š | 214/563 [04:07<05:09, 1.13it/s] [2024-08-08 16:34:10] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00017-of-00037.safetensors
38%|β–ˆβ–ˆβ–ˆβ–Š | 214/563 [04:07<05:09, 1.13it/s] [2024-08-08 16:34:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.34.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
38%|β–ˆβ–ˆβ–ˆβ–Š | 214/563 [04:13<05:09, 1.13it/s] 38%|β–ˆβ–ˆβ–ˆβ–Š | 216/563 [04:15<12:28, 2.16s/it] [2024-08-08 16:34:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.34.post_attention_layernorm.weight", shape: (8192,), dtype: float16
38%|β–ˆβ–ˆβ–ˆβ–Š | 216/563 [04:15<12:28, 2.16s/it] 39%|β–ˆβ–ˆβ–ˆβ–Š | 217/563 [04:16<09:48, 1.70s/it] [2024-08-08 16:34:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.34.self_attn.c_attn.bias", shape: (10240,), dtype: float16
39%|β–ˆβ–ˆβ–ˆβ–Š | 217/563 [04:16<09:48, 1.70s/it] [2024-08-08 16:34:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.34.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
39%|β–ˆβ–ˆβ–ˆβ–Š | 217/563 [04:16<09:48, 1.70s/it] 39%|β–ˆβ–ˆβ–ˆβ–‰ | 219/563 [04:16<06:36, 1.15s/it] [2024-08-08 16:34:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.34.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
39%|β–ˆβ–ˆβ–ˆβ–‰ | 219/563 [04:16<06:36, 1.15s/it] 39%|β–ˆβ–ˆβ–ˆβ–‰ | 220/563 [04:17<05:43, 1.00s/it] [2024-08-08 16:34:20] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.34.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
39%|β–ˆβ–ˆβ–ˆβ–‰ | 220/563 [04:17<05:43, 1.00s/it] 39%|β–ˆβ–ˆβ–ˆβ–‰ | 221/563 [04:18<06:36, 1.16s/it] [2024-08-08 16:34:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.35.input_layernorm.weight", shape: (8192,), dtype: float16
39%|β–ˆβ–ˆβ–ˆβ–‰ | 221/563 [04:18<06:36, 1.16s/it] [2024-08-08 16:34:21] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.35.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
39%|β–ˆβ–ˆβ–ˆβ–‰ | 221/563 [04:19<06:36, 1.16s/it] 40%|β–ˆβ–ˆβ–ˆβ–‰ | 223/563 [04:20<06:21, 1.12s/it] [2024-08-08 16:34:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.35.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
40%|β–ˆβ–ˆβ–ˆβ–‰ | 223/563 [04:22<06:21, 1.12s/it] 40%|β–ˆβ–ˆβ–ˆβ–‰ | 224/563 [04:24<09:29, 1.68s/it] [2024-08-08 16:34:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.35.post_attention_layernorm.weight", shape: (8192,), dtype: float16
40%|β–ˆβ–ˆβ–ˆβ–‰ | 224/563 [04:24<09:29, 1.68s/it] 40%|β–ˆβ–ˆβ–ˆβ–‰ | 225/563 [04:24<07:19, 1.30s/it] [2024-08-08 16:34:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.35.self_attn.c_attn.bias", shape: (10240,), dtype: float16
40%|β–ˆβ–ˆβ–ˆβ–‰ | 225/563 [04:24<07:19, 1.30s/it] [2024-08-08 16:34:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.35.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
40%|β–ˆβ–ˆβ–ˆβ–‰ | 225/563 [04:24<07:19, 1.30s/it] 40%|β–ˆβ–ˆβ–ˆβ–ˆ | 227/563 [04:25<04:59, 1.12it/s] [2024-08-08 16:34:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.35.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
40%|β–ˆβ–ˆβ–ˆβ–ˆ | 227/563 [04:25<04:59, 1.12it/s] 40%|β–ˆβ–ˆβ–ˆβ–ˆ | 228/563 [04:25<04:23, 1.27it/s] [2024-08-08 16:34:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.36.input_layernorm.weight", shape: (8192,), dtype: float16
40%|β–ˆβ–ˆβ–ˆβ–ˆ | 228/563 [04:25<04:23, 1.27it/s] [2024-08-08 16:34:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.36.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
40%|β–ˆβ–ˆβ–ˆβ–ˆ | 228/563 [04:27<04:23, 1.27it/s] 41%|β–ˆβ–ˆβ–ˆβ–ˆ | 230/563 [04:29<06:34, 1.19s/it] [2024-08-08 16:34:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.36.post_attention_layernorm.weight", shape: (8192,), dtype: float16
41%|β–ˆβ–ˆβ–ˆβ–ˆ | 230/563 [04:29<06:34, 1.19s/it] 41%|β–ˆβ–ˆβ–ˆβ–ˆ | 231/563 [04:29<05:16, 1.05it/s] [2024-08-08 16:34:31] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.36.self_attn.c_attn.bias", shape: (10240,), dtype: float16
41%|β–ˆβ–ˆβ–ˆβ–ˆ | 231/563 [04:29<05:16, 1.05it/s] [2024-08-08 16:34:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.36.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
41%|β–ˆβ–ˆβ–ˆβ–ˆ | 231/563 [04:29<05:16, 1.05it/s] 41%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 233/563 [04:29<03:50, 1.43it/s] [2024-08-08 16:34:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.36.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
41%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 233/563 [04:30<03:50, 1.43it/s] 42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 234/563 [04:30<03:33, 1.54it/s] [2024-08-08 16:34:32] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00016-of-00037.safetensors
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 234/563 [04:30<03:33, 1.54it/s] [2024-08-08 16:34:33] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00017-of-00037.safetensors
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 234/563 [04:30<03:33, 1.54it/s] [2024-08-08 16:34:33] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00018-of-00037.safetensors
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 234/563 [04:30<03:33, 1.54it/s] [2024-08-08 16:34:35] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.36.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 234/563 [04:33<03:33, 1.54it/s] 42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 235/563 [04:37<11:15, 2.06s/it] [2024-08-08 16:34:39] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.37.input_layernorm.weight", shape: (8192,), dtype: float16
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 235/563 [04:37<11:15, 2.06s/it] [2024-08-08 16:34:40] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.37.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 235/563 [04:37<11:15, 2.06s/it] 42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 237/563 [04:38<08:34, 1.58s/it] [2024-08-08 16:34:42] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.37.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 237/563 [04:40<08:34, 1.58s/it] 42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 238/563 [04:42<10:42, 1.98s/it] [2024-08-08 16:34:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.37.post_attention_layernorm.weight", shape: (8192,), dtype: float16
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 238/563 [04:42<10:42, 1.98s/it] 42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 239/563 [04:42<08:15, 1.53s/it] [2024-08-08 16:34:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.37.self_attn.c_attn.bias", shape: (10240,), dtype: float16
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 239/563 [04:42<08:15, 1.53s/it] [2024-08-08 16:34:44] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.37.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 239/563 [04:42<08:15, 1.53s/it] 43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 241/563 [04:42<05:26, 1.01s/it] [2024-08-08 16:34:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.37.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 241/563 [04:42<05:26, 1.01s/it] 43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 242/563 [04:43<04:41, 1.14it/s] [2024-08-08 16:34:45] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.38.input_layernorm.weight", shape: (8192,), dtype: float16
43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 242/563 [04:43<04:41, 1.14it/s] [2024-08-08 16:34:46] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.38.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 242/563 [04:43<04:41, 1.14it/s] 43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 244/563 [04:44<04:24, 1.20it/s] [2024-08-08 16:34:48] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.38.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 244/563 [04:46<04:24, 1.20it/s] 44%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 245/563 [04:48<07:15, 1.37s/it] [2024-08-08 16:34:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.38.post_attention_layernorm.weight", shape: (8192,), dtype: float16
44%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 245/563 [04:48<07:15, 1.37s/it] 44%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 246/563 [04:48<05:39, 1.07s/it] [2024-08-08 16:34:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.38.self_attn.c_attn.bias", shape: (10240,), dtype: float16
44%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 246/563 [04:48<05:39, 1.07s/it] [2024-08-08 16:34:50] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.38.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
44%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 246/563 [04:48<05:39, 1.07s/it] 44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 248/563 [04:48<03:54, 1.34it/s] [2024-08-08 16:34:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.38.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 248/563 [04:48<03:54, 1.34it/s] 44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 249/563 [04:49<03:29, 1.50it/s] [2024-08-08 16:34:51] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.39.input_layernorm.weight", shape: (8192,), dtype: float16
44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 249/563 [04:49<03:29, 1.50it/s] [2024-08-08 16:34:51] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00018-of-00037.safetensors
44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 249/563 [04:49<03:29, 1.50it/s] [2024-08-08 16:34:51] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00019-of-00037.safetensors
44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 249/563 [04:49<03:29, 1.50it/s] [2024-08-08 16:34:54] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.39.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 249/563 [04:51<03:29, 1.50it/s] 45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 251/563 [04:52<05:36, 1.08s/it] [2024-08-08 16:34:56] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.39.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 251/563 [04:53<05:36, 1.08s/it] 45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 252/563 [04:55<08:06, 1.56s/it] [2024-08-08 16:34:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.39.post_attention_layernorm.weight", shape: (8192,), dtype: float16
45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 252/563 [04:55<08:06, 1.56s/it] 45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 253/563 [04:55<06:18, 1.22s/it] [2024-08-08 16:34:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.39.self_attn.c_attn.bias", shape: (10240,), dtype: float16
45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 253/563 [04:55<06:18, 1.22s/it] [2024-08-08 16:34:58] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.39.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 253/563 [04:56<06:18, 1.22s/it] 45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 255/563 [04:56<04:17, 1.20it/s] [2024-08-08 16:34:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.39.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 255/563 [04:56<04:17, 1.20it/s] 45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 256/563 [04:56<03:46, 1.36it/s] [2024-08-08 16:34:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.40.input_layernorm.weight", shape: (8192,), dtype: float16
45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 256/563 [04:56<03:46, 1.36it/s] [2024-08-08 16:34:59] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.40.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 256/563 [04:57<03:46, 1.36it/s] 46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 258/563 [04:58<03:45, 1.35it/s] [2024-08-08 16:35:02] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.40.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 258/563 [04:59<03:45, 1.35it/s] 46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 259/563 [05:01<06:33, 1.30s/it] [2024-08-08 16:35:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.40.post_attention_layernorm.weight", shape: (8192,), dtype: float16
46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 259/563 [05:01<06:33, 1.30s/it] 46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 260/563 [05:01<05:07, 1.01s/it] [2024-08-08 16:35:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.40.self_attn.c_attn.bias", shape: (10240,), dtype: float16
46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 260/563 [05:01<05:07, 1.01s/it] [2024-08-08 16:35:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.40.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 260/563 [05:01<05:07, 1.01s/it] 47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 262/563 [05:02<03:34, 1.40it/s] [2024-08-08 16:35:04] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.40.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 262/563 [05:02<03:34, 1.40it/s] 47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 263/563 [05:02<03:12, 1.56it/s] [2024-08-08 16:35:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.41.input_layernorm.weight", shape: (8192,), dtype: float16
47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 263/563 [05:02<03:12, 1.56it/s] [2024-08-08 16:35:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.41.post_attention_layernorm.weight", shape: (8192,), dtype: float16
47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 263/563 [05:02<03:12, 1.56it/s] [2024-08-08 16:35:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.41.self_attn.c_attn.bias", shape: (10240,), dtype: float16
47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 263/563 [05:02<03:12, 1.56it/s] [2024-08-08 16:35:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.41.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 263/563 [05:02<03:12, 1.56it/s] 47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 267/563 [05:03<01:45, 2.80it/s] [2024-08-08 16:35:05] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.41.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 267/563 [05:03<01:45, 2.80it/s] 48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 268/563 [05:03<01:47, 2.74it/s] [2024-08-08 16:35:06] INFO huggingface_loader.py:197: Unloading HF weight file: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00019-of-00037.safetensors
48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 268/563 [05:03<01:47, 2.74it/s] [2024-08-08 16:35:06] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00020-of-00037.safetensors
48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 268/563 [05:03<01:47, 2.74it/s] [2024-08-08 16:35:09] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.41.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 268/563 [05:06<01:47, 2.74it/s] 48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 269/563 [05:07<05:20, 1.09s/it] [2024-08-08 16:35:11] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.41.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 269/563 [05:09<05:20, 1.09s/it] 48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 270/563 [05:11<07:39, 1.57s/it] [2024-08-08 16:35:13] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.42.input_layernorm.weight", shape: (8192,), dtype: float16
48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 270/563 [05:11<07:39, 1.57s/it] 48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 271/563 [05:11<05:57, 1.22s/it] [2024-08-08 16:35:14] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.42.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 271/563 [05:11<05:57, 1.22s/it] 48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 272/563 [05:12<06:14, 1.29s/it] [2024-08-08 16:35:16] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.42.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 272/563 [05:14<06:14, 1.29s/it] 48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 273/563 [05:16<08:50, 1.83s/it] [2024-08-08 16:35:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.42.post_attention_layernorm.weight", shape: (8192,), dtype: float16
48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 273/563 [05:16<08:50, 1.83s/it] 49%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 274/563 [05:16<06:33, 1.36s/it] [2024-08-08 16:35:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.42.self_attn.c_attn.bias", shape: (10240,), dtype: float16
49%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 274/563 [05:16<06:33, 1.36s/it] [2024-08-08 16:35:18] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.42.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
49%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 274/563 [05:16<06:33, 1.36s/it] 49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 276/563 [05:16<04:13, 1.13it/s] [2024-08-08 16:35:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.42.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 276/563 [05:16<04:13, 1.13it/s] 49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 277/563 [05:17<03:39, 1.30it/s] [2024-08-08 16:35:19] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.43.input_layernorm.weight", shape: (8192,), dtype: float16
49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 277/563 [05:17<03:39, 1.30it/s] [2024-08-08 16:35:19] INFO huggingface_loader.py:185: Loading HF parameters from: /Users/Shared/models/Qwen2-Math-72B-Instruct/model-00021-of-00037.safetensors
49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 277/563 [05:17<03:39, 1.30it/s] [2024-08-08 16:35:24] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.43.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 277/563 [05:22<03:39, 1.30it/s] 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 279/563 [05:24<08:52, 1.88s/it] [2024-08-08 16:35:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.43.post_attention_layernorm.weight", shape: (8192,), dtype: float16
50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 279/563 [05:24<08:52, 1.88s/it] 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 280/563 [05:24<06:58, 1.48s/it] [2024-08-08 16:35:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.43.self_attn.c_attn.bias", shape: (10240,), dtype: float16
50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 280/563 [05:24<06:58, 1.48s/it] [2024-08-08 16:35:26] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.43.self_attn.c_attn.weight", shape: (10240, 8192), dtype: float16
50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 280/563 [05:24<06:58, 1.48s/it] 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 282/563 [05:24<04:43, 1.01s/it] [2024-08-08 16:35:27] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.43.self_attn.o_proj.weight", shape: (8192, 8192), dtype: float16
50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 282/563 [05:24<04:43, 1.01s/it] 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 283/563 [05:25<04:05, 1.14it/s] [2024-08-08 16:35:28] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.43.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 283/563 [05:25<04:05, 1.14it/s] 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 284/563 [05:26<04:48, 1.03s/it] [2024-08-08 16:35:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.44.input_layernorm.weight", shape: (8192,), dtype: float16
50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 284/563 [05:26<04:48, 1.03s/it] [2024-08-08 16:35:29] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.44.mlp.down_proj.weight", shape: (8192, 29568), dtype: float16
50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 284/563 [05:27<04:48, 1.03s/it] 51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 286/563 [05:28<04:21, 1.06it/s] [2024-08-08 16:35:32] INFO huggingface_loader.py:175: [Not quantized] Parameter: "model.layers.44.mlp.gate_up_proj.weight", shape: (59136, 8192), dtype: float16
51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 286/563 [05:29<04:21, 1.06it/s]Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/cfruan/Documents/mlc-llm/python/mlc_llm/__main__.py", line 64, in <module>
main()
File "/Users/cfruan/Documents/mlc-llm/python/mlc_llm/__main__.py", line 37, in main
cli.main(sys.argv[2:])
File "/Users/cfruan/Documents/mlc-llm/python/mlc_llm/cli/convert_weight.py", line 88, in main
convert_weight(
File "/Users/cfruan/Documents/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 181, in convert_weight
_convert_args(args)
File "/Users/cfruan/Documents/mlc-llm/python/mlc_llm/interface/convert_weight.py", line 145, in _convert_args
tvmjs.dump_ndarray_cache(
File "/Users/cfruan/Documents/tvm/python/tvm/contrib/tvmjs.py", line 296, in dump_ndarray_cache
shard_manager.append_or_update(
File "/Users/cfruan/Documents/tvm/python/tvm/contrib/tvmjs.py", line 143, in append_or_update
self._commit_internal(data, [rec])
File "/Users/cfruan/Documents/tvm/python/tvm/contrib/tvmjs.py", line 184, in _commit_internal
outfile.write(data)
OSError: [Errno 28] No space left on device
51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 286/563 [05:30<05:19, 1.15s/it]
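The run aborts while `tvmjs.dump_ndarray_cache` is writing the converted shards: errno 28 means the volume holding `local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC` ran out of space at roughly the halfway point. With q0f16 the output keeps the weights in float16, so a 72B-parameter model needs on the order of 144 GB (72e9 parameters x 2 bytes) of free space on the output disk. A minimal pre-flight check, as a sketch (the size estimate and paths are assumptions, not an mlc_llm feature):

```python
# Sketch: verify the output volume has room before re-running convert_weight.
import shutil

output_dir = "local_dir/Qwen2-Math-72B-Instruct-q0f16-MLC"  # path from the log
required_bytes = 72_000_000_000 * 2                         # ~72B params * 2 bytes (q0f16), rough estimate

total, used, free = shutil.disk_usage(output_dir)
print(f"free: {free / 1e9:.1f} GB, needed: ~{required_bytes / 1e9:.1f} GB")
if free < required_bytes:
    raise SystemExit("Not enough space on the output volume; errno 28 would recur.")
```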