Mistral-Phy / running_log.txt
[INFO|parser.py:355] 2024-08-29 20:29:00,103 >> Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: False, compute dtype: torch.float16
[INFO|tokenization_utils_base.py:2289] 2024-08-29 20:29:00,279 >> loading file tokenizer.model from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/tokenizer.model
[INFO|tokenization_utils_base.py:2289] 2024-08-29 20:29:00,280 >> loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/tokenizer.json
[INFO|tokenization_utils_base.py:2289] 2024-08-29 20:29:00,280 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2289] 2024-08-29 20:29:00,280 >> loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/special_tokens_map.json
[INFO|tokenization_utils_base.py:2289] 2024-08-29 20:29:00,280 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/tokenizer_config.json
[INFO|template.py:373] 2024-08-29 20:29:00,325 >> Add pad token: </s>
[INFO|loader.py:52] 2024-08-29 20:29:00,326 >> Loading dataset HydraLM/physics_dataset_alpaca...
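For reference, a minimal sketch of what the tokenizer and dataset loading above correspond to, assuming the plain transformers/datasets APIs rather than the exact LLaMA-Factory internals (template.py / loader.py):

    from datasets import load_dataset
    from transformers import AutoTokenizer

    # Load the Mistral tokenizer from the Hub (tokenizer.model, tokenizer.json, ...).
    tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

    # Mistral-7B-Instruct-v0.1 ships without a pad token; the log shows the EOS token
    # </s> being reused for padding.
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # "</s>"

    # Alpaca-format physics instruction data named in the loader line above.
    dataset = load_dataset("HydraLM/physics_dataset_alpaca", split="train")
    print(len(dataset), dataset.column_names)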
[INFO|configuration_utils.py:733] 2024-08-29 20:29:03,695 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/config.json
[INFO|configuration_utils.py:800] 2024-08-29 20:29:03,696 >> Model config MistralConfig {
"_name_or_path": "mistralai/Mistral-7B-Instruct-v0.1",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.43.4",
"use_cache": true,
"vocab_size": 32000
}
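The config dump above can be inspected directly; a small sketch (assuming AutoConfig with the transformers version shown, 4.43.4, which exposes head_dim) that also checks the grouped-query-attention arithmetic implied by these fields:

    from transformers import AutoConfig

    cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

    # 32 attention heads x 128 head_dim = 4096 = hidden_size
    assert cfg.num_attention_heads * cfg.head_dim == cfg.hidden_size
    # 8 key/value heads: each KV head is shared by 32 / 8 = 4 query heads (grouped-query attention)
    print("GQA group size:", cfg.num_attention_heads // cfg.num_key_value_heads)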
[INFO|modeling_utils.py:3644] 2024-08-29 20:29:03,719 >> loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/model.safetensors.index.json
[INFO|modeling_utils.py:1572] 2024-08-29 20:29:03,720 >> Instantiating MistralForCausalLM model under default dtype torch.float16.
[INFO|configuration_utils.py:1038] 2024-08-29 20:29:03,721 >> Generate config GenerationConfig {
"bos_token_id": 1,
"eos_token_id": 2
}
[INFO|modeling_utils.py:4473] 2024-08-29 20:29:30,412 >> All model checkpoint weights were used when initializing MistralForCausalLM.
[INFO|modeling_utils.py:4481] 2024-08-29 20:29:30,412 >> All the weights of MistralForCausalLM were initialized from the model checkpoint at mistralai/Mistral-7B-Instruct-v0.1.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MistralForCausalLM for predictions without further training.
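A rough equivalent of the base-model load reported here: the bfloat16 safetensors checkpoint is instantiated under the run's float16 compute dtype. This is a sketch with the standard transformers API, not the framework's own loader code:

    import torch
    from transformers import AutoModelForCausalLM

    # Instantiate MistralForCausalLM from the sharded safetensors checkpoint and cast the
    # bfloat16 weights to float16, the compute dtype shown at the top of the log.
    model = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-Instruct-v0.1",
        torch_dtype=torch.float16,
        attn_implementation="sdpa",  # matches "Using torch SDPA" further down
    ).to("cuda:0")

    model.gradient_checkpointing_enable()  # matches "Gradient checkpointing enabled."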
[INFO|configuration_utils.py:993] 2024-08-29 20:29:30,511 >> loading configuration file generation_config.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/generation_config.json
[INFO|configuration_utils.py:1038] 2024-08-29 20:29:30,512 >> Generate config GenerationConfig {
"bos_token_id": 1,
"eos_token_id": 2
}
[INFO|checkpointing.py:103] 2024-08-29 20:29:30,518 >> Gradient checkpointing enabled.
[INFO|attention.py:84] 2024-08-29 20:29:30,519 >> Using torch SDPA for faster training and inference.
[INFO|adapter.py:302] 2024-08-29 20:29:30,519 >> Upcasting trainable params to float32.
[INFO|adapter.py:158] 2024-08-29 20:29:30,519 >> Fine-tuning method: LoRA
[INFO|misc.py:51] 2024-08-29 20:29:30,519 >> Found linear modules: q_proj,down_proj,up_proj,k_proj,o_proj,gate_proj,v_proj
[INFO|loader.py:196] 2024-08-29 20:29:46,150 >> trainable params: 20,971,520 || all params: 7,262,703,616 || trainable%: 0.2888
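The adapter setup reported here (all seven linear projections targeted, 20,971,520 trainable parameters out of 7,262,703,616, i.e. 0.2888%) is consistent with a LoRA rank of 8: each targeted linear of shape in -> out adds r*(in+out) parameters, which for the seven projections of one Mistral layer sums to 81,920*r, or 2,621,440*r over 32 layers, and 2,621,440 * 8 = 20,971,520. A hedged PEFT sketch with those inferred hyperparameters (rank 8 and alpha 16 are assumptions, not stated in the log):

    import torch
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16
    )

    # Rank and alpha are inferred from the parameter count above; only the
    # target-module list and the resulting counts come from the log itself.
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.0,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, lora_config)
    model.print_trainable_parameters()
    # expected: trainable params: 20,971,520 || all params: 7,262,703,616 || trainable%: 0.2888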
[INFO|trainer.py:648] 2024-08-29 20:29:46,160 >> Using auto half precision backend
[INFO|trainer.py:2134] 2024-08-29 20:29:46,579 >> ***** Running training *****
[INFO|trainer.py:2135] 2024-08-29 20:29:46,579 >> Num examples = 10,000
[INFO|trainer.py:2136] 2024-08-29 20:29:46,579 >> Num Epochs = 1
[INFO|trainer.py:2137] 2024-08-29 20:29:46,579 >> Instantaneous batch size per device = 8
[INFO|trainer.py:2140] 2024-08-29 20:29:46,579 >> Total train batch size (w. parallel, distributed & accumulation) = 64
[INFO|trainer.py:2141] 2024-08-29 20:29:46,579 >> Gradient Accumulation steps = 8
[INFO|trainer.py:2142] 2024-08-29 20:29:46,579 >> Total optimization steps = 156
[INFO|trainer.py:2143] 2024-08-29 20:29:46,584 >> Number of trainable parameters = 20,971,520
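The step count follows from the numbers above: 10,000 examples at a per-device batch of 8 give 1,250 micro-batches per epoch, and with 8 gradient-accumulation steps that is an effective batch of 64 and 1,250 // 8 = 156 optimizer steps (the two leftover micro-batches do not fill a full accumulation window). A quick check of the arithmetic:

    num_examples  = 10_000
    per_device_bs = 8
    grad_accum    = 8

    effective_batch = per_device_bs * grad_accum      # 64
    micro_batches   = num_examples // per_device_bs   # 1,250 per epoch
    optim_steps     = micro_batches // grad_accum     # 156

    print(effective_batch, optim_steps)  # 64 156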
[INFO|callbacks.py:319] 2024-08-29 20:30:43,585 >> {'loss': 0.6780, 'learning_rate': 2.9924e-05, 'epoch': 0.03, 'throughput': 2874.51}
[INFO|callbacks.py:319] 2024-08-29 20:31:40,281 >> {'loss': 0.6551, 'learning_rate': 2.9697e-05, 'epoch': 0.06, 'throughput': 2882.14}
[INFO|callbacks.py:319] 2024-08-29 20:32:36,903 >> {'loss': 0.6372, 'learning_rate': 2.9321e-05, 'epoch': 0.10, 'throughput': 2885.94}
[INFO|callbacks.py:319] 2024-08-29 20:33:33,631 >> {'loss': 0.6201, 'learning_rate': 2.8800e-05, 'epoch': 0.13, 'throughput': 2886.5}
[INFO|callbacks.py:319] 2024-08-29 20:34:30,081 >> {'loss': 0.5965, 'learning_rate': 2.8139e-05, 'epoch': 0.16, 'throughput': 2889.67}
[INFO|callbacks.py:319] 2024-08-29 20:35:26,826 >> {'loss': 0.5959, 'learning_rate': 2.7345e-05, 'epoch': 0.19, 'throughput': 2889.27}
[INFO|callbacks.py:319] 2024-08-29 20:36:23,284 >> {'loss': 0.6006, 'learning_rate': 2.6426e-05, 'epoch': 0.22, 'throughput': 2891.08}
[INFO|callbacks.py:319] 2024-08-29 20:37:19,827 >> {'loss': 0.5940, 'learning_rate': 2.5391e-05, 'epoch': 0.26, 'throughput': 2891.9}
[INFO|callbacks.py:319] 2024-08-29 20:38:16,461 >> {'loss': 0.5825, 'learning_rate': 2.4251e-05, 'epoch': 0.29, 'throughput': 2892.01}
[INFO|callbacks.py:319] 2024-08-29 20:39:13,245 >> {'loss': 0.5871, 'learning_rate': 2.3017e-05, 'epoch': 0.32, 'throughput': 2891.34}
[INFO|callbacks.py:319] 2024-08-29 20:40:10,089 >> {'loss': 0.5771, 'learning_rate': 2.1702e-05, 'epoch': 0.35, 'throughput': 2890.51}
[INFO|callbacks.py:319] 2024-08-29 20:41:06,524 >> {'loss': 0.5544, 'learning_rate': 2.0319e-05, 'epoch': 0.38, 'throughput': 2891.57}
[INFO|callbacks.py:319] 2024-08-29 20:42:03,054 >> {'loss': 0.5768, 'learning_rate': 1.8882e-05, 'epoch': 0.42, 'throughput': 2892.08}
[INFO|callbacks.py:319] 2024-08-29 20:42:59,850 >> {'loss': 0.5534, 'learning_rate': 1.7406e-05, 'epoch': 0.45, 'throughput': 2891.55}
[INFO|callbacks.py:319] 2024-08-29 20:43:56,581 >> {'loss': 0.5597, 'learning_rate': 1.5906e-05, 'epoch': 0.48, 'throughput': 2891.32}
[INFO|callbacks.py:319] 2024-08-29 20:44:53,211 >> {'loss': 0.5710, 'learning_rate': 1.4396e-05, 'epoch': 0.51, 'throughput': 2891.43}
[INFO|callbacks.py:319] 2024-08-29 20:45:49,959 >> {'loss': 0.5572, 'learning_rate': 1.2892e-05, 'epoch': 0.54, 'throughput': 2891.18}
[INFO|callbacks.py:319] 2024-08-29 20:46:46,195 >> {'loss': 0.5686, 'learning_rate': 1.1410e-05, 'epoch': 0.58, 'throughput': 2892.41}
[INFO|callbacks.py:319] 2024-08-29 20:47:42,985 >> {'loss': 0.5525, 'learning_rate': 9.9644e-06, 'epoch': 0.61, 'throughput': 2892.02}
[INFO|callbacks.py:319] 2024-08-29 20:48:39,767 >> {'loss': 0.5594, 'learning_rate': 8.5696e-06, 'epoch': 0.64, 'throughput': 2891.69}
[INFO|trainer.py:3503] 2024-08-29 20:48:39,768 >> Saving model checkpoint to saves/Mistral-7B-v0.1-Chat/lora/mistral_physs/checkpoint-100
[INFO|configuration_utils.py:733] 2024-08-29 20:48:40,107 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/config.json
[INFO|configuration_utils.py:800] 2024-08-29 20:48:40,107 >> Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.43.4",
"use_cache": true,
"vocab_size": 32000
}
[INFO|tokenization_utils_base.py:2702] 2024-08-29 20:48:40,206 >> tokenizer config file saved in saves/Mistral-7B-v0.1-Chat/lora/mistral_physs/checkpoint-100/tokenizer_config.json
[INFO|tokenization_utils_base.py:2711] 2024-08-29 20:48:40,207 >> Special tokens file saved in saves/Mistral-7B-v0.1-Chat/lora/mistral_physs/checkpoint-100/special_tokens_map.json
[INFO|callbacks.py:319] 2024-08-29 20:49:36,953 >> {'loss': 0.5709, 'learning_rate': 7.2399e-06, 'epoch': 0.67, 'throughput': 2890.41}
[INFO|callbacks.py:319] 2024-08-29 20:50:33,412 >> {'loss': 0.5555, 'learning_rate': 5.9889e-06, 'epoch': 0.70, 'throughput': 2890.93}
[INFO|callbacks.py:319] 2024-08-29 20:51:29,876 >> {'loss': 0.5597, 'learning_rate': 4.8291e-06, 'epoch': 0.74, 'throughput': 2891.4}
[INFO|callbacks.py:319] 2024-08-29 20:52:26,764 >> {'loss': 0.5432, 'learning_rate': 3.7723e-06, 'epoch': 0.77, 'throughput': 2890.92}
[INFO|callbacks.py:319] 2024-08-29 20:53:23,270 >> {'loss': 0.5363, 'learning_rate': 2.8293e-06, 'epoch': 0.80, 'throughput': 2891.26}
[INFO|callbacks.py:319] 2024-08-29 20:54:19,827 >> {'loss': 0.5582, 'learning_rate': 2.0096e-06, 'epoch': 0.83, 'throughput': 2891.48}
[INFO|callbacks.py:319] 2024-08-29 20:55:16,550 >> {'loss': 0.5539, 'learning_rate': 1.3215e-06, 'epoch': 0.86, 'throughput': 2891.37}
[INFO|callbacks.py:319] 2024-08-29 20:56:13,084 >> {'loss': 0.5529, 'learning_rate': 7.7195e-07, 'epoch': 0.90, 'throughput': 2891.6}
[INFO|callbacks.py:319] 2024-08-29 20:57:09,602 >> {'loss': 0.5584, 'learning_rate': 3.6654e-07, 'epoch': 0.93, 'throughput': 2891.86}
[INFO|callbacks.py:319] 2024-08-29 20:58:06,181 >> {'loss': 0.5503, 'learning_rate': 1.0937e-07, 'epoch': 0.96, 'throughput': 2891.99}
[INFO|callbacks.py:319] 2024-08-29 20:59:02,816 >> {'loss': 0.5549, 'learning_rate': 3.0416e-09, 'epoch': 0.99, 'throughput': 2892.02}
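The learning-rate column above is consistent with a cosine schedule decaying from a peak of 3e-5 over the 156 optimizer steps, with no warmup and logging every 5 steps (peak value and warmup are inferred from the logged numbers, not stated in the log). A short reproduction:

    import math

    peak_lr     = 3e-5   # inferred from the logged values
    total_steps = 156

    def cosine_lr(step: int) -> float:
        # cosine decay from peak_lr at step 0 towards 0 at total_steps, no warmup
        return 0.5 * peak_lr * (1.0 + math.cos(math.pi * step / total_steps))

    for step in (5, 10, 155):
        print(step, f"{cosine_lr(step):.4e}")
    # 5   -> 2.9924e-05  (first logged value)
    # 10  -> 2.9697e-05  (second logged value)
    # 155 -> 3.0416e-09  (last logged value)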
[INFO|trainer.py:3503] 2024-08-29 20:59:14,245 >> Saving model checkpoint to saves/Mistral-7B-v0.1-Chat/lora/mistral_physs/checkpoint-156
[INFO|configuration_utils.py:733] 2024-08-29 20:59:14,599 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/config.json
[INFO|configuration_utils.py:800] 2024-08-29 20:59:14,600 >> Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.43.4",
"use_cache": true,
"vocab_size": 32000
}
[INFO|tokenization_utils_base.py:2702] 2024-08-29 20:59:14,671 >> tokenizer config file saved in saves/Mistral-7B-v0.1-Chat/lora/mistral_physs/checkpoint-156/tokenizer_config.json
[INFO|tokenization_utils_base.py:2711] 2024-08-29 20:59:14,671 >> Special tokens file saved in saves/Mistral-7B-v0.1-Chat/lora/mistral_physs/checkpoint-156/special_tokens_map.json
[INFO|trainer.py:2394] 2024-08-29 20:59:14,879 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:3503] 2024-08-29 20:59:14,882 >> Saving model checkpoint to saves/Mistral-7B-v0.1-Chat/lora/mistral_physs
[INFO|configuration_utils.py:733] 2024-08-29 20:59:15,109 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.1/snapshots/2dcff66eac0c01dc50e4c41eea959968232187fe/config.json
[INFO|configuration_utils.py:800] 2024-08-29 20:59:15,110 >> Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.43.4",
"use_cache": true,
"vocab_size": 32000
}
[INFO|tokenization_utils_base.py:2702] 2024-08-29 20:59:15,181 >> tokenizer config file saved in saves/Mistral-7B-v0.1-Chat/lora/mistral_physs/tokenizer_config.json
[INFO|tokenization_utils_base.py:2711] 2024-08-29 20:59:15,182 >> Special tokens file saved in saves/Mistral-7B-v0.1-Chat/lora/mistral_physs/special_tokens_map.json
[WARNING|ploting.py:89] 2024-08-29 20:59:15,318 >> No metric eval_loss to plot.
[WARNING|ploting.py:89] 2024-08-29 20:59:15,318 >> No metric eval_accuracy to plot.
[INFO|modelcard.py:449] 2024-08-29 20:59:15,320 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
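With the final adapter written to saves/Mistral-7B-v0.1-Chat/lora/mistral_physs, it can be attached to the base model for inference. A minimal sketch assuming the standard peft loading API (the tokenizer is taken from the base repo rather than the save directory to avoid assuming which files were written there):

    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    adapter_dir = "saves/Mistral-7B-v0.1-Chat/lora/mistral_physs"  # final save path from the log

    tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
    base = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16
    )

    # Attach the trained LoRA adapter on top of the frozen base weights.
    model = PeftModel.from_pretrained(base, adapter_dir)
    model = model.merge_and_unload()  # optional: fold the adapter into the base weights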