[INFO|tokenization_utils_base.py:2159] 2024-07-16 09:45:07,144 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2159] 2024-07-16 09:45:07,144 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2159] 2024-07-16 09:45:07,144 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2159] 2024-07-16 09:45:07,144 >> loading file tokenizer_config.json
[INFO|loader.py:50] 2024-07-16 09:45:07,193 >> Loading dataset 0716_truthfulqa_benchmark_test.json...
07/16/2024 09:45:07 - INFO - llamafactory.hparams.parser - Process rank: 4, device: cuda:4, n_gpu: 1, distributed training: True, compute dtype: None
07/16/2024 09:45:07 - INFO - llamafactory.hparams.parser - Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, compute dtype: None
07/16/2024 09:45:07 - INFO - llamafactory.hparams.parser - Process rank: 7, device: cuda:7, n_gpu: 1, distributed training: True, compute dtype: None
07/16/2024 09:45:07 - INFO - llamafactory.hparams.parser - Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, compute dtype: None
07/16/2024 09:45:07 - INFO - llamafactory.hparams.parser - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: None
07/16/2024 09:45:07 - INFO - llamafactory.hparams.parser - Process rank: 6, device: cuda:6, n_gpu: 1, distributed training: True, compute dtype: None
07/16/2024 09:45:07 - INFO - llamafactory.hparams.parser - Process rank: 5, device: cuda:5, n_gpu: 1, distributed training: True, compute dtype: None
07/16/2024 09:45:08 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test.json...
07/16/2024 09:45:08 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test.json...
07/16/2024 09:45:08 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test.json...
07/16/2024 09:45:08 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test.json...
07/16/2024 09:45:08 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test.json...
07/16/2024 09:45:08 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test.json...
07/16/2024 09:45:08 - INFO - llamafactory.data.loader - Loading dataset 0716_truthfulqa_benchmark_test.json...
[INFO|configuration_utils.py:731] 2024-07-16 09:45:10,115 >> loading configuration file saves/LLaMA2-7B-Chat/full/train_2024-07-16-09-05-28_llama2/config.json
[INFO|configuration_utils.py:800] 2024-07-16 09:45:10,118 >> Model config LlamaConfig {
  "_name_or_path": "saves/LLaMA2-7B-Chat/full/train_2024-07-16-09-05-28_llama2",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 4096,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.42.3",
  "use_cache": false,
  "vocab_size": 32000
}

[INFO|patcher.py:81] 2024-07-16 09:45:10,118 >> Using KV cache for faster generation.
[INFO|modeling_utils.py:3553] 2024-07-16 09:45:10,162 >> loading weights file saves/LLaMA2-7B-Chat/full/train_2024-07-16-09-05-28_llama2/model.safetensors.index.json
[INFO|modeling_utils.py:1531] 2024-07-16 09:45:10,162 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1000] 2024-07-16 09:45:10,163 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "eos_token_id": 2
}

07/16/2024 09:45:10 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/16/2024 09:45:10 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/16/2024 09:45:10 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/16/2024 09:45:10 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/16/2024 09:45:10 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/16/2024 09:45:10 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
07/16/2024 09:45:10 - INFO - llamafactory.model.patcher - Using KV cache for faster generation.
[INFO|modeling_utils.py:4364] 2024-07-16 09:45:13,329 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4372] 2024-07-16 09:45:13,329 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at saves/LLaMA2-7B-Chat/full/train_2024-07-16-09-05-28_llama2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:953] 2024-07-16 09:45:13,333 >> loading configuration file saves/LLaMA2-7B-Chat/full/train_2024-07-16-09-05-28_llama2/generation_config.json
[INFO|configuration_utils.py:1000] 2024-07-16 09:45:13,333 >> Generate config GenerationConfig {
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "max_length": 4096,
  "pad_token_id": 0,
  "temperature": 0.6,
  "top_p": 0.9
}

[INFO|attention.py:80] 2024-07-16 09:45:13,339 >> Using torch SDPA for faster training and inference.
[INFO|loader.py:196] 2024-07-16 09:45:13,344 >> all params: 6,738,415,616
[INFO|trainer.py:3788] 2024-07-16 09:45:13,449 >> ***** Running Prediction *****
[INFO|trainer.py:3790] 2024-07-16 09:45:13,449 >> Num examples = 1243
[INFO|trainer.py:3793] 2024-07-16 09:45:13,449 >> Batch size = 2
[WARNING|logging.py:328] 2024-07-16 09:45:14,110 >> We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/16/2024 09:45:14 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 09:45:14 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/16/2024 09:45:14 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 09:45:14 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 09:45:14 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/16/2024 09:45:14 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 09:45:14 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/16/2024 09:45:14 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 09:45:14 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 09:45:14 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/16/2024 09:45:14 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/16/2024 09:45:14 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/16/2024 09:45:14 - INFO - llamafactory.model.model_utils.attention - Using torch SDPA for faster training and inference.
07/16/2024 09:45:14 - INFO - llamafactory.model.loader - all params: 6,738,415,616
07/16/2024 09:45:15 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/16/2024 09:45:15 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/16/2024 09:45:15 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/16/2024 09:45:16 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/16/2024 09:45:16 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/16/2024 09:45:16 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
07/16/2024 09:45:16 - WARNING - transformers.models.llama.modeling_llama - We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
[INFO|trainer.py:127] 2024-07-16 09:45:24,567 >> Saving prediction results to saves/LLaMA2-7B-Chat/full/eval_2024-07-16-09-05-28/generated_predictions.jsonl