Jingmei committed
Commit 5b93db0
1 Parent(s): 93bcef2

End of training
README.md CHANGED
@@ -12,12 +12,10 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/noc-lab/PMC_LLAMA_7B_trainer_Wiki_lora/runs/fm9skl5h)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/noc-lab/PMC_LLAMA_7B_trainer_Wiki_lora/runs/dtbm1bv9)
 # PMC_LLAMA_7B_trainer_Wiki_lora
 
 This model is a fine-tuned version of [chaoyi-wu/PMC_LLAMA_7B](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 1.8246
 
 ## Model description
 
@@ -49,9 +47,6 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 1.0899        | 0.9985 | 569  | 1.8246          |
 
 
 ### Framework versions
adapter_config.json CHANGED
@@ -20,8 +20,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "q_proj",
-    "v_proj"
+    "v_proj",
+    "q_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
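The only substantive change in this hunk is the order of the two entries in `target_modules`. PEFT treats `target_modules` as a set of module-name suffixes, so the reordering does not change which layers the LoRA adapter attaches to. A quick sanity check, using the two JSON fragments exactly as they appear in the diff:

```python
import json

# Old and new "target_modules" fragments, copied from the diff above.
old_fragment = json.loads('["q_proj", "v_proj"]')
new_fragment = json.loads('["v_proj", "q_proj"]')

# The lists differ in order, but the set of targeted modules is
# identical, so both configs adapt the same attention projections.
assert old_fragment != new_fragment
assert set(old_fragment) == set(new_fragment)
print(sorted(new_fragment))  # ['q_proj', 'v_proj']
```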
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6ebe2593d39d58f23a485a5ecd8f98e8bb84266885cecf02ca59b21d6da1bc65
+oid sha256:010bf851e8d728cef779c2aa76071dc6d5dfba86fae2562e857b090b21a0401d
 size 16794200
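The file above is a git-lfs pointer: it records only the object's SHA-256 digest and byte size, not the weights themselves. As a sketch (not part of this repo), this is how an oid could be recomputed to verify a downloaded blob; the bytes below are toy data, not the real `adapter_model.safetensors`:

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    # git-lfs oids are plain SHA-256 digests of the file contents.
    return hashlib.sha256(data).hexdigest()

# Toy stand-in for a downloaded blob; a real check would hash the full
# adapter_model.safetensors file and compare against the pointer's oid.
blob = b"not the real weights"
print(f"oid sha256:{lfs_oid(blob)}")
print(f"size {len(blob)}")
```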
trainer_peft.log CHANGED
@@ -279,3 +279,35 @@
 2024-05-30 02:08 - Setup PEFT
 2024-05-30 02:08 - Setup optimizer
 2024-05-30 02:08 - Start training
+2024-05-30 17:35 - Training complete!!!
+2024-05-30 19:45 - Cuda check
+2024-05-30 19:45 - True
+2024-05-30 19:45 - 1
+2024-05-30 19:45 - Configue Model and tokenizer
+2024-05-30 19:45 - Memory usage in 25.17 GB
+2024-05-30 19:45 - Dataset loaded successfully:
+                   train-Jingmei/Pandemic_ECDC
+                   test -Jingmei/Pandemic_WHO
+2024-05-30 19:46 - Tokenize data: DatasetDict({
+    train: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 7008
+    })
+    test: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 8264
+    })
+})
+2024-05-30 19:49 - Split data into chunks:DatasetDict({
+    train: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 103936
+    })
+    test: Dataset({
+        features: ['input_ids', 'attention_mask'],
+        num_rows: 198964
+    })
+})
+2024-05-30 19:49 - Setup PEFT
+2024-05-30 19:49 - Setup optimizer
+2024-05-30 19:49 - Start training
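The log's "Split data into chunks" step explains the jump in row counts (7008 tokenized documents become 103936 training rows): tokenized sequences are concatenated and re-cut into fixed-length blocks. A minimal sketch of that step, assuming a simple drop-the-remainder policy and an illustrative block size (the actual script's block size is not shown in the log):

```python
def group_into_chunks(token_lists, block_size=512):
    # Concatenate all tokenized examples into one stream, then re-cut
    # it into fixed-length blocks; the trailing partial block is dropped.
    concatenated = [tok for toks in token_lists for tok in toks]
    usable = (len(concatenated) // block_size) * block_size
    return [concatenated[i:i + block_size] for i in range(0, usable, block_size)]

# Toy "input_ids": 3 documents, 1720 tokens total -> 3 full blocks of 512.
docs = [[1] * 700, [2] * 900, [3] * 120]
chunks = group_into_chunks(docs, block_size=512)
print(len(chunks), len(chunks[0]))  # 3 512
```

This is why the chunked row count can be far larger than the document count: each long document contributes many blocks.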
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e87c7f15854d6ac2a1b9f59debb7097d4a1af8482eadaafa4a458af315f2cd5a
+oid sha256:8fb92caab4943784c256b92dc8713b57b617869aef8b999c4479dbbfa339daa4
 size 5176