
renqiux0302 committed (verified)
Commit: 84d7f7a
Parent: fa7dca6

Delete auxiliary_decoder/README.md

Files changed (1)
  1. auxiliary_decoder/README.md +0 -64
auxiliary_decoder/README.md DELETED
@@ -1,64 +0,0 @@
- ---
- library_name: peft
- license: other
- base_model: /cpfs01/shared/ADLab/hug_ckpts/Meta-Llama-3.1-8B-Instruct
- tags:
- - llama-factory
- - lora
- - generated_from_trainer
- model-index:
- - name: sft
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # sft
-
- This model is a fine-tuned version of [/cpfs01/shared/ADLab/hug_ckpts/Meta-Llama-3.1-8B-Instruct](https://huggingface.co//cpfs01/shared/ADLab/hug_ckpts/Meta-Llama-3.1-8B-Instruct) on the identity and the chartvlm_all_task datasets.
- It achieves the following results on the evaluation set:
- - Loss: 0.8528
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 2
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 16
- - total_eval_batch_size: 2
- - optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3.0
-
- ### Training results
-
-
-
- ### Framework versions
-
- - PEFT 0.12.0
- - Transformers 4.46.1
- - PyTorch 2.1.2+cu121
- - Datasets 2.18.0
- - Tokenizers 0.20.3
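
For reference, the hyperparameters listed in the deleted card map directly onto `transformers.TrainingArguments`. Below is a minimal sketch under stated assumptions: it uses a plain Transformers/PEFT setup rather than the llama-factory wrapper that actually generated the card, and `output_dir` is a hypothetical path.

```python
# Sketch only: the deleted card's hyperparameters expressed as
# transformers.TrainingArguments (Transformers 4.46.1). The card was
# produced by llama-factory, which wraps these same arguments; output_dir
# is a hypothetical placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sft-lora",          # hypothetical output path
    learning_rate=1e-4,             # learning_rate: 0.0001
    per_device_train_batch_size=1,  # train_batch_size: 1
    per_device_eval_batch_size=1,   # eval_batch_size: 1
    seed=42,
    gradient_accumulation_steps=8,  # 2 GPUs x 1 per device x 8 steps = 16 total
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
)
```

The total train batch size of 16 in the card is derived, not set directly: 2 devices x per-device batch 1 x 8 gradient-accumulation steps.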
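
Since the deleted card describes a PEFT LoRA adapter (PEFT 0.12.0) over Meta-Llama-3.1-8B-Instruct, the adapter would be attached to the base model at load time. A hedged sketch follows; both paths are assumptions, since the card points at a local /cpfs01 checkpoint (substituted here with the public model ID) and the adapter directory was removed by this commit.

```python
# Sketch of loading the LoRA adapter with PEFT on top of the base model.
# BASE and ADAPTER are assumptions: the card references a local /cpfs01
# checkpoint, and auxiliary_decoder/ is the directory this commit deletes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # public ID standing in for the local path
ADAPTER = "auxiliary_decoder"                   # adapter directory removed by this commit

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base_model, ADAPTER)  # attaches the LoRA weights
model.eval()
```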