---
license: llama2
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: MSc_llama2_finetuned_model_secondData3
  results: []
library_name: peft
---

# MSc_llama2_finetuned_model_secondData3

This model is a PEFT fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6783

## Model description

More information needed

## Intended uses & limitations

More information needed
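
Pending author-provided details, here is a minimal sketch of loading the PEFT adapter on top of the base model. The Hub repo id below is a placeholder, since the author's namespace is not recorded in this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "meta-llama/Llama-2-7b-chat-hf"
# Placeholder: replace <username> with the actual Hub namespace,
# or point at a local directory containing the adapter weights.
ADAPTER_ID = "<username>/MSc_llama2_finetuned_model_secondData3"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Attach the fine-tuned adapter weights to the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()
```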

## Training and evaluation data

More information needed

## Training procedure


The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_4bit: True
- load_in_8bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
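
A minimal sketch of how this config might be expressed in code, using `transformers`' `BitsAndBytesConfig` (illustrative only; the actual training script is not part of this card):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with nested (double) quantization and
# bfloat16 compute, matching the settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```
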
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64 (train_batch_size 16 × gradient_accumulation_steps 4)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 250
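
These settings correspond roughly to the following `TrainingArguments` (a sketch under stated assumptions: `output_dir` is a placeholder, `bf16=True` is inferred from the bfloat16 compute dtype, and the optimizer is the `transformers` default AdamW with the betas/epsilon listed above):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="MSc_llama2_finetuned_model_secondData3",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # effective batch size 16 * 4 = 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=250,
    bf16=True,  # assumed from bnb_4bit_compute_dtype=bfloat16
    evaluation_strategy="steps",
    eval_steps=10,  # matches the 10-step evaluation cadence below
)
```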

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9959        | 1.33  | 10   | 3.6441          |
| 3.376         | 2.67  | 20   | 2.9540          |
| 2.6309        | 4.0   | 30   | 2.1878          |
| 1.9511        | 5.33  | 40   | 1.7018          |
| 1.6088        | 6.67  | 50   | 1.4477          |
| 1.3311        | 8.0   | 60   | 1.1180          |
| 0.9548        | 9.33  | 70   | 0.8536          |
| 0.8083        | 10.67 | 80   | 0.8007          |
| 0.7523        | 12.0  | 90   | 0.7644          |
| 0.6984        | 13.33 | 100  | 0.7371          |
| 0.6687        | 14.67 | 110  | 0.7191          |
| 0.634         | 16.0  | 120  | 0.7079          |
| 0.6045        | 17.33 | 130  | 0.6965          |
| 0.592         | 18.67 | 140  | 0.6902          |
| 0.5726        | 20.0  | 150  | 0.6868          |
| 0.5574        | 21.33 | 160  | 0.6824          |
| 0.5398        | 22.67 | 170  | 0.6813          |
| 0.5364        | 24.0  | 180  | 0.6807          |
| 0.5288        | 25.33 | 190  | 0.6793          |
| 0.5219        | 26.67 | 200  | 0.6790          |
| 0.5217        | 28.0  | 210  | 0.6796          |
| 0.5175        | 29.33 | 220  | 0.6785          |
| 0.5117        | 30.67 | 230  | 0.6787          |
| 0.5167        | 32.0  | 240  | 0.6780          |
| 0.5136        | 33.33 | 250  | 0.6783          |


### Framework versions

- PEFT 0.4.0
- Transformers 4.38.2
- Pytorch 2.3.1+cu121
- Datasets 2.13.1
- Tokenizers 0.15.2