bilkultheek committed · Commit 33ca814 (verified) · 1 Parent(s): d8cc57c

Model save

Files changed (1): README.md (+3 -3)
README.md CHANGED

@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4914
+- Loss: 0.4892
 
 ## Model description
 
@@ -37,7 +37,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
+- learning_rate: 2e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
@@ -53,7 +53,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 0.4582        | 4.0404 | 100  | 0.4914          |
+| 0.4494        | 4.0404 | 100  | 0.4892          |
 
 
 ### Framework versions
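
The change amounts to bumping the learning rate from 1e-05 to 2e-05, which shifted the best validation loss from 0.4914 to 0.4892. As a minimal sketch, the hyperparameters listed in the diff would map onto `transformers.TrainingArguments` roughly as below; the output path, evaluation schedule, and epoch count are assumptions (the training script itself is not part of this commit), and only the values taken from the README diff are authoritative.

```python
# Sketch: mapping the hyperparameters from the README diff onto TrainingArguments.
# Anything not listed in the diff is a placeholder assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./llama2-7b-finetuned",  # placeholder path (assumption)
    learning_rate=2e-5,                  # from the diff (was 1e-5)
    per_device_train_batch_size=8,       # train_batch_size: 8
    per_device_eval_batch_size=8,        # eval_batch_size: 8
    seed=42,                             # seed: 42
    evaluation_strategy="steps",         # assumption: evaluate by steps
                                         # (named eval_strategy in newer transformers releases)
    eval_steps=100,                      # consistent with Step = 100 in the results table
    num_train_epochs=5,                  # assumption: covers the 4.04 epochs shown above
)
```

Note that a per-device batch size of 8 matches the listed train/eval batch sizes only on a single device; with multiple GPUs or gradient accumulation the effective batch size would differ from what the card reports.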