nthakur committed · verified
Commit 0133982 · 1 Parent(s): 3b4ad8e

Model save

README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ base_model: mistralai/Mistral-7B-Instruct-v0.2
+ model-index:
+ - name: Mistral-7B-Instruct-v0.2-multilingual-dpo-v1.0-v2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Mistral-7B-Instruct-v0.2-multilingual-dpo-v1.0-v2
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1328
+ - Rewards/chosen: -2.7014
+ - Rewards/rejected: -12.3976
+ - Rewards/accuracies: 0.9383
+ - Rewards/margins: 9.6962
+ - Logps/rejected: -1531.6849
+ - Logps/chosen: -609.8344
+ - Logits/rejected: 0.5002
+ - Logits/chosen: 0.3026
+
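+ For context, the `Rewards/*` metrics above are the implicit DPO rewards logged by TRL, shown here with the standard DPO definitions (this formula is the generic DPO formulation rather than anything recorded in this repository; β is TRL's `beta` hyperparameter and π_ref is the frozen reference model):
+
+ $$ r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, \qquad \mathcal{L}_{\mathrm{DPO}} = -\log \sigma\big( r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}}) \big) $$
+
+ `Rewards/margins` is the mean of the difference inside σ(·), and `Rewards/accuracies` is the fraction of evaluation pairs for which that difference is positive.
+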
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
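+ Pending a fuller description, here is a minimal inference sketch. The Hub repo id below is an assumption (it mirrors the model name above), as are the dtype and the example prompt:
+
+ ```python
+ import torch
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer
+
+ # Assumed adapter location on the Hub -- adjust to the actual repo id.
+ adapter_id = "nthakur/Mistral-7B-Instruct-v0.2-multilingual-dpo-v1.0-v2"
+
+ # AutoPeftModelForCausalLM reads adapter_config.json, downloads the base
+ # mistralai/Mistral-7B-Instruct-v0.2 weights, and attaches the LoRA adapter.
+ model = AutoPeftModelForCausalLM.from_pretrained(
+     adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
+
+ messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
+ input_ids = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
+ print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
+ ```
+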
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a hedged TRL sketch follows the list):
+ - learning_rate: 0.0002
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 3
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 24
+ - total_eval_batch_size: 12
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
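+ The sketch below shows a TRL setup matching these hyperparameters. The dataset, LoRA settings, precision, and `beta` are assumptions (none are recorded in this card), and `DPOConfig` assumes a TRL version contemporary with Transformers 4.41 (trl >= 0.9):
+
+ ```python
+ import torch
+ from datasets import Dataset
+ from peft import LoraConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import DPOConfig, DPOTrainer
+
+ # Toy stand-in for the (unrecorded) preference data: TRL's DPOTrainer
+ # expects "prompt", "chosen", and "rejected" string columns.
+ train_dataset = Dataset.from_dict({
+     "prompt": ["Translate 'hello' to French."],
+     "chosen": ["Bonjour."],
+     "rejected": ["Hola."],
+ })
+
+ args = DPOConfig(
+     output_dir="Mistral-7B-Instruct-v0.2-multilingual-dpo-v1.0-v2",
+     learning_rate=2e-4,
+     per_device_train_batch_size=4,  # x 3 GPUs x 2 accumulation steps = 24 effective
+     per_device_eval_batch_size=4,
+     gradient_accumulation_steps=2,
+     num_train_epochs=1,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     seed=42,
+     beta=0.1,   # TRL default; the actual value is not recorded in the card
+     bf16=True,  # assumption; the training precision is not stated either
+ )
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.bfloat16
+ )
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
+
+ trainer = DPOTrainer(
+     model=model,
+     args=args,
+     train_dataset=train_dataset,
+     tokenizer=tokenizer,
+     peft_config=LoraConfig(task_type="CAUSAL_LM"),  # real rank/alpha/targets unknown
+ )
+ trainer.train()  # with peft_config set, the frozen base model serves as the reference
+ ```
+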
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.2695 | 0.1361 | 500 | 0.2653 | -0.4399 | -4.5379 | 0.8680 | 4.0981 | -745.7153 | -383.6803 | -1.3998 | -1.5327 |
+ | 0.4349 | 0.2723 | 1000 | 0.3152 | -2.6018 | -7.1212 | 0.8515 | 4.5195 | -1004.0471 | -599.8698 | 4.1724 | 4.7868 |
+ | 0.531 | 0.4084 | 1500 | 0.4873 | -2.4253 | -8.0681 | 0.7855 | 5.6428 | -1098.7278 | -582.2241 | -1.5195 | -1.6538 |
+ | 0.1681 | 0.5446 | 2000 | 0.2003 | -3.9555 | -13.1169 | 0.9089 | 9.1613 | -1603.6106 | -735.2488 | -0.1888 | -0.3742 |
+ | 0.1778 | 0.6807 | 2500 | 0.2004 | -3.4745 | -11.9768 | 0.9242 | 8.5023 | -1489.6012 | -687.1464 | -0.7118 | -0.9608 |
+ | 0.1342 | 0.8169 | 3000 | 0.1452 | -3.0928 | -12.8477 | 0.9340 | 9.7549 | -1576.6960 | -648.9738 | 0.6727 | 0.5428 |
+ | 0.1252 | 0.9530 | 3500 | 0.1328 | -2.7014 | -12.3976 | 0.9383 | 9.6962 | -1531.6849 | -609.8344 | 0.5002 | 0.3026 |
+
+
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.41.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:25dfdbc004a1ce10e3143608179364a9f6dbb9f049ba74f91c028475d106dc7c
+ oid sha256:c2afac8cf6c538dd6eefd4d61b5b934d84561eda7ca8d7d5fc1d30b679391c1d
  size 83946192
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 0.9998638529611981,
+     "total_flos": 0.0,
+     "train_loss": 0.32278433931516665,
+     "train_runtime": 141933.118,
+     "train_samples": 88140,
+     "train_samples_per_second": 0.621,
+     "train_steps_per_second": 0.026
+ }
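
As a quick consistency check, these throughput figures agree with the hyperparameters in the README (a sketch; the effective batch size of 24 is taken from the training setup above):

```python
# Cross-check the reported throughput in all_results.json.
train_runtime = 141933.118   # seconds
train_samples = 88140
effective_batch = 24         # 4 per device x 3 GPUs x 2 grad-accum steps

print(train_samples / train_runtime)  # ~0.621 -> matches train_samples_per_second
steps = train_samples / effective_batch
print(steps)                          # ~3672.5 optimizer steps in one epoch
print(steps / train_runtime)          # ~0.0259 -> matches train_steps_per_second (0.026)
```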
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 0.9998638529611981,
+     "total_flos": 0.0,
+     "train_loss": 0.32278433931516665,
+     "train_runtime": 141933.118,
+     "train_samples": 88140,
+     "train_samples_per_second": 0.621,
+     "train_steps_per_second": 0.026
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff