This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the Emotion dataset.
## Model description

Article: https://ai.plainenglish.io/fine-tuning-the-mistral-7b-instruct-v0-1-model-with-the-emotion-dataset-c84c50b553dc

Fine-tuning notebook: https://github.com/frank-morales2020/MLxDL/blob/main/FineTuning_Mistral_7b_hfdeployment_dataset_Emotion.ipynb

Evaluation notebook: https://github.com/frank-morales2020/MLxDL/blob/main/FineTunning_Testing_For_EmotionQADataset.ipynb
## Intended uses & limitations

More information needed
## Training and evaluation data

Evaluation notebook: https://github.com/frank-morales2020/MLxDL/blob/main/FineTunning_Testing_For_EmotionQADataset.ipynb

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
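These hyperparameters map directly onto Hugging Face `transformers.TrainingArguments` keyword names. The sketch below shows that mapping as a plain dict; it is an assumed reconstruction, not code taken from the fine-tuning notebook linked above, which may differ in detail.

```python
# Hyperparameters from the list above, expressed as the keyword
# arguments one would pass to `transformers.TrainingArguments`.
training_kwargs = {
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 3,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-8,
    "lr_scheduler_type": "constant",
    "warmup_ratio": 0.03,
}

# "total_train_batch_size: 6" is a derived value, not a separate knob:
# per-device batch size times the number of accumulation steps.
total_train_batch_size = (
    training_kwargs["per_device_train_batch_size"]
    * training_kwargs["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 6
```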

Accuracy on a 2000-sample slice of the evaluation set, by number of training epochs:

| num_epochs | Accuracy |
|-----------:|---------:|
| 1          | 59.45%   |
| 25         | 79.95%   |
| 40         | 80.70%   |
| 40         | 80%      |

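The accuracy figures above amount to the fraction of predicted emotion labels that match the gold labels over the evaluated sample. A minimal sketch of that computation, assuming list-of-labels inputs (the function and label names are illustrative, not taken from the evaluation notebook):

```python
def accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference labels."""
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Toy example using label names from the Emotion dataset:
preds = ["joy", "sadness", "anger", "joy"]
golds = ["joy", "sadness", "fear", "joy"]
print(f"{accuracy(preds, golds):.2%}")  # 75.00%
```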
## Training procedure