# NYCU-DL2024-ZH-Reading-Comprehension
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the DandinPower/ZH-Reading-Comprehension-Llama-Instruct dataset. Its results on the evaluation set are reported in the training results table below.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
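## How to use

A minimal inference sketch is given below. The repository id is not stated in this card, so the id in the snippet is a placeholder, and the prompt format is assumed to be the Llama 3 Instruct chat template inherited from the base model.

```python
# Minimal inference sketch. The repo id below is a placeholder -- replace it
# with the actual fine-tuned checkpoint. The chat template is assumed to be
# the Llama 3 Instruct template inherited from the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DandinPower/your-finetuned-model"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# An illustrative Traditional-Chinese reading-comprehension style prompt.
messages = [
    {"role": "user", "content": "請閱讀以下文章並回答問題……"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```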
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.105         | 0.3690 | 250  | 0.0762          |
| 0.0716        | 0.7380 | 500  | 0.0897          |
| 0.0652        | 1.1070 | 750  | 0.0832          |
| 0.061         | 1.4760 | 1000 | 0.0640          |
| 0.0373        | 1.8450 | 1250 | 0.0813          |
| 0.0344        | 2.2140 | 1500 | 0.0686          |
| 0.0207        | 2.5830 | 1750 | 0.0662          |
| 0.0351        | 2.9520 | 2000 | 0.0669          |
| 0.0028        | 3.3210 | 2250 | 0.0996          |
| 0.0101        | 3.6900 | 2500 | 0.0718          |
| 0.0044        | 4.0590 | 2750 | 0.0825          |
| 0.0123        | 4.4280 | 3000 | 0.0969          |
| 0.0031        | 4.7970 | 3250 | 0.0924          |
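Validation loss was logged every 250 optimizer steps. As a small sketch (assuming matplotlib is available), the curves can be re-plotted directly from the values in the table:

```python
# Re-plot the logged training/validation loss from the table above.
# The values are copied verbatim from the table; matplotlib is assumed available.
import matplotlib.pyplot as plt

steps      = [250, 500, 750, 1000, 1250, 1500, 1750, 2000, 2250, 2500, 2750, 3000, 3250]
train_loss = [0.105, 0.0716, 0.0652, 0.061, 0.0373, 0.0344, 0.0207, 0.0351,
              0.0028, 0.0101, 0.0044, 0.0123, 0.0031]
val_loss   = [0.0762, 0.0897, 0.0832, 0.0640, 0.0813, 0.0686, 0.0662, 0.0669,
              0.0996, 0.0718, 0.0825, 0.0969, 0.0924]

plt.plot(steps, train_loss, marker="o", label="training loss")
plt.plot(steps, val_loss, marker="o", label="validation loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.tight_layout()
plt.savefig("loss_curves.png")
```

Plotted this way, the table's trend is easier to read: training loss keeps falling toward zero while validation loss bottoms out at 0.0640 around step 1000 (epoch ~1.5) and drifts upward in later epochs, which is consistent with mild overfitting toward the end of training.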