# lmind_nq_train6000_eval6489_v1_qa_meta-llama_Llama-2-7b-chat-hf_lora2
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the tyzhu/lmind_nq_train6000_eval6489_v1_qa dataset. It achieves the following results on the evaluation set:
- Loss: 1.9837
- Accuracy: 0.5974
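
The `lora2` suffix in the repository name suggests this checkpoint is a LoRA adapter rather than a full set of model weights. If so, a minimal loading sketch using `peft` and `transformers`; the repository ids come from this card, while the example prompt is an invented NQ-style question, since the actual prompt format is not documented here:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "tyzhu/lmind_nq_train6000_eval6489_v1_qa_meta-llama_Llama-2-7b-chat-hf_lora2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # overlay the LoRA weights

# Invented NQ-style question; the training prompt format is not stated in this card.
inputs = tokenizer("Who wrote the novel Dracula?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```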
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10.0
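
As a reference for reproduction, a minimal sketch of how the values above map onto the `transformers` `TrainingArguments` API; the output directory is illustrative (not the original path), the single-device assumption comes from total_train_batch_size = 8 × 4, and the optimizer line above matches the Trainer's default AdamW settings:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2_7b_chat_nq_lora2",  # illustrative, not the original path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 per device x 4 steps (x 1 device) = 32 total
    lr_scheduler_type="constant",
    warmup_ratio=0.05,              # recorded, but a constant schedule applies no warmup
    num_train_epochs=10.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```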
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8687 | 1.0  | 187  | 1.3245 | 0.6109 |
| 1.2052 | 2.0  | 375  | 1.3271 | 0.6131 |
| 0.9568 | 3.0  | 562  | 1.4014 | 0.6095 |
| 0.7696 | 4.0  | 750  | 1.5195 | 0.6054 |
| 0.6348 | 5.0  | 937  | 1.6407 | 0.6016 |
| 0.5592 | 6.0  | 1125 | 1.7334 | 0.5997 |
| 0.5166 | 7.0  | 1312 | 1.8043 | 0.5997 |
| 0.4911 | 8.0  | 1500 | 1.9042 | 0.5991 |
| 0.4494 | 9.0  | 1687 | 1.9244 | 0.5984 |
| 0.4399 | 9.97 | 1870 | 1.9837 | 0.5974 |
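
Validation loss is lowest after the first epoch (1.3245) and rises steadily while training loss keeps falling, the usual signature of overfitting to the training split. For readers who think in perplexity, the conversion is `exp(loss)`, assuming the reported values are mean per-token cross-entropy (which this card does not state explicitly):

```python
import math

# perplexity = exp(cross-entropy); assumes the reported losses are
# mean per-token cross-entropy, which the card does not state explicitly.
for epoch, loss in [(1.0, 1.3245), (9.97, 1.9837)]:
    print(f"epoch {epoch}: loss {loss:.4f} -> perplexity {math.exp(loss):.2f}")
# epoch 1.0: loss 1.3245 -> perplexity 3.76
# epoch 9.97: loss 1.9837 -> perplexity 7.27
```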
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
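
To check a local environment against the versions listed above, a small sketch (package names follow the listed libraries; nothing here is specific to this model):

```python
import datasets
import tokenizers
import torch
import transformers

# Versions reported in this card, compared against the local installs.
expected = {
    "transformers": "4.34.0",
    "torch": "2.1.0+cu121",
    "datasets": "2.18.0",
    "tokenizers": "0.14.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    flag = "" if have == want else "  <-- mismatch"
    print(f"{name}: expected {want}, found {have}{flag}")
```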