Llama0-3-8b-ultra-p-0.025
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5336
- Rewards/chosen: -0.9176
- Rewards/rejected: -1.7954
- Rewards/accuracies: 0.7266
- Rewards/margins: 0.8778
- Logps/rejected: -444.2013
- Logps/chosen: -348.3104
- Logits/rejected: 0.6139
- Logits/chosen: 0.4767
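The rewards/* and logps/* columns above follow the logging convention of TRL's DPOTrainer, which suggests (though the card does not state) that this checkpoint was trained with DPO. Under that reading, each reward is the implicit DPO reward: beta times the log-probability ratio between the policy and the reference model. A minimal sketch of that relationship, assuming TRL's default beta of 0.1:

```python
# Hedged sketch of the metric semantics above, assuming DPO-style training.
# beta = 0.1 is TRL's default and is NOT stated in this card.
def implicit_reward(policy_logp: float, ref_logp: float, beta: float = 0.1) -> float:
    """Implicit DPO reward: beta * log(pi_theta(y|x) / pi_ref(y|x))."""
    return beta * (policy_logp - ref_logp)

# rewards/margins    = mean over pairs of (reward_chosen - reward_rejected)
# rewards/accuracies = fraction of pairs where reward_chosen > reward_rejected
# The reference logps below are back-solved from the logged rewards under the
# assumed beta; they are illustrative, not taken from the card.
reward_chosen = implicit_reward(policy_logp=-348.31, ref_logp=-339.13)    # ~ -0.9176
reward_rejected = implicit_reward(policy_logp=-444.20, ref_logp=-426.25)  # ~ -1.7954
print(reward_chosen - reward_rejected)  # ~ 0.878, matching Rewards/margins
```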
Model description
More information needed
Intended uses & limitations
More information needed
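Pending fuller documentation, the sketch below shows one plausible way to query the model for chat completions with transformers. The Hub repo id is a placeholder (the card does not state where the checkpoint is published), and the generation settings are illustrative:

```python
# Minimal inference sketch; the repo id is a placeholder, since the card does
# not state where this checkpoint is published.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Llama0-3-8b-ultra-p-0.025"  # placeholder: substitute the real Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The base model is Llama-3-8B-Instruct, so its chat template applies.
messages = [{"role": "user", "content": "Summarize direct preference optimization in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```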
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
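For reproducibility, here is a hedged reconstruction of the training setup. That this run used TRL's DPOTrainer is inferred from the metric names, not stated in the card; the dataset, beta, and precision flags are likewise assumptions (the "ultra" in the model name may hint at an UltraFeedback-style preference set, but the card lists the data as unknown):

```python
# Hedged reconstruction of the training run; the trainer, dataset, beta, and
# precision are assumptions. Only the listed hyperparameters come from the card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder: the actual preference data is undocumented ("unknown dataset").
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="Llama0-3-8b-ultra-p-0.025",
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # 2 per device x 8 GPUs x 8 steps = 128 effective
    num_train_epochs=2.0,
    lr_scheduler_type="linear",
    seed=42,
    beta=0.1,   # assumption: TRL's default, not stated in the card
    bf16=True,  # assumption: precision is not stated in the card
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # the reference model defaults to a frozen copy of `model`
)
trainer.train()
```

Launched across 8 GPUs (for example with accelerate launch), this matches the 128-example total train batch size listed above.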
Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.6219 | 0.2060 | 100 | 0.6185 | -0.3528 | -0.5814 | 0.6797 | 0.2287 | -322.8062 | -291.8294 | 0.3035 | 0.2418 |
| 0.5957 | 0.4119 | 200 | 0.5909 | -0.4280 | -0.7949 | 0.6875 | 0.3669 | -344.1528 | -299.3566 | 0.2675 | 0.1885 |
| 0.5718 | 0.6179 | 300 | 0.5747 | -0.5426 | -1.0432 | 0.6797 | 0.5006 | -368.9808 | -310.8154 | 0.4283 | 0.3112 |
| 0.5604 | 0.8239 | 400 | 0.5587 | -0.5860 | -1.1683 | 0.7031 | 0.5824 | -381.4986 | -315.1523 | 0.4728 | 0.3510 |
| 0.5212 | 1.0299 | 500 | 0.5435 | -0.7801 | -1.5211 | 0.7344 | 0.7410 | -416.7767 | -334.5627 | 0.5275 | 0.3937 |
| 0.4671 | 1.2358 | 600 | 0.5421 | -0.9911 | -1.8546 | 0.7109 | 0.8635 | -450.1224 | -355.6647 | 0.6644 | 0.5299 |
| 0.4778 | 1.4418 | 700 | 0.5348 | -0.9156 | -1.7871 | 0.7266 | 0.8715 | -443.3745 | -348.1178 | 0.6300 | 0.4953 |
| 0.4791 | 1.6478 | 800 | 0.5330 | -0.9374 | -1.8128 | 0.7266 | 0.8754 | -445.9473 | -350.2950 | 0.6640 | 0.5278 |
| 0.4831 | 1.8538 | 900 | 0.5340 | -0.9312 | -1.8200 | 0.7266 | 0.8888 | -446.6618 | -349.6732 | 0.6213 | 0.4847 |
Framework versions
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.20.0