---
license: mit
base_model: microsoft/Phi-3-small-8k-instruct
tags:
- generated_from_trainer
model-index:
- name: phi3-spin-zephyr-data
  results: []
---

# phi3-spin-zephyr-data

This model is a fine-tuned version of [microsoft/Phi-3-small-8k-instruct](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1643
- Rewards/real: -4.3165
- Rewards/generated: -36.8197
- Rewards/accuracies: 0.9626
- Rewards/margins: 32.5032
- Logps/generated: -659.6320
- Logps/real: -295.5523
- Logits/generated: -inf
- Logits/real: -inf

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/real | Rewards/generated | Rewards/accuracies | Rewards/margins | Logps/generated | Logps/real | Logits/generated | Logits/real |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------------:|:------------------:|:---------------:|:---------------:|:----------:|:----------------:|:-----------:|
| 0.3303        | 0.32  | 500  | 0.2003          | -4.8459      | -23.8426          | 0.9371             | 18.9967         | -529.8613       | -300.8461  | -inf             | -inf        |
| 0.0933        | 0.64  | 1000 | 0.1598          | -4.6590      | -34.8525          | 0.9610             | 30.1935         | -639.9600       | -298.9768  | -inf             | -inf        |
| 0.2065        | 0.96  | 1500 | 0.1643          | -4.3165      | -36.8197          | 0.9626             | 32.5032         | -659.6320       | -295.5523  | -inf             | -inf        |

### Framework versions

- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
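
### Metric conventions (sketch)

The `Rewards/*` columns above follow the DPO-style convention used by preference trainers such as TRL's `DPOTrainer`, which SPIN-style self-play fine-tuning builds on: each reward is the policy-vs-reference log-probability ratio scaled by a coefficient beta, and the margin is simply `Rewards/real - Rewards/generated` (e.g. -4.3165 - (-36.8197) = 32.5032 in the final eval row). A minimal sketch of that bookkeeping, assuming this convention; the `beta` value and tensor names here are illustrative, not taken from the actual training code:

```python
import torch

def dpo_style_rewards(policy_logps_real, ref_logps_real,
                      policy_logps_gen, ref_logps_gen, beta=0.1):
    """Sketch of how the logged Rewards/* metrics are typically derived.

    Assumes the TRL/DPO convention: reward = beta * (policy logp - reference logp),
    computed per sequence. The beta used for this run is not recorded in the card.
    """
    rewards_real = beta * (policy_logps_real - ref_logps_real)
    rewards_generated = beta * (policy_logps_gen - ref_logps_gen)
    margins = rewards_real - rewards_generated
    # Rewards/accuracies is the fraction of pairs where the real (human)
    # response out-scores the model-generated one.
    accuracies = (margins > 0).float().mean()
    return rewards_real, rewards_generated, margins, accuracies
```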
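
### Configuration sketch

The hyperparameter list above maps roughly onto a `transformers.TrainingArguments` configuration like the following. This is a hedged reconstruction (the actual training script and trainer class are not included in the card); the total batch size of 32 is the per-device batch size of 8 times 4 GPUs, and the precision setting is an assumption:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the run configuration from the card's list.
training_args = TrainingArguments(
    output_dir="phi3-spin-zephyr-data",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 4 GPUs = total train batch size 32
    per_device_eval_batch_size=8,    # x 4 GPUs = total eval batch size 32
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,  # assumption; the precision used is not recorded in the card
)
```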
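
### Usage sketch

A minimal loading and generation sketch, assuming the checkpoint is published under the repo id below (substitute the actual Hub id). Phi-3-small checkpoints use custom model and tokenizer code, so `trust_remote_code=True` is required:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phi3-spin-zephyr-data"  # hypothetical Hub id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # required by the Phi-3-small architecture
)

messages = [{"role": "user", "content": "Summarize self-play fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```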