---
license: other
datasets:
- Open-Orca/OpenOrca
- ehartford/wizard_vicuna_70k_unfiltered
tags:
- code
---

# core-prompt-reverser-opt-1.3b

This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on a mix of Wizard Vicuna, OpenOrca, and custom data (see below).
It achieves the following results on the evaluation set:
- Loss: 1.4784
- Accuracy: 0.6753

## Model description

The model is trained on two prompt formats: standard instruction following, and prompt reversal, in which the model reconstructs a prompt that could have produced a given response.

```
[INSTRUCTION] {your question}
[RESPONSE] {model response}
```

or

```
[RESPONSE] {response}
[REVERSED-PROMPT] {model prompt reversed}
```

## Intended uses & limitations

More information needed

## Training and evaluation data

A mix of Wizard Vicuna 70k (unfiltered), OpenOrca, and custom data.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0

### Training results

This model is still training; so far it has run through only about 5% of the total training data. Training is expected to finish on September 4.

### Framework versions

- Transformers 4.33.0.dev0
- Pytorch 2.1.0.dev20230605+cu121
- Datasets 2.14.4
- Tokenizers 0.13.3
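## How to use

The card does not yet include a usage snippet, so here is a minimal inference sketch for the two prompt formats described above, using the `transformers` API. The model ID, example prompts, and generation settings are illustrative assumptions, not values from the original card.

```python
# Minimal inference sketch; the prompt markers follow the formats in
# "Model description", but the model ID and generation settings below
# are assumptions, not the author's recommended values.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "core-prompt-reverser-opt-1.3b"  # assumed Hub/local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Standard instruction-following format:
prompt = "[INSTRUCTION] What is the capital of France? [RESPONSE]"

# Prompt-reversal format: given a response, reconstruct a likely prompt.
# prompt = "[RESPONSE] The capital of France is Paris. [REVERSED-PROMPT]"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```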
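## Training configuration (sketch)

For reference, the hyperparameters listed under "Training hyperparameters" map roughly onto the following `TrainingArguments`. This is a reconstruction from the listed values, not the author's actual training script; dataset preparation and the `Trainer` call are omitted.

```python
# Rough reconstruction of the listed hyperparameters as a transformers
# TrainingArguments object; an assumption based on the card, not the
# author's actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="core-prompt-reverser-opt-1.3b",  # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
)
```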