OpenOrca-Platypus2-13B-PirateLora

This repo contains a Low-Rank Adapter (LoRA) for OpenOrca-Platypus2-13B (float16), fit on a simple dataset comprised of thousands of pirate phrases, conversation pieces, and obscure expressions. The purpose of this LoRA was to determine whether dialect and diction can be enforced through the LoRA fine-tuning method. Results were much better than with the previous adapter we created for Llama 2, but this may be due to a combination of effects: the stronger performance of the base model compared to Llama 2, and the higher-quality training set relative to our previous effort.
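To use the adapter, load the base model and apply the LoRA weights on top with PEFT. The sketch below is a minimal example, assuming the standard `transformers`/`peft` loading pattern; the repo IDs are illustrative placeholders and should be replaced with the actual Hugging Face paths for the base model and this adapter.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder repo IDs -- substitute the real HF paths.
base_id = "Open-Orca/OpenOrca-Platypus2-13B"       # assumed base model repo
adapter_id = "your-org/OpenOrca-Platypus2-13B-PirateLora"  # this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,  # matches the float16 base weights
    device_map="auto",
)

# Wrap the base model with the LoRA adapter weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Tell me about the weather today."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the adapter only stores low-rank weight deltas, it can also be merged into the base model with `model.merge_and_unload()` for adapter-free inference.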
