---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
language:
- en
---

Full-weight fine-tune on two epochs of [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Uses Mistral Instruct's prompt format.

The base model came from a variation on Undi's [Mistral 11B recipe](https://huggingface.co/Undi95/Mistral-11B-v0.1): the `o_proj` and `down_proj` tensors in the added layers were set to zero, so before training the model's output was exactly identical to Mistral 7B's.

Benchmarks look good locally, but actual usefulness is still being evaluated.
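For reference, the Mistral Instruct format wraps each user turn in `[INST] ... [/INST]` tags (the BOS token is normally prepended by the tokenizer). A minimal sketch of building a single-turn prompt, with the helper name being illustrative:

```python
def build_prompt(user_message: str) -> str:
    # Mistral Instruct format: the user turn is wrapped in [INST] ... [/INST];
    # the model's reply follows the closing tag. The <s> BOS token is
    # typically added by the tokenizer, so it is omitted here.
    return f"[INST] {user_message} [/INST]"

prompt = build_prompt("Summarize the SlimOrca dataset in one sentence.")
print(prompt)
```

When using `transformers`, applying the tokenizer's built-in chat template for Mistral Instruct models should produce the same layout.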