# PsyOrca2-DARE-13b
This is a Llama 2-based model consisting of a merge between:
- KoboldAI/Psyfighter-2-13B (the FP16 weights are not yet publicly available, but the merge config is)
- microsoft/Orca-2-13b (with its vocab size fixed by merging on llama-2-13b)
The goal of this merge is to test out the DARE merge algorithm and see how it works with these two models.
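For background, DARE randomly drops entries of each fine-tune's delta against the base model and rescales the survivors so the sparse delta keeps the same expected value. A minimal sketch in PyTorch, assuming plain per-tensor arithmetic rather than mergekit's internals:

```python
import torch

def dare_delta(finetuned: torch.Tensor, base: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale: randomly keep a `density` fraction of the task
    vector (finetuned - base) and rescale survivors by 1/density so the
    expected value of the delta is unchanged."""
    delta = finetuned - base
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Illustrative combination mirroring the weights and densities from the
# config below. The real dare_ties method also applies TIES sign
# election across models, which is omitted here for brevity.
def merge(base, psyfighter, orca):
    merged = base.clone()
    merged = merged + 1.0 * dare_delta(psyfighter, base, density=1.0)
    merged = merged + 0.05 * dare_delta(orca, base, density=0.30)
    return merged
```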
Mergekit config (inspired by Charles Goddard):
```yaml
models:
  - model: KoboldAI/Psyfighter-2-13B
    parameters:
      weight: 1
      density: 1
  - model: microsoft/Orca-2-13b
    parameters:
      weight: 0.05
      density: 0.30
merge_method: dare_ties
base_model: meta-llama/Llama-2-13b-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
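To reproduce the merge, this config can be saved to a file and run through mergekit's `mergekit-yaml` entry point. A sketch, assuming a config saved as `psyorca2.yml` and an output directory name:

```sh
pip install mergekit  # or install from the mergekit GitHub repository
mergekit-yaml psyorca2.yml ./PsyOrca2-DARE-13b --cuda
```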
## Usage
This model will most likely follow the Alpaca instruct format. It may also follow ChatML (the prompt format Orca 2 was trained with) due to having Orca 2 merged in; examples of both are shown below.
Alpaca:
```
### Instruction:
<prompt>

### Response:
<leave a newline blank for model to respond>
```
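ChatML (the tokens shown follow the common ChatML spec; verify them against this model's tokenizer):

```
<|im_start|>system
<system prompt><|im_end|>
<|im_start|>user
<prompt><|im_end|>
<|im_start|>assistant
```

For programmatic use, a minimal sketch of loading and prompting the model with transformers; the repo id below is an assumption based on this card's title:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kingbri/PsyOrca2-DARE-13b"  # hypothetical repo id, adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Alpaca-style prompt as described above
prompt = "### Instruction:\nWrite a short scene introduction.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```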
## Bias, Risks, and Limitations
In addition to the biases exhibited by the base model, this model will show biases similar to those observed in niche roleplaying forums on the Internet. It is not intended to supply factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the merged models for details.
## Donate?
All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri
You should not feel obligated to donate, but if you do, I'd appreciate it.