---
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- grimjim/Llama-3-Oasis-v1-OAS-8B
- failspy/Llama-3-8B-Instruct-MopeyMule
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

The goal of this merge was to make an RP model better suited for role-plays with heavy themes, such as but not limited to:

- Mental illness
- Self-harm
- Trauma
- Suicide

I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes, but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) this problem has been lessened considerably. I haven't done extensive testing yet, but what little I have done shows noticeably good results in toning down how positive the model is in its writing style. If you're an enjoyer of savior/reverse-savior type role-plays like myself, then this model is for you.

### Merge Method

This model was merged using several Task Arithmetic merges, which were then tied together with a Model Stock merge.
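The two-stage pipeline can be sketched numerically. This is a toy illustration, not mergekit's implementation: task arithmetic adds a weighted delta (the donor's "task vector" relative to the base) to the base weights, and the final step is simplified here to a plain average, whereas the real Model Stock method also uses geometric information about the weight vectors. All tensors below are made-up stand-in floats, not actual model weights.

```python
def task_arithmetic(base, donor, weight):
    """result = base + weight * (donor - base): blend a fraction of the
    donor's task vector (its delta from the base) into the base weights."""
    return [b + weight * (d - b) for b, d in zip(base, donor)]

def simple_average(models):
    """Stand-in for the Model Stock step, reduced to element-wise averaging."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

# Hypothetical per-parameter values standing in for the three RP base models
stheno    = [1.0, 2.0]
jamet     = [1.2, 1.8]
oasis     = [0.8, 2.2]
mopeymule = [0.0, 0.0]  # stands in for the MopeyMule donor

# The three Umbral task-arithmetic merges, using the weights from the configs
umbral_1 = task_arithmetic(stheno, mopeymule, 0.25)
umbral_2 = task_arithmetic(jamet,  mopeymule, 0.15)
umbral_3 = task_arithmetic(oasis,  mopeymule, 0.35)

# Final Model Stock-style combination of the three intermediates
final = simple_average([umbral_1, umbral_2, umbral_3])
print(final)  # ≈ [0.7633, 1.4867]
```

Note how each Umbral merge pulls its base only part of the way toward the donor (15–35%), so the final average retains most of each RP model's behavior while mixing in a consistent dose of MopeyMule's tone.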
### Models Merged

The following models were included in the merge:

* Casual-Autopsy/Umbral-1
  * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
  * [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule)
* Casual-Autopsy/Umbral-2
  * [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
  * [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule)
* Casual-Autopsy/Umbral-3
  * [grimjim/Llama-3-Oasis-v1-OAS-8B](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B)
  * [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule)

## Secret Sauce

The following YAML configurations were used to produce this model:

### Umbral-1

```yaml
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: failspy/Llama-3-8B-Instruct-MopeyMule
        layer_range: [0, 32]
        parameters:
          weight: 0.25
merge_method: task_arithmetic
base_model: Sao10K/L3-8B-Stheno-v3.2
dtype: bfloat16
```

### Umbral-2

```yaml
slices:
  - sources:
      - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
        layer_range: [0, 32]
      - model: failspy/Llama-3-8B-Instruct-MopeyMule
        layer_range: [0, 32]
        parameters:
          weight: 0.15
merge_method: task_arithmetic
base_model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
dtype: bfloat16
```

### Umbral-3

```yaml
slices:
  - sources:
      - model: grimjim/Llama-3-Oasis-v1-OAS-8B
        layer_range: [0, 32]
      - model: failspy/Llama-3-8B-Instruct-MopeyMule
        layer_range: [0, 32]
        parameters:
          weight: 0.35
merge_method: task_arithmetic
base_model: grimjim/Llama-3-Oasis-v1-OAS-8B
dtype: bfloat16
```

### Umbral-Mind-RP-8B

```yaml
models:
  - model: Casual-Autopsy/Umbral-1
  - model: Casual-Autopsy/Umbral-2
  - model: Casual-Autopsy/Umbral-3
merge_method: model_stock
base_model: Casual-Autopsy/Umbral-1
dtype: bfloat16
```