---
base_model:
- inflatebot/thorn-0.5
- inflatebot/thorn-0.35
- inflatebot/thorn-0.55
- inflatebot/thorn-0.45
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

![Made with NovelAI](https://huggingface.co/inflatebot/guns-and-roses-r1/resolve/main/guns%20and%20roses.png)

`Quickest draw in the West.`

A re-application of the Helium-3 process to Mistral Nemo analogues. Experimental (as you can tell from the revision number); I'll be playing with this more in the time to come.

Based ultimately on [Magnum-12B-V2](https://huggingface.co/anthracite-org/magnum-12b-v2) and [MN-12B-Rosier-v1](https://huggingface.co/Fizzarolli/MN-12b-Rosier-v1).

Q4 and Q6 quants are available from [Reiterate3680](https://huggingface.co/Reiterate3680/guns-and-roses-r1-GGUF/tree/main) while we wait on The Usual Suspects.

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [inflatebot/thorn-0.35](https://huggingface.co/inflatebot/thorn-0.35) as the base.

### Models Merged

The following models were included in the merge:
* [inflatebot/thorn-0.5](https://huggingface.co/inflatebot/thorn-0.5)
* [inflatebot/thorn-0.55](https://huggingface.co/inflatebot/thorn-0.55)
* [inflatebot/thorn-0.45](https://huggingface.co/inflatebot/thorn-0.45)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: inflatebot/thorn-0.5
  - model: inflatebot/thorn-0.45
  - model: inflatebot/thorn-0.55
merge_method: model_stock
base_model: inflatebot/thorn-0.35
dtype: bfloat16
```
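
For completeness, here's a minimal sketch of loading the merge with `transformers` (the card's declared library). The repo id is an assumption inferred from the image link above and may not match the final upload; everything else is standard `transformers` usage.

```python
# Minimal usage sketch, not part of the original card.
# The repo id is assumed from the image link ("inflatebot/guns-and-roses-r1").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inflatebot/guns-and-roses-r1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",           # requires accelerate; drop for CPU-only loading
)

prompt = "Quickest draw in the West."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```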