---
base_model:
- bunnycore/Llama-3.2-3B-TitanFusion
- roger33303/Best_Model-llama3.2-3b-Instruct-Finetune-website-QnA
- bunnycore/Llama-3.2-3B-Stock
- bunnycore/Llama-3.2-3B-Sci-Think
- noaebbot/llama3.2-3B-insights
- bunnycore/Llama-3.2-3B-Prodigy
- bunnycore/Llama-3.2-3B-TitanFusion-v2
- bunnycore/Llama-3.2-3B-Mix
- huihui-ai/Llama-3.2-3B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [huihui-ai/Llama-3.2-3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-3B-Instruct-abliterated) as the base model.

### Models Merged

The following models were included in the merge:

* [bunnycore/Llama-3.2-3B-TitanFusion](https://huggingface.co/bunnycore/Llama-3.2-3B-TitanFusion)
* [roger33303/Best_Model-llama3.2-3b-Instruct-Finetune-website-QnA](https://huggingface.co/roger33303/Best_Model-llama3.2-3b-Instruct-Finetune-website-QnA)
* [bunnycore/Llama-3.2-3B-Stock](https://huggingface.co/bunnycore/Llama-3.2-3B-Stock)
* [bunnycore/Llama-3.2-3B-Sci-Think](https://huggingface.co/bunnycore/Llama-3.2-3B-Sci-Think)
* [noaebbot/llama3.2-3B-insights](https://huggingface.co/noaebbot/llama3.2-3B-insights)
* [bunnycore/Llama-3.2-3B-Prodigy](https://huggingface.co/bunnycore/Llama-3.2-3B-Prodigy)
* [bunnycore/Llama-3.2-3B-TitanFusion-v2](https://huggingface.co/bunnycore/Llama-3.2-3B-TitanFusion-v2)
* [bunnycore/Llama-3.2-3B-Mix](https://huggingface.co/bunnycore/Llama-3.2-3B-Mix)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: bunnycore/Llama-3.2-3B-Prodigy
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Sci-Think
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Stock
    parameters:
      density: 0.5
      weight: 0.5
  - model: roger33303/Best_Model-llama3.2-3b-Instruct-Finetune-website-QnA
    parameters:
      density: 0.5
      weight: 0.5
  - model: noaebbot/llama3.2-3B-insights
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-TitanFusion-v2
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-TitanFusion
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/Llama-3.2-3B-Mix
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: huihui-ai/Llama-3.2-3B-Instruct-abliterated
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
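To reproduce the merge from this configuration, one possible route is mergekit's Python entry point. The sketch below is not part of the original card: it assumes the YAML above is saved locally as `config.yaml`, that a recent mergekit release is installed (`pip install mergekit`), and that the output path is a placeholder.

```python
# Sketch: reproducing this TIES merge via mergekit's Python API (assumed usage).
# Paths and option values are placeholders, not taken from the original card.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-llama-3.2-3b",  # placeholder output directory
    options=MergeOptions(
        cuda=True,            # set to False to merge on CPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
        lazy_unpickle=True,   # reduce peak memory while loading checkpoints
    ),
)
```

The equivalent `mergekit-yaml` command-line invocation documented in the mergekit README should produce the same result.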
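## Usage

Since the card lists `library_name: transformers`, the merged model can be loaded like any other Llama-3.2-3B checkpoint. This is a minimal sketch; the repository id is a placeholder and should be replaced with this model's actual Hub id or a local path to the merged weights.

```python
# Minimal sketch: loading the merged model with transformers.
# The repo id below is a placeholder, not the actual Hub id of this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/your-merged-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the TIES merge method in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```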