---
base_model:
- anthracite-org/magnum-v3-9b-chatml
- grimjim/Kitsunebi-v1-Gemma2-8k-9B
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
license: gemma
---
# Magot-v3-Gemma2-8k-9B

This repo contains a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This series of models is an attempt to combine the strengths of Instruct models with the creativity of Magnum models.

Tested at temperature=1, minP=0.01. This attempt seems more solid than earlier versions in the series. Coherence is moderately high. Unfortunately, some recency bias remains.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [anthracite-org/magnum-v3-9b-chatml](https://huggingface.co/anthracite-org/magnum-v3-9b-chatml)
* [grimjim/Kitsunebi-v1-Gemma2-8k-9B](https://huggingface.co/grimjim/Kitsunebi-v1-Gemma2-8k-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: grimjim/Kitsunebi-v1-Gemma2-8k-9B
  - model: anthracite-org/magnum-v3-9b-chatml
merge_method: slerp
base_model: grimjim/Kitsunebi-v1-Gemma2-8k-9B
parameters:
  t:
    - value: 0.2
dtype: bfloat16
```
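For reference on the `t` parameter above: SLERP interpolates each pair of weight tensors along the arc between the two models, so `t: 0.2` keeps the result most of the way toward the base model (Kitsunebi) and moves it 20% of the angular distance toward magnum. Below is a minimal illustrative sketch of that interpolation, not mergekit's actual implementation:

```python
# Illustrative SLERP between two weight tensors (not mergekit's code).
# With t=0.2, the result lies 20% of the angular distance from v0
# (the base model) toward v1.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    v0_unit = v0_flat / (np.linalg.norm(v0_flat) + eps)
    v1_unit = v1_flat / (np.linalg.norm(v1_flat) + eps)
    dot = np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to linear interpolation.
        return (1.0 - t) * v0 + t * v1
    sin_theta = np.sin(theta)
    w0 = np.sin((1.0 - t) * theta) / sin_theta
    w1 = np.sin(t * theta) / sin_theta
    return (w0 * v0_flat + w1 * v1_flat).reshape(v0.shape)

merged = slerp(0.2, np.random.randn(4, 4), np.random.randn(4, 4))
```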
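A minimal sketch of loading the merged model with transformers and sampling at the tested settings (temperature=1, minP=0.01). The repo id below is assumed from the model name and may differ; `min_p` sampling requires a recent transformers release:

```python
# Sketch: load the merged model and generate with the tested sampling settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/Magot-v3-Gemma2-8k-9B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # tested temperature
    min_p=0.01,       # tested minP
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```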