---
library_name: transformers
tags:
- llama-3
license: cc-by-nc-4.0
---

# Badger Mushroom 4x8b

I've been really impressed with how well these frankenmoe models quantize compared to the base llama 8b, while running far faster than the 70b. 8x8b seemed a bit unnecessary for the additional value it brought, so I dialed it back to a 4x8b version. This model feels pretty good out of the gate, which is a bit surprising considering I used a non-standard merge.

```
base_model: ./maldv/badger
gate_mode: hidden
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ./models/instruct/Llama-3-SauerkrautLM-8b-Instruct
    positive_prompts:
    negative_prompts:
  - source_model: ./models/instruct/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
    positive_prompts:
    negative_prompts:
  - source_model: ./models/instruct/Llama-3-8B-Instruct-DPO-v0.4
    positive_prompts:
    negative_prompts:
  - source_model: ./models/instruct/Poppy_Porpoise-0.72-L3-8B
    positive_prompts:
    negative_prompts:
```

### Badger

Badger is a cascading [fourier interpolation](./tensor.py#3) of the following models, with the merge order based on the layer cosine similarity:

```python
[
 'opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5',
 'Llama-3-SauerkrautLM-8b-Instruct',
 'Llama-3-8B-Instruct-DPO-v0.4',
 'Roleplay-Llama-3-8B',
 'Llama-3-Lumimaid-8B-v0.1',
 'Poppy_Porpoise-0.72-L3-8B',
 'L3-TheSpice-8b-v0.8.3',
 'Llama-3-LewdPlay-8B-evo',
 'Llama-3-8B-Instruct-norefusal',
 'Meta-Llama-3-8B-Instruct-DPO',
 'Llama-3-Soliloquy-8B-v2'
]
```

I'm finding my iq4_xs quant to be working well. The Llama 3 instruct format works really well, but a minimal format is also highly creative. So far it performs well in each of the four areas I've tested: roleplay, logic, writing, and assistant behaviors.

## Scores

TBD
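The referenced `tensor.py` is not reproduced here, so the following is only a hypothetical sketch of what the two ingredients named above might look like: a layer cosine similarity (which could drive the merge order) and a fourier-domain blend of two weight tensors (here, low frequencies from one tensor and high frequencies from the other). The function names, the `cutoff` parameter, and the band-split mixing scheme are all my assumptions, not the actual merge recipe.

```python
import numpy as np

def cosine_similarity(w_a: np.ndarray, w_b: np.ndarray) -> float:
    """Cosine similarity between two weight tensors, flattened to vectors."""
    a, b = w_a.ravel(), w_b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fourier_interpolate(w_a: np.ndarray, w_b: np.ndarray, cutoff: float = 0.5) -> np.ndarray:
    """Hypothetical fourier-domain blend of two same-shaped weight tensors.

    Takes the low-frequency components (below `cutoff`) from w_a and the
    high-frequency components from w_b, then inverts the transform.
    """
    fa = np.fft.rfft(w_a.ravel())
    fb = np.fft.rfft(w_b.ravel())
    k = int(cutoff * fa.size)
    mixed = np.concatenate([fa[:k], fb[k:]])
    return np.fft.irfft(mixed, n=w_a.size).reshape(w_a.shape)
```

In a cascading merge, one could sort candidate tensors by `cosine_similarity` against the running result and fold them in one at a time with `fourier_interpolate`; again, this ordering logic is an assumption based on the description above.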