Winter Garden 7B - α - "Smart Assistant"
It was mentioned that we are in the OpenAI dark winter, so I thought I would make myself a nice winter garden.
An experiment
I've merged four partitions successfully in the past, so let's go for 9! I started with:
- Mistral-7B-v0.1
and merged in
- OmniBeagleSquaredMBX-v3-7B
- ZySec-7B-v1
- Omningotex-7b-slerp
- Erosumika-7B
- LemonadeRP-4.5.3
- Thespis-Krangled-7b
- pastiche-crown-clown-7b-dare
- Snorkel-Mistral-PairRM-DPO
- multi_verse_model
9-partition merge
All of the layers were partitioned into 9 random bins. Alternating models were SLERPed with [0...1] and [1...0] gradients, except for attention, which was SLERPed at 0.03.
This means that the model is still predominantly ordered around base Mistral, including half of the input and output layers, and roughly 97% of the attention weights (since attention was only SLERPed 0.03 of the way toward each donor model).
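As a rough illustration of the weighting scheme (a sketch only, not the actual merge script; the layer count, random seed, and printout below are assumptions):

```python
import random
import numpy as np

N_LAYERS = 32   # Mistral-7B-v0.1 has 32 decoder layers
N_BINS = 9      # one bin per merged-in model
ATTN_T = 0.03   # attention stays almost entirely on the base weights
random.seed(0)

# Partition the layer indices into 9 random bins.
layers = list(range(N_LAYERS))
random.shuffle(layers)
bins = [sorted(layers[i::N_BINS]) for i in range(N_BINS)]

# Ramp the SLERP weight t across each bin's layers, alternating the
# gradient direction: [0...1] for even bins, [1...0] for odd bins.
schedule = {}  # layer index -> (bin/model index, t for the block, t for attention)
for b, layer_ids in enumerate(bins):
    ramp = np.linspace(0.0, 1.0, num=len(layer_ids))
    if b % 2 == 1:
        ramp = ramp[::-1]
    for layer, t in zip(layer_ids, ramp):
        schedule[layer] = (b, float(t), ATTN_T)

for layer in sorted(schedule):
    b, t_block, t_attn = schedule[layer]
    print(f"layer {layer:2d}: model {b}, t_block={t_block:.2f}, t_attn={t_attn:.2f}")
```

The actual merge would then apply these per-layer weights with whatever SLERP merge tooling was used; the snippet only shows how such a schedule is laid out.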
Other
Includes fast tokenizer.
Chat Template
I included a conversational chat template that takes "name", "to" (optional), and "content" for each turn. It is designed to follow the transcript-style chat used by some of the merged models. This kind of use case works best when you outline a scene and create a character card.
```
### {% title %}
{% metadata %}
USER: Hello
ASSISTANT: Hi, how are you?
```
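As a minimal sketch of driving that template through `transformers` (the speakers and scene text are made up, and this assumes the bundled Jinja template accepts these keys as-is):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("maldv/winter-garden-7b-alpha")

# Transcript-style turns: "name" is the speaker, "to" (optional) is the addressee.
chat = [
    {"name": "Narrator", "content": "A quiet winter garden at dusk. Two friends meet."},
    {"name": "USER", "to": "ASSISTANT", "content": "Hello"},
    {"name": "ASSISTANT", "to": "USER", "content": "Hi, how are you?"},
]

prompt = tokenizer.apply_chat_template(chat, tokenize=False)
print(prompt)
```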
It leans toward being a coder when given an `### Instruction`, follows `<s>[INST][/INST]`, and likes `<|user|>` / `<|assistant|>` as well.
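For reference, those alternate prompt formats look roughly like the strings below (a hedged sketch; exact spacing and newlines are not specified on the card):

```python
# Alpaca-style instruction prompt
alpaca = "### Instruction\nWrite a haiku about a winter garden.\n\n### Response\n"

# Mistral instruction tags
mistral = "<s>[INST] Write a haiku about a winter garden. [/INST]"

# ChatML-like user/assistant tags
chatml = "<|user|>\nWrite a haiku about a winter garden.\n<|assistant|>\n"
```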
Quite a cheery and intelligent model: very good with science and math, but still capable of a decent amount of creativity for a 7B model.
Scores
| Metric | Score |
|---|---|
| Average | 66.91 |
| ARC | 65.19 |
| HellaSwag | 85.36 |
| MMLU | 65.20 |
| TruthfulQA | 50.94 |
| Winogrande | 80.35 |
| GSM8K | 54.44 |
Evaluation results
- AI2 Reasoning Challenge (25-shot, test set): 65.19 normalized accuracy (Open LLM Leaderboard)
- HellaSwag (10-shot, validation set): 85.36 normalized accuracy (Open LLM Leaderboard)
- MMLU (5-shot, test set): 65.20 accuracy (Open LLM Leaderboard)
- TruthfulQA (0-shot, validation set): 50.94 mc2 (Open LLM Leaderboard)
- Winogrande (5-shot, validation set): 80.35 accuracy (Open LLM Leaderboard)
- GSM8K (5-shot, test set): 54.44 accuracy (Open LLM Leaderboard)