---
license: cc-by-nc-4.0
tags:
  - merge
  - conversational
  - multi-task
pipeline_tag: text-generation
---

# Winter Garden 7B - β

It was mentioned that we are in the open AI dark winter, so I thought I would make myself a nice winter garden.

## An experiment

I've merged four partitions successfully in the past, so let's go for 9! I started with:

  • Mistral-7B-v0.1

and merged in

  • ZySec-7B-v1
  • LemonadeRP-4.5.3
  • dpo-binarized-NeutrixOmnibe-7B
  • Multi-Verse-RP-7B
  • AlphaMonarch-7B
  • opus-v1.2-7b
  • Kunoichi-DPO-v2-7B
  • Noromaid-7B-0.4-DPO
  • ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v2

## 9-partition merge

All of the layers were partitioned into 9 random bins. Alternating models were slerped with [0...1] and [1...0] gradients, except attention, which was slerped at 0.03.

This means that the model is still predominantly ordered around base Mistral, including half of the input and output layers and 28% of attention.
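The per-tensor interpolation above can be sketched as spherical linear interpolation (slerp) between two weight tensors. This is a minimal illustrative implementation, not the actual merge tooling used for this model; the function name and flattened-tensor treatment are assumptions.

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherically interpolate between weight tensors a and b at fraction t.

    t=0 returns a, t=1 returns b; intermediate t follows the great-circle
    arc between the (normalized) tensors rather than a straight line.
    """
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.sum(a_n * b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel tensors: fall back to linear interpolation
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

A gradient like [0...1] just means `t` sweeps from 0 to 1 across the layers in a bin, so early layers stay close to one parent and later layers close to the other; the 0.03 value for attention keeps those weights almost entirely from the base model.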

## Other

Includes a fast tokenizer.

## Chat Template

I included a conversational chat template, which takes "name", "to" (optional), and "content" for each turn. It is designed to follow the transcript-style chat used by some of the merged models. This type of use case works best by outlining a scene and creating a character card.

```
### {% title %}
{% metadata %}

USER: Hello

ASSISTANT: Hi, how are you?
```
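As a rough sketch of how turns with "name", "to", and "content" keys render into that transcript format (the actual Jinja template shipped with the tokenizer may differ; this helper is purely illustrative):

```python
def render_turn(turn):
    """Render one turn dict ({"name", "to" (optional), "content"}) as a transcript line."""
    to = f" (to {turn['to']})" if turn.get("to") else ""
    return f"{turn['name']}{to}: {turn['content']}"

messages = [
    {"name": "USER", "content": "Hello"},
    {"name": "ASSISTANT", "content": "Hi, how are you?"},
]
# Turns are separated by a blank line, matching the example above.
print("\n\n".join(render_turn(t) for t in messages))
```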

Initial tests show this model is a real talker. If you prompt with `[WP] <prompt>\n\n`, it will take right off.

## Scores

| Metric | Score |
| ------ | ----- |