---
base_model:
  - mistralai/Mixtral-8x7B-v0.1
  - mistralai/Mixtral-8x7B-v0.1
  - Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
  - KoboldAI/Mixtral-8x7B-Holodeck-v1
  - jondurbin/bagel-dpo-8x7b-v0.2
  - mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
  - mergekit
  - merge
license: apache-2.0
---

# DonutHole-8x7B

GGUF versions here

Bagel, Mixtral Instruct, Holodeck, LimaRP.

What mysteries lie in the hole of a donut?

Works well with Alpaca prompt formats, and also with Mistral's format. See usage details below.


This is similar to BagelMIsteryTour, but I've swapped out Sensualize for the new Holodeck. I'm not yet sure whether it's better, or how it handles longer (8k+) contexts.

Similar sampler advice applies as for BMT: minP (0.07 - 0.3 to taste) -> temp (either dynatemp over roughly 0-4, or a static temp of 3-4 with a smoothing factor of around 2.5). And yes, that's temp last. It does okay without rep pen up to a point and doesn't seem to get into a complete jam, but it can start to repeat sentences, so you'll probably need some; around 1.02-1.05 over a 1024-token range seems okay. (Rep pen sucks, but better things are coming.)
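
For concreteness, here's a minimal sketch of what those settings might look like with a GGUF quant loaded through llama-cpp-python, whose default sampler chain already applies temperature last. The file name, context size, and exact values are placeholders to taste, and smoothing factor isn't shown, so this is the plain high-temp variant:

```python
# A minimal sketch, assuming a recent llama-cpp-python and a local GGUF quant
# of this model; the path and numbers below are illustrative, not canonical.
from llama_cpp import Llama

llm = Llama(
    model_path="DonutHole-8x7B.Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,
    last_n_tokens_size=1024,  # window the repetition penalty looks back over
)

prompt = "### Instruction:\nContinue the story.\n\n### Response:\n"

out = llm(
    prompt,
    max_tokens=512,
    min_p=0.1,            # minP, 0.07-0.3 to taste
    temperature=3.0,      # high temp, applied after minP in the default chain
    top_p=1.0,            # effectively disabled
    top_k=0,              # disabled
    repeat_penalty=1.03,  # light rep pen, ~1.02-1.05
)
print(out["choices"][0]["text"])
```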

I've mainly tested with LimaRP style Alpaca prompts (instruction/input/response), and briefly with Mistral's own format.
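
For reference, the LimaRP-style Alpaca layout looks roughly like this; the bracketed parts are just placeholders, not a required system prompt:

```
### Instruction:
{character persona, scenario, and style notes}

### Input:
{the user's message}

### Response:
{the model's reply}
```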

Full credit to all the model and dataset authors; I am but a derp with compute and a yaml file.


This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method using mistralai/Mixtral-8x7B-v0.1 as a base.
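
Roughly speaking, DARE TIES takes each model's delta from the base, randomly drops a fraction of each delta (controlled by density) and rescales what survives, then resolves sign conflicts TIES-style before adding the weighted sum back onto the base. A toy per-tensor sketch, not mergekit's actual implementation:

```python
# Illustrative per-tensor DARE TIES; mergekit's real code additionally handles
# sharded weights, LoRA merging, tokenizers, weight normalization, etc.
import torch

def dare_ties(base, finetuned, densities, weights):
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base
        # DARE: keep each element with probability `density`, rescale by 1/density
        mask = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(weight * mask * delta / density)
    stacked = torch.stack(deltas)
    # TIES: elect a sign per parameter and drop contributions that disagree
    sign = torch.sign(stacked.sum(dim=0))
    agreed = torch.where(torch.sign(stacked) == sign, stacked, torch.zeros_like(stacked))
    return base + agreed.sum(dim=0)
```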

### Models Merged

The following models were included in the merge:

* mistralai/Mixtral-8x7B-v0.1 + Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
* KoboldAI/Mixtral-8x7B-Holodeck-v1
* mistralai/Mixtral-8x7B-Instruct-v0.1
* jondurbin/bagel-dpo-8x7b-v0.2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
  - model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.5
      weight: 0.2
  - model: KoboldAI/Mixtral-8x7B-Holodeck-v1
    parameters:
      density: 0.5
      weight: 0.2
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.6
      weight: 1.0
  - model: jondurbin/bagel-dpo-8x7b-v0.2
    parameters:
      density: 0.6
      weight: 0.5
merge_method: dare_ties
dtype: bfloat16
```
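
To reproduce the merge, save the YAML above to a file and run it through mergekit (the `mergekit-yaml` CLI does the same thing). A rough sketch using the Python API, assuming a current mergekit install; the config and output paths are placeholders:

```python
# Sketch of driving mergekit from Python; API details may vary by version.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("donuthole.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./DonutHole-8x7B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=True,  # trade a bit of speed for lower RAM use
    ),
)
```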