---
license: apache-2.0
language:
  - en
---

# TinyMix-8x1b-Chat

This is a MoE-ification of TinyLlama/TinyLlama-1.1B-Chat-v1.0 using the Mixtral branch of mergekit.

The goal was to MoE-fy the TinyLlama model and then use it as a base model for further finetuning. The intuition is that finetuning an 8x1B mixture should give better performance than finetuning a single 1B model on its own.

More work coming!

## Chat Template

```python
# `llm` is assumed to be an already-loaded text-generation model or pipeline.
def make_prompt(instruction):
    return f"<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"

llm.generate(make_prompt('What is quantum tunneling?'))
```
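
For completeness, here is a minimal end-to-end sketch using the same template with a standard `transformers` text-generation pipeline in place of `llm`. The repo id `eastwind/tinymix-8x1b-chat` and the generation settings are assumptions; adjust them for your setup.

```python
# Minimal sketch, assuming the model is published as "eastwind/tinymix-8x1b-chat"
# and that transformers + torch are installed.
from transformers import pipeline

llm = pipeline("text-generation", model="eastwind/tinymix-8x1b-chat")

def make_prompt(instruction):
    return f"<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"

output = llm(make_prompt("What is quantum tunneling?"), max_new_tokens=256)
print(output[0]["generated_text"])
```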

## Mergekit Config

```yaml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts: [""]
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts: [""]
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts: [""]
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts: [""]
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts: [""]
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts: [""]
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts: [""]
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts: [""]
```
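
As a rough sketch of how a config like this is applied, the Mixtral branch of mergekit provides a `mergekit-moe` entry point that takes the config and an output directory. The config filename and output path below are placeholders, not files shipped with this repo.

```python
# Rough sketch: invoke the mergekit-moe CLI on the config above.
# "tinymix-8x1b.yml" and "./tinymix-8x1b-chat" are placeholder paths.
import subprocess

subprocess.run(
    ["mergekit-moe", "tinymix-8x1b.yml", "./tinymix-8x1b-chat"],
    check=True,
)
```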