# MetaModel_moex8
This model is a Mixture of Experts (MoE) made with mergekit (mixtral branch). It uses the following base models as experts:
- gagan3012/MetaModel
- jeonsworld/CarbonVillain-en-10.7B-v2
- jeonsworld/CarbonVillain-en-10.7B-v4
- TomGrc/FusionNet_linear
- DopeorNope/SOLARC-M-10.7B
- VAGOsolutions/SauerkrautLM-SOLAR-Instruct
- upstage/SOLAR-10.7B-Instruct-v1.0
- fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
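For intuition, a Mixtral-style MoE layer routes each token to a small subset of expert feed-forward networks: a router scores all experts per token, and the layer output is a weighted sum of the top-scoring ones. The sketch below is purely illustrative (the class name, sizes, and structure are assumptions for demonstration, not this model's actual code); it shows top-2 routing over 8 experts, matching the expert count of this merge.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Illustrative top-2 Mixture-of-Experts layer (a sketch, not the real model code)."""

    def __init__(self, hidden_size=64, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router: one gate row per expert
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_size, 4 * hidden_size),
                          nn.SiLU(),
                          nn.Linear(4 * hidden_size, hidden_size))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, hidden_size)
        scores = self.gate(x)                                  # (tokens, num_experts)
        weights, idx = torch.topk(scores, self.top_k, dim=-1)  # top-2 experts per token
        weights = F.softmax(weights, dim=-1)                   # normalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                          # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(5, 64)
print(layer(tokens).shape)  # torch.Size([5, 64])
```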
## 🧩 Configuration
```yaml
dtype: bfloat16
gate_mode: hidden
experts:
  - source_model: gagan3012/MetaModel
    positive_prompts:
      - ''
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v2
    positive_prompts:
      - ''
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v4
    positive_prompts:
      - ''
  - source_model: TomGrc/FusionNet_linear
    positive_prompts:
      - ''
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts:
      - ''
  - source_model: VAGOsolutions/SauerkrautLM-SOLAR-Instruct
    positive_prompts:
      - ''
  - source_model: upstage/SOLAR-10.7B-Instruct-v1.0
    positive_prompts:
      - ''
  - source_model: fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
    positive_prompts:
      - ''
```
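With `gate_mode: hidden`, mergekit initializes each expert's router weights from hidden-state representations of its `positive_prompts`; since the prompts here are empty strings, the routers receive little signal to specialize on. The sketch below is my own conceptual simplification of that initialization, with a hypothetical helper name, and is not mergekit's actual code.

```python
import torch

def init_gate_from_prompts(base_model, tokenizer, prompts_per_expert, layer_idx):
    """Conceptual sketch of 'hidden' gate initialization (hypothetical helper;
    see mergekit for the real implementation)."""
    gate_rows = []
    for prompts in prompts_per_expert:  # one prompt list per expert
        states = []
        for p in prompts:
            ids = tokenizer(p, return_tensors="pt").input_ids
            hidden = base_model(ids, output_hidden_states=True).hidden_states[layer_idx]
            states.append(hidden.mean(dim=1).squeeze(0))  # average over token positions
        # Each expert's router row is derived from its prompts' mean hidden state
        gate_rows.append(torch.stack(states).mean(dim=0))
    return torch.stack(gate_rows)  # (num_experts, hidden_size) router weight matrix
```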
## 💻 Usage
```python
# Notebook-style install of the required dependencies
!pip install -qU transformers bitsandbytes accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "gagan3012/MetaModel_moex8"

# Load the tokenizer and build a 4-bit quantized text-generation pipeline
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the model's chat template, then sample a reply
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
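Passing `load_in_4bit` through `model_kwargs` requires `bitsandbytes`. On recent `transformers` versions, a more explicit equivalent (a sketch, assuming the same 4-bit setup is wanted) is to pass a `BitsAndBytesConfig` and load the model directly:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "gagan3012/MetaModel_moex8"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
```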