# PhiMiX-2x2B-raw

*Code is a work in progress.*

This is a RAW MoE meant to be fine-tuned before use: the router gates are initialized randomly (`gate_mode: random` in the configuration below), so the merged model will not route tokens meaningfully until it is trained.

PhiMiX-2x2B is a Mixture of Experts (MoE) made with the following models using mergekit:

- cognitivecomputations/dolphin-2_6-phi-2
- rhysjones/phi-2-orange

## ©️ Credits

- mlabonne's phixtral, for the PhiConfig and inference code.
- the mergekit codebase, which I tweaked mainly by adding the PhiConfig to the moe_mixtral.py script on the mixtral branch.

## 🧩 Configuration

```yaml
base_model: rhysjones/phi-2-orange
gate_mode: random
dtype: float16
experts:
  - source_model: cognitivecomputations/dolphin-2_6-phi-2
    positive_prompts: [""]
  - source_model: rhysjones/phi-2-orange
    positive_prompts: [""]
```
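
For reference, a config like this is run through mergekit's MoE entry point. This is a minimal sketch, not a command from the original card: the repository URL and branch are assumptions based on the credits above, and the paths are placeholders.

```bash
# Install mergekit from the mixtral branch (repo/branch assumed, see credits),
# then read the YAML above from config.yaml and write the merged MoE out.
pip install git+https://github.com/cg123/mergekit.git@mixtral
mergekit-moe config.yaml ./PhiMiX-2x2B-raw
```

Because `gate_mode: random` samples the router weights instead of deriving them from prompts, the resulting checkpoint is exactly the "raw" artifact described above.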

## 💻 Usage

```bash
pip install -qU transformers bitsandbytes accelerate
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "paulilioaica/PhiMiX-2x2B-raw"

torch.set_default_device("cuda")

# Load the merged weights from the Hub; from_config would instead build the
# architecture with randomly re-initialized weights.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    trust_remote_code=True,
)
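# Optional (an assumption, not from the original card): since bitsandbytes and
# accelerate are installed above, the model can instead be loaded in 4-bit:
#
#   from transformers import BitsAndBytesConfig
#   model = AutoModelForCausalLM.from_pretrained(
#       model_name,
#       quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#       trust_remote_code=True,
#   )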

instruction = '''
    def print_prime(n):
        """
        Print all primes between 1 and n
        """
'''


# Load the tokenizer that ships with the model
tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True,
)

# Tokenize the input string
inputs = tokenizer(
    instruction, 
    return_tensors="pt", 
    return_attention_mask=False
)

# Generate text using the model
outputs = model.generate(**inputs, max_length=200)

# Decode and print the output
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
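
Since the router gates start out random, the intended next step is fine-tuning rather than direct use. Below is a minimal supervised fine-tuning sketch using the standard `transformers` `Trainer`; the dataset file (`train.txt`), sequence length, and hyperparameters are illustrative placeholders, not settings from this model's authors.

```python
# Minimal fine-tuning sketch -- dataset path and hyperparameters are
# illustrative assumptions, not values from the original model card.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "paulilioaica/PhiMiX-2x2B-raw"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # phi-2 tokenizers ship without a pad token

# Weights load in full precision here; mixed precision is enabled via fp16=True below.
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Any plain-text instruction dataset works; "train.txt" is a placeholder file.
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="phimix-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A full fine-tuning run like this updates the randomly initialized routers alongside the experts; without some training, token-to-expert assignment is effectively noise.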