This model is inspired by SOLAR, but it increases effective depth without the traditional approach of duplicating layers: the order in which layers are executed is rearranged at inference time, retaining the benefits of depth upscaling while keeping the original parameter count. The model was additionally fine-tuned on the Dolphin dataset, and Dolphin serves as the base model for this experiment.
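
As a rough illustration of the idea (not the model's actual implementation), the sketch below reuses the decoder blocks of a Mistral-style model in a longer execution schedule instead of copying their weights. The base checkpoint name and the schedule are placeholders chosen for the example.

```python
# Conceptual sketch only: reuse existing decoder blocks in a longer execution
# order instead of duplicating them, so effective depth grows while the
# parameter count stays the same. Checkpoint and schedule are illustrative.
import torch.nn as nn
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
blocks = base.model.layers
n = len(blocks)

# Hypothetical schedule: run every block once, then revisit the middle half.
schedule = list(range(n)) + list(range(n // 4, 3 * n // 4))
base.model.layers = nn.ModuleList(blocks[i] for i in schedule)

# Because the blocks are shared, no new parameters are introduced; a real
# implementation would also remap per-layer KV-cache indices (layer_idx)
# so that generation with caching works correctly.
```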

Use

```python
# pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "adalbertojunior/DUSMistral"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Format the message with the ChatML chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
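
For reference, assuming the tokenizer ships a ChatML chat template (as Dolphin fine-tunes typically do), you can render the prompt as text to inspect the format before tokenizing:

```python
# Render the prompt as a string instead of token IDs to see the ChatML layout,
# e.g. "<|im_start|>user\nHello, how are you?<|im_end|>\n<|im_start|>assistant\n"
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```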