Dolphin Mistral Instruct

This is a merged language model created with the SLERP (spherical linear interpolation) merge method, which blends the weights of two parent models along the arc between them rather than averaging them linearly.
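As a rough illustration of the idea (not mergekit's actual implementation, which applies per-layer interpolation factors), here is a minimal sketch of the core SLERP formula applied to a pair of weight tensors:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors.

    t=0 returns `a`, t=1 returns `b`; intermediate values move along
    the great-circle arc between the flattened weight vectors.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two (normalized) weight vectors
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.acos((a_unit * b_unit).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        merged = (1.0 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1.0 - t) * omega) / so) * a_flat \
               + (torch.sin(t * omega) / so) * b_flat
    return merged.reshape(a.shape).to(a.dtype)
```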

Base models

The following models were merged to create this language model:

- arcee-ai/sec-mistral-7b-instruct-1.6-epoch
- cognitivecomputations/dolphin-2.8-mistral-7b-v02

Configuration

The following configuration was used to produce this model:

```yaml
base_model:
- arcee-ai/sec-mistral-7b-instruct-1.6-epoch
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
library_name: transformers
dtype: bfloat16
```
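Assuming this configuration is a mergekit config (the standard tool for SLERP merges), a merge like this is typically run from the command line with `mergekit-yaml config.yml ./output-model`, which writes the merged SafeTensors weights to the given output directory.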

Usage

This model is distributed as SafeTensors files and can be loaded with the Transformers library. Here's an example of loading the model and generating text in Python:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load in bfloat16 (the dtype the weights are stored in) and let
# accelerate place the layers across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

input_text = "Write a short story about"
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(model.device)

# Sample up to 200 new tokens with top-k / top-p (nucleus) sampling.
output_ids = model.generate(
    input_ids,
    max_new_tokens=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=1,
)

output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
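Because one of the parent models (Dolphin) is instruction-tuned, chat-style prompts may work better than raw text completion. Assuming the merged tokenizer carries a chat template, a hedged sketch using the standard Transformers `apply_chat_template` API (reusing `model` and `tokenizer` from above):

```python
# Format the conversation with the tokenizer's chat template, appending
# the assistant-turn marker so the model knows to respond.
messages = [
    {"role": "user", "content": "Write a short story about a lighthouse keeper."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=True, top_p=0.95)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```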

Make sure to replace "path/to/model" with the path to your local copy of the model, or load it directly from the Hub with the ID grandell1234/dolphin-mistral-instruct-7b.

