Mistral-FastGPT-7B: AI Assistant

  • Developed by: brandon57
  • License: apache-2.0
  • Finetuned from model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit

Installing the Model and Dependencies

To get started, install the unsloth library, then upgrade to the latest nightly build from the GitHub repository:

pip install unsloth
# Also get the latest nightly Unsloth!
pip uninstall unsloth -y && pip install --upgrade --no-cache-dir --no-deps git+https://github.com/unslothai/unsloth.git
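
Optionally, you can verify the install and confirm that a CUDA-capable GPU is visible, since Unsloth requires one. This quick check is not part of the original setup, just a sanity test:

import importlib.metadata
import torch

print(importlib.metadata.version("unsloth"))  # version pip actually resolved
print(torch.cuda.is_available())              # should print True; Unsloth needs a CUDA GPU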

Loading the Model

You can then load the fine-tuned model for your own tasks. It is loaded in 4-bit precision, so it runs even with limited GPU memory:

from unsloth import FastLanguageModel

max_seq_length = 2048
dtype = None
load_in_4bit = True

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "brandon57/mistral-fastgpt-7b",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
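
The values above follow the common Unsloth defaults rather than anything stated in this card: max_seq_length = 2048 is a reasonable starting context length, dtype = None lets Unsloth auto-detect float16 or bfloat16 for your GPU, and load_in_4bit = True loads the quantized weights so the 7B model fits in a few gigabytes of VRAM. Adjust them to match your hardware.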

Interacting with the Model

Once the model is loaded, you can use it for real-time conversation generation. In the example below, the chat history shows the assistant continuing a Fibonacci sequence, and the final user turn asks a general-knowledge question:

FastLanguageModel.for_inference(model) # Enable native 2x faster inference
messages = [                         # Change below!
    {"role": "user",      "content": "Continue the fibonacci sequence! Your input is 1, 1, 2, 3, 5, 8"},
    {"role": "assistant", "content": "The fibonacci sequence continues as 13, 21, 34, 55 and 89."},
    {"role": "user",      "content": "What is France's tallest tower called?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids, streamer = text_streamer, max_new_tokens = 128, pad_token_id = tokenizer.eos_token_id)
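
If you would rather capture the reply as a string than stream it to the console, a minimal variation of the call above (reusing input_ids from the previous snippet) looks like this:

outputs = model.generate(input_ids, max_new_tokens = 128, pad_token_id = tokenizer.eos_token_id)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0, input_ids.shape[-1]:], skip_special_tokens = True)
print(response)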
