
MPT-7B-Chat-8k

License: CC-By-NC-SA-4.0 (non-commercial use only)

How to Use

You need auto-gptq installed to run the following: pip install auto-gptq

Example script:

from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer, TextStreamer

quantized_model = "casperhansen/mpt-7b-8k-chat-gptq"

print('loading model...')

# load the tokenizer and the quantized model onto the first GPU
tokenizer = AutoTokenizer.from_pretrained(quantized_model, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(quantized_model, device="cuda:0", trust_remote_code=True)

prompt_format = """<|im_start|>system
A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.<|im_end|>
<|im_start|>user
{text}<|im_end|>
<|im_start|>assistant

"""

prompt = prompt_format.format(text="What is the difference between nuclear fusion and fission?")

print('generating...')

# stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
tokens = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**tokens, max_length=512, streamer=streamer, eos_token_id=tokenizer.eos_token_id)
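The prompt above covers a single turn. For multi-turn conversations, each exchange is appended in the same ChatML-style format, with the final assistant turn left open for the model to complete. A minimal sketch (build_prompt is an illustrative helper, not part of auto-gptq or transformers):

```python
# Illustrative helper for building a multi-turn ChatML-style prompt.
# Not part of any library API; shown only to clarify the prompt format.

SYSTEM = ("A conversation between a user and an LLM-based AI assistant. "
          "The assistant gives helpful and honest answers.")

def build_prompt(turns, system=SYSTEM):
    """turns: list of (user_text, assistant_text) pairs.
    Pass assistant_text=None for the final turn so the model completes it."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for user_text, assistant_text in turns:
        parts.append(f"<|im_start|>user\n{user_text}<|im_end|>")
        if assistant_text is None:
            # leave the assistant turn open for generation
            parts.append("<|im_start|>assistant\n")
        else:
            parts.append(f"<|im_start|>assistant\n{assistant_text}<|im_end|>")
    return "\n".join(parts)

prompt = build_prompt([
    ("What is nuclear fission?", "Fission splits heavy nuclei into lighter ones."),
    ("And fusion?", None),
])
```

The resulting string can be passed to the tokenizer exactly as in the script above.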