---
library_name: transformers
tags:
- pytorch
datasets:
- allenai/c4
language:
- en
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
---

# Model Card for Mistral-7B-Instruct-v0.3-GPTQ-4bit-gs128

This model is a 4-bit GPTQ quantization of Mistral-7B-Instruct-v0.3, produced with the `GPTQConfig` class from the `transformers` library. Quantization reduces the memory footprint and speeds up inference without significantly compromising accuracy.

Original model: [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)

Model creator: [mistralai](https://huggingface.co/mistralai)

## Quantization Configuration

- Bits: 4
- Data type: INT4
- GPTQ group size: 128
- Act order (`desc_act`): True
- GPTQ calibration dataset: [C4](https://huggingface.co/datasets/allenai/c4)
- Model size: 4.17 GB

For more details, see `quantization_config.json`. A sketch of how this configuration can be reproduced appears at the end of this card.

## Usage

This model can be used with the `transformers` library. Depending on your `transformers` version, loading GPTQ checkpoints typically also requires the `optimum` and `auto-gptq` packages.

### Transformers pipeline

```python
import transformers
import torch

model_id = "marinarosell/Mistral-7B-Instruct-v0.3-GPTQ-4bit-gs128"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    # GPTQ kernels generally expect float16 activations.
    model_kwargs={"torch_dtype": torch.float16},
    device_map="auto",
)

# Note: if the tokenizer's chat template rejects a "system" role,
# fold the instruction into the first user message instead.
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# The last message in generated_text is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```

### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "marinarosell/Mistral-7B-Instruct-v0.3-GPTQ-4bit-gs128"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Strip the prompt tokens and decode only the newly generated ones.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### Example Applications

- Chatbots: lightweight conversational agents.
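
## Reproducing the Quantization

The calibration script is not included in this repository. The following is a minimal sketch of how a quantization matching the configuration above (4 bits, group size 128, act-order, C4 calibration) can be produced with `GPTQConfig`; the output directory name is illustrative, and the run assumes the `optimum` and `auto-gptq` packages are installed and a CUDA GPU is available.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# 4-bit weights, group size 128, act-order (desc_act), C4 calibration set,
# matching the configuration listed in this card.
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    dataset="c4",
    tokenizer=tokenizer,
)

# Quantization runs while the model loads and requires a CUDA GPU.
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    quantization_config=gptq_config,
)

# Illustrative output path.
save_dir = "Mistral-7B-Instruct-v0.3-GPTQ-4bit-gs128"
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```

Setting `desc_act=True` (act-order) quantizes weight columns in order of decreasing activation importance, which generally improves accuracy at a small inference-speed cost with some kernels.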