
TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)


Salesforce's Codegen 2.5 7B Mono GPTQ

These files are GPTQ model files for Salesforce's Codegen 2.5 7B Mono.

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

These files were quantised using hardware kindly provided by Latitude.sh.

Repositories available

Prompt template: custom
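
The prompt is simply the raw code context to be completed, with no special formatting (see the Python example further below):

{prompt}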

Please install OpenAI tiktoken for the tokenizer.

pip install tiktoken==0.4.0

Causal sampling (code autocompletion)

For regular causal sampling, simply generate completions given the context:

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen25-7b-mono", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen25-7b-mono")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))

Provided files

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 4.21 GB | False | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.59 GB | False | AutoGPTQ | 4-bit, with Act Order and group size 32g. Gives the highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.34 GB | False | AutoGPTQ | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 4.21 GB | False | AutoGPTQ | 4-bit, with Act Order and group size 128g. Uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 7.33 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 7.47 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |

How to download from branches

  • In text-generation-webui, you can add :branch to the end of the download name, eg TheBloke/Codegen25-7B-mono-GPTQ:gptq-4bit-32g-actorder_True
  • With Git, you can clone a branch with:
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Codegen25-7B-mono-GPTQ
  • In Python Transformers code, the branch is the revision parameter; see below.
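
If you prefer to fetch a branch's files directly from Python, the huggingface_hub library can also do it. This is a minimal sketch and is not required for the Transformers/AutoGPTQ workflow shown later:

from huggingface_hub import snapshot_download

# Download the chosen branch (revision) into the local Hugging Face cache
# and return the path of the downloaded snapshot.
local_path = snapshot_download(
    repo_id="TheBloke/Codegen25-7B-mono-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
)
print(local_path)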

How to easily download and use this model in text-generation-webui.

Please make sure you're using the latest version of text-generation-webui.

It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

Please remember to install tiktoken, as listed above.

You must also tick "Trust Remote Code". Because this model requires remote code, it is not compatible with ExLlama.

  1. Click the Model tab.
  2. Under Download custom model or LoRA, enter TheBloke/Codegen25-7B-mono-GPTQ.
  • To download from a specific branch, enter for example TheBloke/Codegen25-7B-mono-GPTQ:gptq-4bit-32g-actorder_True
  • See Provided Files above for the list of branches for each option.
  3. Click Download.
  4. The model will start downloading. Once it's finished it will say "Done".
  5. In the top left, click the refresh icon next to Model.
  6. Make sure Loader is set to AutoGPTQ or GPTQ-for-LLaMa, and that Trust Remote Code is ticked.
  7. In the Model dropdown, choose the model you just downloaded: Codegen25-7B-mono-GPTQ
  8. The model will automatically load, and is now ready for use!
  9. To save your settings (Loader and Trust Remote Code), click Save settings for this model followed by Reload the Model in the top right.
  • Note that you do not need to set GPTQ parameters any more. These are set automatically from the file quantize_config.json.
  10. Once you're ready, click the Text Generation tab and enter a prompt to get started!

How to use this GPTQ model from Python code

First make sure you have AutoGPTQ installed:

GITHUB_ACTIONS=true pip install auto-gptq

pip install tiktoken==0.4.0

Then try the following example code:

from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/Codegen25-7B-mono-GPTQ"
model_basename = "model"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
To download from a specific branch, use the revision parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        quantize_config=None)
"""

prompt = "def hello_world()"
prompt_template=f'''{prompt}'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])

Compatibility

The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama works with Llama models in 4-bit; however, because this model requires Trust Remote Code, none of these files are ExLlama compatible. Please see the Provided Files table above for per-file compatibility.

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter

Thank you to all my generous patrons and donors!

And thank you again to a16z for their generous grant.

Original model card: Salesforce's Codegen 2.5 7B Mono

CodeGen2.5-7B-mono

Title: CodeGen2.5: Small, but mighty

Authors: Erik Nijkamp*, Hiroaki Hayashi*, Yingbo Zhou, Caiming Xiong

(* equal contribution)

Model description

CodeGen2.5 is a family of autoregressive language models for program synthesis.

Building upon CodeGen2, the model is trained on StarCoderData for 1.4T tokens, achieving competitive results compared to StarCoderBase-15.5B with less than half the size.

Like CodeGen2, this model is capable of infilling, and supports multiple programming languages.

We then further train the model on Python, and subsequently on instruction data. We release all the models as follows:

  • CodeGen2.5-7B-multi: Trained on StarCoderData. Licensed under Apache-2.0.
  • CodeGen2.5-7B-mono (this repo): Further trained on additional Python tokens. Licensed under Apache-2.0.
  • CodeGen2.5-7B-instruct: Further trained from CodeGen2.5-7B-mono on instruction data. Research purposes only.

How to use

This model can be easily loaded using the AutoModelForCausalLM functionality.

Pre-requisite

Please install OpenAI tiktoken for the tokenizer.

pip install tiktoken==0.4.0

Causal sampling (code autocompletion)

For regular causal sampling, simply generate completions given the context:

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen25-7b-mono", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen25-7b-mono")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))

Infill sampling

For infill sampling, we follow the CodeGen2 format:

  • <mask_N>: N-th span to be masked. In practice, insert <mask_1> at the position where you want to sample the infill.
  • <sep>: Separator token between the suffix and the infilled sample. See below.
  • <eom>: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output.

For example, if we want to generate infill for the following cursor position of a function:

def hello_world():
    |
    return name

we construct an input to the model by

  1. Inserting a <mask_1> token in place of the cursor position,
  2. Appending a <sep> token to indicate the boundary, and
  3. Inserting another <mask_1> to indicate which mask we want to infill.

The final snippet looks as follows:

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen25-7b-mono", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen25-7b-mono")


def format(prefix, suffix):
  return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"


prefix = "def hello_world():\n    "
suffix = "    return name"
text = format(prefix, suffix)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):])

You might want to truncate the model output with <eom>.
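
For example, continuing from the snippet above, the generated text can be cut at the first <eom> token; this is a minimal sketch:

# Decode without skipping special tokens so that <eom> is preserved,
# then keep only the infilled text that precedes it.
completion = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
infill = completion.split("<eom>")[0]
print(infill)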

Evaluation results

We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the blog for more details.

Intended use and limitations

As an autoregressive language model, CodeGen2.5 is capable of extracting features from given natural language and programming language texts, and of calculating their likelihood. However, the model is intended for, and is best at, program synthesis, that is, generating executable code from English prompts, where the prompts should be in the form of a comment string. The model can complete partially generated code as well.
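
As an illustration of a comment-string prompt, the following sketch reuses the tokenizer and model from the causal-sampling example above; the prompt text itself is just an assumed example:

# A natural-language request expressed as a comment, followed by the start of
# the function for the model to complete.
text = "# return the n-th Fibonacci number\ndef fibonacci(n):"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))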

Attribution & Other Requirements

The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or impose other specific requirements that must be respected. The data provider, BigCode, provides a search index that lets you search through the pretraining data to identify where generated code came from, so you can apply the proper attribution to your code.

BibTeX entry and citation info

Please cite CodeGen2 paper:

@article{Nijkamp2023codegen2,
  title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
  author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
  journal={arXiv preprint},
  year={2023}
}