
ExLlamaV2 quant (EXL2, 2.5 bpw) of ReflectionCoder-DS-6.7B, made with ExLlamaV2 v0.1.1.

Other EXL2 quants:

Quant (bpw)  Model Size  lm_head (bits)
2.2          2055 MB     6
2.5          2276 MB     6
3.0          2665 MB     6
3.5          3051 MB     6
3.75         3245 MB     6
4.0          3437 MB     6
4.25         3630 MB     6
5.0          4208 MB     6
6.0          5000 MB     8
6.5          5388 MB     8
8.0          6232 MB     8
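
Below is a minimal sketch of how one of these quants could be loaded and run with the ExLlamaV2 Python API (v0.1.x). The local model path, sampling settings, and prompt are placeholder assumptions, not part of this repo; check the ExLlamaV2 documentation for the exact API of your version. Lower bpw trades output quality for the smaller on-disk sizes shown in the table above.

# Minimal sketch (not an official loader): running an EXL2 quant with ExLlamaV2.
# model_dir, the sampling settings, and the prompt are placeholder assumptions.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "/path/to/ReflectionCoder-DS-6.7B-2_5bpw_exl2"  # local download of this repo

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # KV cache, allocated as layers load
model.load_autosplit(cache)                # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.2
settings.top_p = 0.9

# The prompt follows the ReflectionCoder chat format described under "How to Use" below.
prompt = "<|user|><|text|>\nYour Instruction\n<|endofblock|><|endofmessage|><|assistant|>"
print(generator.generate_simple(prompt, settings, 256))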

ReflectionCoder: Learning from Reflection Sequence for Enhanced One-off Code Generation

📄 Paper · 🏠 Repo · 🤖 Models · 📚 Datasets

Introduction

ReflectionCoder is a novel approach that effectively leverages reflection sequences constructed by integrating compiler feedback to improve one-off code generation performance. Please refer to our paper and repo for more details!


Models

Model Checkpoint Size HumanEval (+) MBPP (+) License
ReflectionCoder-CL-7B 🤗 HF Link 7B 75.0 (68.9) 72.2 (61.4) Llama2
ReflectionCoder-CL-34B 🤗 HF Link 34B 70.7 (66.5) 68.4 (56.6) Llama2
ReflectionCoder-DS-6.7B 🤗 HF Link 6.7B 80.5 (74.4) 81.5 (69.6) DeepSeek
ReflectionCoder-DS-33B 🤗 HF Link 33B 82.9 (76.8) 84.1 (72.0) DeepSeek

Datasets

Dataset Link License
ReflectionSeq-GPT 🤗 HF Link License
ReflectionSeq-DS 🤗 HF Link License

How to Use

Chat Format

Following the chat templates of most models, we use special tokens to wrap the messages of the user and assistant, i.e., <|user|>, <|assistant|>, and <|endofmessage|>. Furthermore, we use two special tokens to wrap the content of different blocks, i.e., <|text|> and <|endofblock|>. You can use the following template to prompt ReflectionCoder.

<|user|><|text|> 
Your Instruction
<|endofblock|><|endofmessage|><|assistant|>
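
As a rough illustration of applying this template in code, the build_prompt helper and the instruction text below are made up for this example; only the template string itself comes from the format above.

# Sketch: wrapping a user instruction in the ReflectionCoder chat format.
# build_prompt is a hypothetical helper; the instruction is just an example.
def build_prompt(instruction: str) -> str:
    return (
        "<|user|><|text|>\n"
        f"{instruction}\n"
        "<|endofblock|><|endofmessage|><|assistant|>"
    )

prompt = build_prompt("Write a Python function that returns the n-th Fibonacci number.")
# With the ExLlamaV2 generator from the sketch above:
# output = generator.generate_simple(prompt, settings, 512)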

Inference Code

Please refer to our GitHub Repo for more technical details.

Citation

If you find this repo useful for your research, please cite our paper:

@misc{ren2024reflectioncoder,
    title={ReflectionCoder: Learning from Reflection Sequence for Enhanced One-off Code Generation}, 
    author={Houxing Ren and Mingjie Zhan and Zhongyuan Wu and Aojun Zhou and Junting Pan and Hongsheng Li},
    year={2024},
    eprint={2405.17057},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Acknowledgments

We thank the following amazing projects that truly inspired us:
