---
license: llama2
tags:
- code llama
base_model: BallisticAI/Ballistic-CodeLlama-34B-v1
inference: false
model_creator: BallisticAI
model_type: llama
prompt_template: '### System Prompt

  {system_message}


  ### User Message

  {prompt}


  ### Assistant

  '
quantized_by: BallisticAI
model-index:
- name: Ballistic-CodeLlama-34B-v1
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: n/a
      value: n/a
      name: n/a
      verified: false
---

# CodeLlama 34B v1

- Model creator: [BallisticAI](https://huggingface.co/BallisticAI)
- Based on: [CodeLlama 34B hf](https://huggingface.co/codellama/CodeLlama-34b-hf)
- Merged with: [Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) and [speechless-codellama-34b-v2](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0)
- Additional training with: [jondurbin/airoboros-2.2](https://huggingface.co/datasets/jondurbin/airoboros-2.2)

## Description

This repo contains GGUF format model files for [Ballistic-CodeLlama-34B-v1](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1).

### About AWQ

AWQ is an efficient, accurate and fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

It is also supported by the continuous-batching server [vLLM](https://github.com/vllm-project/vllm), allowing AWQ models to be used for high-throughput concurrent inference in multi-user server scenarios (see the vLLM sketch under Example Usage below). Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, AWQ makes it possible to use much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on one 48GB GPU instead of two 80GB GPUs.

## Repositories available

* [GGUF model, for CPU inference](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1-GGUF)
* [Unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/BallisticAI/Ballistic-CodeLlama-34B-v1)

## How to Prompt the Model

This model accepts the Alpaca/Vicuna instruction format. For example (a runnable sketch using this format appears under Example Usage below):

```
### System Prompt
You are an intelligent programming assistant.

### User Message
Implement a linked list in C++

### Assistant
...
```

## Bias, Risks, and Limitations

This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployment.

## Thanks

Thanks to:

- The original Llama team
- [Phind](https://huggingface.co/phind)
- [uukuguy](https://huggingface.co/uukuguy)
- [jondurbin](https://huggingface.co/jondurbin)
- And everyone else involved in the open-source AI/ML community.
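
## Example Usage

### GGUF inference with llama-cpp-python

A minimal sketch of running one of the GGUF files with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), using the prompt format shown above. This is one common way to run GGUF models, not an officially documented workflow for this card; the filename below is hypothetical, so substitute whichever quantisation you actually download.

```python
from llama_cpp import Llama

# Hypothetical filename: replace with the GGUF quantisation you downloaded.
llm = Llama(
    model_path="./ballistic-codellama-34b-v1.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if built with GPU support; 0 for CPU-only
)

# Build the prompt in the Alpaca/Vicuna-style format from "How to Prompt the Model".
prompt = (
    "### System Prompt\n"
    "You are an intelligent programming assistant.\n\n"
    "### User Message\n"
    "Implement a linked list in C++\n\n"
    "### Assistant\n"
)

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.1,
    stop=["### User Message"],  # stop before the model invents another turn
)
print(output["choices"][0]["text"])
```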
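
### Serving an AWQ quantisation with vLLM

A minimal sketch of the vLLM + AWQ setup described in "About AWQ". Note that no AWQ repo is listed under "Repositories available", so the model ID below is a hypothetical placeholder; substitute a real AWQ checkpoint of this model.

```python
from vllm import LLM, SamplingParams

# Hypothetical AWQ checkpoint ID: substitute a real AWQ quantisation.
llm = LLM(model="BallisticAI/Ballistic-CodeLlama-34B-v1-AWQ", quantization="awq")

prompt = (
    "### System Prompt\n"
    "You are an intelligent programming assistant.\n\n"
    "### User Message\n"
    "Implement a linked list in C++\n\n"
    "### Assistant\n"
)

params = SamplingParams(temperature=0.1, max_tokens=512)

# vLLM batches requests internally, which is what enables the
# high-throughput concurrent serving mentioned above.
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```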
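
### GPU inference with the unquantised fp16 model

A minimal sketch using Hugging Face `transformers` with the fp16 repo listed under "Repositories available". The generation settings are illustrative assumptions, not values prescribed by this card; at 34B parameters, fp16 weights alone need roughly 68 GB of GPU memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BallisticAI/Ballistic-CodeLlama-34B-v1"  # fp16 repo listed above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # unquantised fp16 weights
    device_map="auto",          # spread layers across available GPUs
)

prompt = (
    "### System Prompt\n"
    "You are an intelligent programming assistant.\n\n"
    "### User Message\n"
    "Implement a linked list in C++\n\n"
    "### Assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512, temperature=0.1, do_sample=True)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```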