---
license: llama2
tags:
  - code llama
base_model: BallisticAI/Ballistic-CodeLlama-34B-v1
inference: false
model_creator: BallisticAI
model_type: llama
prompt_template: |
  ### System Prompt
  {system_message}

  ### User Message
  {prompt}

  ### Assistant
quantized_by: BallisticAI
model-index:
  - name: Ballistic-CodeLlama-34B-v1
    results:
      - task:
          type: text-generation
        dataset:
          name: HumanEval
          type: openai_humaneval
        metrics:
          - type: n/a
            value: n/a
            name: n/a
            verified: false
---

# CodeLlama 34B v1

## Description

This repo contains AWQ format model files for Ballistic-CodeLlama-34B-v1.

## About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method that currently supports 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
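As a rough sketch of local inference with AutoAWQ (`pip install autoawq`), assuming this repo ships AWQ-quantized safetensors; the repo id below is illustrative:

```python
# Minimal sketch: load an AWQ-quantized model with AutoAWQ.
# The repo id is an assumption -- substitute the actual AWQ repo.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "BallisticAI/Ballistic-CodeLlama-34B-v1"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoAWQForCausalLM.from_quantized(
    model_name_or_path,
    fuse_layers=True,  # fuse attention/MLP modules for faster decoding
    safetensors=True,
)

prompt = (
    "### System Prompt\nYou are an intelligent programming assistant.\n\n"
    "### User Message\nImplement a linked list in C++\n\n### Assistant\n"
)
tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(
    tokens, do_sample=True, temperature=0.7, top_p=0.95, max_new_tokens=512
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```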

It is also supported by vLLM, a continuous-batching inference server, which allows AWQ models to be used for high-throughput concurrent inference in multi-user server scenarios. At the time of writing, overall throughput is still lower than running vLLM with unquantized models; however, AWQ lets you use much smaller GPUs, which can make deployment easier and reduce overall cost. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
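For the vLLM path, a minimal sketch using vLLM's offline API (`pip install vllm`), again with an illustrative repo id:

```python
# Minimal sketch: high-throughput AWQ inference with vLLM.
from vllm import LLM, SamplingParams

# quantization="awq" tells vLLM to load the 4-bit AWQ weights.
llm = LLM(model="BallisticAI/Ballistic-CodeLlama-34B-v1", quantization="awq")
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=512)

prompts = [
    "### System Prompt\nYou are an intelligent programming assistant.\n\n"
    "### User Message\nImplement a linked list in C++\n\n### Assistant\n"
]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```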

## Repositories available

## How to Prompt the Model

This model accepts the Alpaca/Vicuna instruction format.

For example:

```
### System Prompt
You are an intelligent programming assistant.

### User Message
Implement a linked list in C++

### Assistant
...
```
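If you are assembling this format in code, a small helper (hypothetical, not part of this repo) keeps the template in one place:

```python
# Hypothetical helper that renders the prompt template above.
def build_prompt(system_message: str, user_message: str) -> str:
    return (
        f"### System Prompt\n{system_message}\n\n"
        f"### User Message\n{user_message}\n\n"
        "### Assistant\n"
    )

print(build_prompt(
    "You are an intelligent programming assistant.",
    "Implement a linked list in C++",
))
```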

## Bias, Risks, and Limitations

This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments.

## Thanks

Thanks to:

- The original Llama team
- Phind
- uukuguy
- jondurbin
- And everyone else involved in the open-source AI/ML community.