license: other
language:
  - en
pipeline_tag: text2text-generation
tags:
  - alpaca
  - llama
  - chat
  - gpt4
inference: false

GPT4 Alpaca Lora 30B - GPTQ 4bit 128g

This is a 4-bit GPTQ version of the Chansung GPT4 Alpaca 30B LoRA model.

It was created by merging the deltas provided in the above repo with the original Llama 30B model.

It was then quantized to 4bit, groupsize 128g, using GPTQ-for-LLaMa.
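
For reference, the merge step described above can be reproduced with the Hugging Face transformers and peft libraries. This is only a rough sketch with placeholder paths and repo names, not the exact script used to build this model:

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Load the original Llama 30B weights (placeholder path)
base = LlamaForCausalLM.from_pretrained("/path/to/llama-30b-hf", torch_dtype=torch.float16)

# Apply the GPT4 Alpaca LoRA deltas and fold them into the base weights
model = PeftModel.from_pretrained(base, "/path/to/gpt4-alpaca-lora-30b")
model = model.merge_and_unload()

# Save the merged HF checkpoint that is then fed to GPTQ-for-LLaMa
model.save_pretrained("gpt4-alpaca-lora-30B-HF")
LlamaTokenizer.from_pretrained("/path/to/llama-30b-hf").save_pretrained("gpt4-alpaca-lora-30B-HF")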

VRAM usage depends on the number of tokens returned. Below approximately 1,000 returned tokens it will use less than 24GB of VRAM, but beyond 1,000 tokens it will exceed the VRAM of a 24GB card.

RAM and VRAM usage at the end of a 670-token response in text-generation-webui: 5.2GB RAM, 20.7GB VRAM (screenshot).

RAM and VRAM usage after about 1,500 tokens: 5.2GB RAM, 30.0GB VRAM (screenshot).

If you want a model that should always stay under 24GB of VRAM, use the GPT4 Alpaca Lora 30B GPTQ 4bit without groupsize model, provided by MetalX, instead.
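
If you want to watch VRAM usage yourself while a response is being generated, one option is the pynvml bindings. This is just a monitoring sketch, not part of the model or the UI:

import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# Print used/total VRAM once a second while a generation is running
while True:
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"VRAM: {mem.used / 1024**3:.1f} GiB / {mem.total / 1024**3:.1f} GiB")
    time.sleep(1)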

Provided files

Currently one model file is provided, in safetensors format. This file requires the latest GPTQ-for-LLaMa code to run inside oobabooga's text-generation-webui.

Tomorrow I will try to add another file that does not use --act-order and can therefore be run in text-generation-webui without needing to update GPTQ-for-LLaMa (at the cost of possibly slightly lower inference quality).

Details of the files provided:

  • gpt4-alpaca-lora-30B-GPTQ-4bit-128g.safetensors
    • safetensors format, with improved file security, created with the latest GPTQ-for-LLaMa code (a quick inspection sketch follows this list).
    • Command to create:
      • python3 llama.py gpt4-alpaca-lora-30B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors gpt4-alpaca-lora-30B-GPTQ-4bit-128g.safetensors
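
As a quick sanity check after downloading, the file can be opened with the safetensors library without loading the full model. Sketch only; the exact tensor names depend on the GPTQ-for-LLaMa version used:

from safetensors import safe_open

# Lazily open the quantized checkpoint and list some of its tensors
with safe_open("gpt4-alpaca-lora-30B-GPTQ-4bit-128g.safetensors", framework="pt", device="cpu") as f:
    for name in list(f.keys())[:10]:
        print(name, f.get_tensor(name).shape)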

How to run in text-generation-webui

The safetensors model file was created with the latest GPTQ code, and uses --act-order to give the maximum possible quantisation quality. This means it requires that the latest GPTQ-for-LLaMa is used inside the UI.

Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:

git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
ln -s $(pwd)/GPTQ-for-LLaMa text-generation-webui/repositories/GPTQ-for-LLaMa # absolute path so the symlink resolves correctly

Then install this model into text-generation-webui/models and launch the UI as follows:

cd text-generation-webui
python server.py --model gpt4-alpaca-lora-30B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
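
A quick way to confirm that the PyTorch/CUDA side of those dependencies is working (assumption: you are using a CUDA build of PyTorch):

import torch

print(torch.__version__, torch.version.cuda)   # PyTorch and CUDA toolkit versions
print(torch.cuda.is_available())               # should be True
print(torch.cuda.get_device_name(0))           # e.g. an RTX 3090 / A100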

If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch:

git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install --force

Then link that into text-generation-webui/repositories as described above.

Original GPT4 Alpaca Lora model card

This repository provides a LoRA checkpoint that turns LLaMA into a chatbot-like, instruction-following language model. The checkpoint is the output of an instruction-following fine-tuning run with the following settings on an 8xA100 (40G) DGX system.

  • Training script: borrowed from the official Alpaca-LoRA implementation
  • Training command:
python finetune.py \
    --base_model='decapoda-research/llama-30b-hf' \
    --data_path='alpaca_data_gpt4.json' \
    --num_epochs=10 \
    --cutoff_len=512 \
    --group_by_length \
    --output_dir='./gpt4-alpaca-lora-30b' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --batch_size=... \
    --micro_batch_size=...
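
The LoRA-related flags above roughly correspond to a peft LoraConfig like the following; lora_alpha and lora_dropout are placeholders here, since their values are not stated in the command:

from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                                      # --lora_r=16
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # --lora_target_modules
    lora_alpha=16,                                             # placeholder, not taken from the run
    lora_dropout=0.05,                                         # placeholder, not taken from the run
    bias="none",
    task_type="CAUSAL_LM",
)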

You can see how the training went in the W&B report here.