metadata
datasets:
  - rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
  - OpenAssistant/oasst1
  - shahules786/orca-best
  - argilla/databricks-dolly-15k-curated-multilingual
inference: false
language:
  - en
library_name: transformers
license: llama2
model_creator: OpenAssistant
model_link: https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10
model_name: Llama2 70B SFT v10
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
tags:
  - sft

TheBloke's LLM work is generously supported by a grant from andreessen horowitz (a16z)


Llama2 70B SFT v10 - GGUF

Description

This repo contains GGUF format model files for OpenAssistant's Llama2 70B SFT v10.

About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
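
As a quick illustration of the new container format, here is a minimal sketch (the file name is just an example; pick whichever quant you downloaded) that reads the GGUF magic bytes and format version from the file header:

# Minimal sketch: check the GGUF magic and version of a downloaded file.
import struct

with open("llama2-70b-oasst-sft-v10.Q4_K_M.gguf", "rb") as f:
    magic = f.read(4)                            # b"GGUF" for GGUF files
    version = struct.unpack("<I", f.read(4))[0]  # little-endian uint32 version

print(magic, version)  # expect b'GGUF' and a small integer version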

Here is a list of clients and libraries that are known to support GGUF:

  • llama.cpp.
  • text-generation-webui, the most widely used web UI, with many features and powerful extensions.
  • KoboldCpp, a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for story telling.
  • LM Studio, an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
  • LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
  • ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
  • llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  • candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.

Repositories available

Prompt template: ChatML

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
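
For programmatic use, a minimal sketch of filling in this template with plain Python string formatting (the function name and arguments are just illustrative):

# Minimal sketch: build a single-turn ChatML prompt from the template above.
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt("You are a helpful assistant.", "Write a haiku about autumn."))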

Compatibility

These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9.

They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.

Explanation of quantisation methods

Click to see details

The new methods available are:

  • GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
  • GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
  • GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (see the worked example after this list).
  • GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
  • GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
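
As a back-of-the-envelope check on the 4.5 bpw figure for GGML_TYPE_Q4_K, here is a short worked example. It assumes (as an additional detail not stated above) one fp16 scale and one fp16 min per super-block:

# Worked example: effective bits per weight for GGML_TYPE_Q4_K.
# Assumes an extra fp16 scale and fp16 min per super-block (32 bits total).
weights_per_superblock = 8 * 32          # 8 blocks of 32 weights = 256
quant_bits = weights_per_superblock * 4  # 4 bits per weight
scale_min_bits = 8 * (6 + 6)             # 6-bit scale + 6-bit min per block
superblock_bits = 2 * 16                 # fp16 super-block scale and min
bpw = (quant_bits + scale_min_bits + superblock_bits) / weights_per_superblock
print(bpw)  # 4.5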

Refer to the Provided Files table below to see what files use which methods, and how.

Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| llama2-70b-oasst-sft-v10.Q2_K.gguf | Q2_K | 2 | 29.28 GB | 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| llama2-70b-oasst-sft-v10.Q3_K_S.gguf | Q3_K_S | 3 | 29.92 GB | 32.42 GB | very small, high quality loss |
| llama2-70b-oasst-sft-v10.Q3_K_M.gguf | Q3_K_M | 3 | 33.19 GB | 35.69 GB | very small, high quality loss |
| llama2-70b-oasst-sft-v10.Q3_K_L.gguf | Q3_K_L | 3 | 36.15 GB | 38.65 GB | small, substantial quality loss |
| llama2-70b-oasst-sft-v10.Q4_0.gguf | Q4_0 | 4 | 38.87 GB | 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| llama2-70b-oasst-sft-v10.Q4_K_S.gguf | Q4_K_S | 4 | 39.08 GB | 41.58 GB | small, greater quality loss |
| llama2-70b-oasst-sft-v10.Q4_K_M.gguf | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
| llama2-70b-oasst-sft-v10.Q5_0.gguf | Q5_0 | 5 | 47.46 GB | 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| llama2-70b-oasst-sft-v10.Q5_K_S.gguf | Q5_K_S | 5 | 47.46 GB | 49.96 GB | large, low quality loss - recommended |
| llama2-70b-oasst-sft-v10.Q5_K_M.gguf | Q5_K_M | 5 | 48.76 GB | 51.26 GB | large, very low quality loss - recommended |
| llama2-70b-oasst-sft-v10.Q6_K.gguf | Q6_K | 6 | 56.59 GB | 59.09 GB | very large, extremely low quality loss |
| llama2-70b-oasst-sft-v10.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

Note: the above RAM figures assume no GPU offloading; they are roughly the model file size plus 2.5 GB of overhead. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

Q6_K and Q8_0 files are split and require joining

Note: HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.

Click for instructions regarding Q6_K and Q8_0 files

q6_K

Please download:

  • llama2-70b-oasst-sft-v10.Q6_K.gguf-split-a
  • llama2-70b-oasst-sft-v10.Q6_K.gguf-split-b

q8_0

Please download:

  • llama2-70b-oasst-sft-v10.Q8_0.gguf-split-a
  • llama2-70b-oasst-sft-v10.Q8_0.gguf-split-b

To join the files, do the following:

Linux and macOS:

cat llama2-70b-oasst-sft-v10.Q6_K.gguf-split-* > llama2-70b-oasst-sft-v10.Q6_K.gguf && rm llama2-70b-oasst-sft-v10.Q6_K.gguf-split-*
cat llama2-70b-oasst-sft-v10.Q8_0.gguf-split-* > llama2-70b-oasst-sft-v10.Q8_0.gguf && rm llama2-70b-oasst-sft-v10.Q8_0.gguf-split-*

Windows command line:

COPY /B llama2-70b-oasst-sft-v10.Q6_K.gguf-split-a + llama2-70b-oasst-sft-v10.Q6_K.gguf-split-b llama2-70b-oasst-sft-v10.Q6_K.gguf
del llama2-70b-oasst-sft-v10.Q6_K.gguf-split-a llama2-70b-oasst-sft-v10.Q6_K.gguf-split-b

COPY /B llama2-70b-oasst-sft-v10.Q8_0.gguf-split-a + llama2-70b-oasst-sft-v10.Q8_0.gguf-split-b llama2-70b-oasst-sft-v10.Q8_0.gguf
del llama2-70b-oasst-sft-v10.Q8_0.gguf-split-a llama2-70b-oasst-sft-v10.Q8_0.gguf-split-b
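
If you would rather not rely on shell tools, here is a minimal cross-platform Python sketch that performs the same join (the part file names follow the pattern above):

# Minimal sketch: join the split GGUF parts into one file, then delete the parts.
# Works the same on Linux, macOS and Windows.
import glob, os, shutil

base = "llama2-70b-oasst-sft-v10.Q6_K.gguf"   # or the Q8_0 name
parts = sorted(glob.glob(base + "-split-*"))

with open(base, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)       # streams in chunks, low memory use

for part in parts:
    os.remove(part)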

Example llama.cpp command

Make sure you are using llama.cpp from commit 6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9 or later.

For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.

./main -t 10 -ngl 32 -m llama2-70b-oasst-sft-v10.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"

Change -t 10 to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use -t 8. If offloading all layers to GPU, set -t 1.

Change -ngl 32 to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change -c 4096 to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the -p <PROMPT> argument with -i -ins

For other parameters and how to use them, please refer to the llama.cpp documentation

How to run in text-generation-webui

Further instructions here: text-generation-webui/docs/llama.cpp.md.

How to run from Python code

You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries.

How to load this model from Python using ctransformers

First install the package

# Base ctransformers with no GPU acceleration
pip install "ctransformers>=0.2.24"
# Or with CUDA GPU acceleration
pip install "ctransformers[cuda]>=0.2.24"
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install "ctransformers>=0.2.24" --no-binary ctransformers

Simple example code to load one of these GGUF models

from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama2-70B-OASST-SFT-v10-GGUF", model_file="llama2-70b-oasst-sft-v10.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
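
A comparable sketch with llama-cpp-python (parameter names follow that library; adjust n_gpu_layers to your hardware, and note the ChatML stop token):

from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; 0 for CPU only.
llm = Llama(model_path="llama2-70b-oasst-sft-v10.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=50)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a short poem about open source.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["<|im_end|>"])
print(output["choices"][0]["text"])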

How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:
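
As a starting point, here is a minimal sketch using LangChain's LlamaCpp wrapper (the class and parameter names come from LangChain's llama-cpp-python integration and may vary between LangChain versions):

from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="llama2-70b-oasst-sft-v10.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=50,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
    max_tokens=256,
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nName three uses for a Raspberry Pi.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(llm(prompt))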

Discord

For further support, and discussions on these models and AI in general, join us at:

TheBloke AI's Discord server

Thanks, and how to contribute.

Thanks to the chirper.ai team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

Special thanks to: Aemon Algiz.

Patreon special mentions: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

Original model card: OpenAssistant's Llama2 70B SFT v10

Open-Assistant Llama2 70B SFT v10

This model is an Open-Assistant fine-tuning of Meta's Llama2 70B LLM. It was fine-tuned in two stages: first on a mix of synthetic instructions and coding tasks, and then in a "polishing" stage on the best human demonstrations collected at open-assistant.io up to July 23, 2023 (see Configuration Details below).

Model Details

Prompting / Prompt Template

Due to public demand (see survey) we changed the prompt template for this model from custom prompter/assistant tokens to OpenAI's ChatML standard prompt format. We hope that this leads to greater compatibility with chat inference/frontend applications.

Prompt dialogue template:

"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""

The model input can contain multiple conversation turns between user and assistant, e.g.

<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)

The model was partly trained with orca system messages. For inference we recommend using the official Llama2 system message:

<|im_start|>system
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.

If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<|im_end|>
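
Putting the multi-turn format and the recommended system message together, here is a minimal sketch for assembling a conversation prompt (the helper name and list-of-pairs turn structure are just illustrative):

# Minimal sketch: assemble a multi-turn ChatML prompt ending with an open
# assistant turn for the model to complete.
LLAMA2_SYSTEM = "..."  # paste the full Llama2 system message shown above

def build_conversation(turns, system_message=LLAMA2_SYSTEM):
    parts = [f"<|im_start|>system\n{system_message}<|im_end|>"]
    for user_msg, assistant_msg in turns:
        parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>")
        if assistant_msg is not None:
            parts.append(f"<|im_start|>assistant\n{assistant_msg}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# A previous exchange plus a new user question; the model continues after the
# final assistant tag.
print(build_conversation([
    ("What is RAII?", "RAII ties a resource's lifetime to an object's lifetime."),
    ("Give a short example.", None),
]))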

Credits & Special Thanks

We want to especially thank everyone who contributed to the crowd-sourced Open-Assistant dataset creation on https://open-assistant.io/ - without you this project would not have been possible.

Ethical Considerations and Limitations

Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of llama2-70b-oasst-sft-v10 cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of llama2-70b-oasst-sft-v10, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see Meta's Responsible Use Guide.

Inference via TGI

An early version of this model had an embedding count of 32,007, which was incompatible with sharding via TGI. In the current version the embeddings and the lm_head weights have been padded to a multiple of 128 (by replicating the embeddings of the unk-token (id: 0)). Sharded inference with TGI should now work as expected.
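
For reference, here is a rough sketch of how such padding could be reproduced with transformers. This is not the authors' actual script; the model id, dtype, and the way new rows are filled are illustrative assumptions:

# Rough sketch (not the authors' script): pad the embedding matrix and lm_head
# to a multiple of 128 by replicating the unk-token (id 0) row.
# For a 70B model you would normally use device_map / low-memory loading; omitted here.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "OpenAssistant/llama2-70b-oasst-sft-v10", torch_dtype=torch.float16
)
old_size = model.get_input_embeddings().weight.shape[0]
new_size = ((old_size + 127) // 128) * 128   # round up to a multiple of 128

model.resize_token_embeddings(new_size)
with torch.no_grad():
    model.get_input_embeddings().weight[old_size:] = model.get_input_embeddings().weight[0]
    model.get_output_embeddings().weight[old_size:] = model.get_output_embeddings().weight[0]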

Configuration Details

The "pretokenizer" utility used to tokenize the datamix is part of the Open-Assistant github repository and can be found here: model/pretokenizer.

Stage 1 Pretokenizer Configuration

Entries of the dataset with assistant replies shorter than 25 tokens were excluded from training.

oasst_pre10_min25:
  datasets:
    - megacode2:
        fraction: 0.5
        val_split: 0.01
        max_val_set: 1000
    - orca-chat:
        val_split: 0.01
        max_val_set: 1000
    - dolly15k_multilingual:
        val_split: 0.05
        max_val_set: 300
    - oa_leet10k:
        val_split: 0.05
        max_val_set: 250
  output_dir: "output/oasst_pre10_min25"
  filename_prefix: "oasst_pre10"
  min_assistant_tokens: 25

Stage 1 dataset statistics:

# Stats for output/oasst_pre10_min25_llama2

## Stats for 'Subset of InstructionDataset (megacode2)' (466364 samples (50.0%))
-----------------
  Accepted: 398223/466364 (85.4%)
  Accepted tokens: 167676873
  Skipped: 68141 (14.6%)
  Min tokens per sample: 36
  Max tokens per sample: 11810
  Avg tokens per sample: 421.063
-----------------

## Stats for 'Subset of OrcaChat (orca-chat)' (325616 samples (100.0%))
-----------------
  Accepted: 325616/325616 (100.0%)
  Accepted tokens: 178307574
  Skipped: 0 (0.0%)
  Min tokens per sample: 105
  Max tokens per sample: 10408
  Avg tokens per sample: 547.601
-----------------

## Stats for 'Subset of Dolly15kMultilingual' (57020 samples (100.0%))
-----------------
  Accepted: 47494/57020 (83.3%)
  Accepted tokens: 13883177
  Skipped: 9526 (16.7%)
  Min tokens per sample: 34
  Max tokens per sample: 9172
  Avg tokens per sample: 292.314
-----------------

## Stats for 'Subset of InstructionDataset (oa_leet10k)' (22236 samples (100.0%))
-----------------
  Accepted: 22236/22236 (100.0%)
  Accepted tokens: 15905296
  Skipped: 0 (0.0%)
  Min tokens per sample: 168
  Max tokens per sample: 10588
  Avg tokens per sample: 715.295
-----------------

## Stats for 'total' (871236 samples (100.0%))
-----------------
  Accepted: 793569/871236 (91.1%)
  Accepted tokens: 375772920
  Skipped: 77667 (8.9%)
  Min tokens per sample: 34
  Max tokens per sample: 11810
  Avg tokens per sample: 473.523
-----------------

Stage 2 Pretokenizer Configuration

oasst_top1:
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-07-23_oasst_ready.tar.gz
        top_k: 1
        val_split: 0.05
  output_dir: "output/oasst_top1_2023-07-23"
  filename_prefix: "oasst_top1"

Stage 2 dataset statistics:

# Stats for output/oasst_top1_2023-07-23_llama2

## Stats for 'ListDataset' (11441 samples (100.0%))
-----------------
  Accepted: 11441/11441 (100.0%)
  Accepted tokens: 5315368
  Skipped: 0 (0.0%)
  Min tokens per sample: 20
  Max tokens per sample: 5407
  Avg tokens per sample: 464.58945896337735
-----------------

## Stats for 'total' (11441 samples (100.0%))
-----------------
  Accepted: 11441/11441 (100.0%)
  Accepted tokens: 5315368
  Skipped: 0 (0.0%)
  Min tokens per sample: 20
  Max tokens per sample: 5407
  Avg tokens per sample: 464.58945896337735
-----------------

Megatron Fine-Tuning Arguments for Stage 1 (Instruction Tuning):

--tensor_model_parallel_size 8
--pipeline_model_parallel_size 4
--load ./checkpoints/llama2-70b-tp8-pp4
--save ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10
--tensorboard_dir ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10/logging
--data_path ./data/oasst_pre10_min25_llama2/oasst_sft10-train
--model_name llama2
--tokenizer_type SentencePieceTokenizer
--bf16
--global_batch_size 64
--micro_batch_size 2
--vocab_file=./llama2/Llama-2-7b/tokenizer.model
--use_rms_norm
--glu_activation swiglu
--no_tie_embed_logits
--vocab_extra_ids_list "\"<|im_start|>,<|im_end|>\""
--layernorm_epsilon 1e-5
--use_flash_attn
--no_bias_gelu_fusion
--seq_length 4096
--max_position_embeddings 4096
--log_interval 1
--save_interval 500
--eval_interval 50
--eval_iters 10
--hidden_dropout 0.0
--position_embedding_type rotary
--no_bias_dropout_fusion
--use_checkpoint_args
--train_iters 12000
--attention_dropout 0.0
--adam_beta1 0.9
--adam_beta2 0.95
--adam_eps 1e-12
--lr_decay_style cosine
--lr_warmup_iters 100
--lr 1e-5
--min_lr 1e-6
--weight_decay 0.000001
--sequence_parallel
--recompute_granularity selective
--log_timers_to_tensorboard
--rope_scaling_factor 1.0
--wandb_logger

Megatron Fine-Tuning Arguments for Stage 2 (OASST Polishing, LIMA Dropout):

--tensor_model_parallel_size 8
--pipeline_model_parallel_size 4
--load ./checkpoints/llama2-70b-tp8-pp4-oasst_pre10
--save ./checkpoints/llama2-70b-tp8-pp4-oasst_sft10
--tensorboard_dir ./checkpoints/llama2-70b-tp8-pp4-oasst_sft10/logging
--data_path ./data/oasst_top1_2023-07-23_llama2/oasst_top1-train
--model_name llama2
--tokenizer_type SentencePieceTokenizer
--bf16
--global_batch_size 64
--micro_batch_size 2
--vocab_file=./llama2/Llama-2-7b/tokenizer.model
--use_rms_norm
--glu_activation swiglu
--no_tie_embed_logits
--vocab_extra_ids_list "\"<|im_start|>,<|im_end|>\""
--layernorm_epsilon 1e-5
--use_flash_attn
--no_bias_gelu_fusion
--seq_length 4096
--max_position_embeddings 4096
--log_interval 1
--save_interval 346
--eval_interval 50
--eval_iters 10
--hidden_dropout 0.25
--lima_dropout
--position_embedding_type rotary
--no_bias_dropout_fusion
--use_checkpoint_args
--train_iters 519
--attention_dropout 0.0
--adam_beta1 0.9
--adam_beta2 0.95
--adam_eps 1e-12
--lr_decay_style cosine
--lr_warmup_iters 100
--lr 1e-5
--min_lr 1e-6
--weight_decay 0.000001
--sequence_parallel
--recompute_granularity selective
--log_timers_to_tensorboard
--rope_scaling_factor 1.0
--finetune
--wandb_logger