miqu-1-70b-sf
This is a 2.75bpw EXL2 quant of 152334H/miqu-1-70b-sf. Details about the model can be found at the original model page.
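If you want to pull one of these quants from the command line, something along these lines should work with `huggingface-cli` (the repo id below is a placeholder; substitute the actual repo for the quant you want):

```bash
# Placeholder repo id -- replace with the actual quant repo.
huggingface-cli download your-namespace/miqu-1-70b-sf_exl2_2.75bpw \
  --local-dir models/miqu-1-70b-sf_exl2_2.75bpw
```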
These quants were made with exllamav2 version 0.0.18. Quants made with this version of EXL2 may not work on older versions of the exllamav2 library. If you have problems loading these models, please update Text Generation WebUI to the latest version.
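For reference, here is a minimal sketch of loading one of these quants in Text Generation WebUI from the command line (flag names assumed from recent WebUI versions; adjust `--gpu-split` to your cards):

```bash
# Launch Text Generation WebUI with the ExLlamav2 loader.
# --gpu-split values are GB of VRAM per GPU; tune them for your hardware.
python server.py \
  --model miqu-1-70b-sf_exl2_2.75bpw \
  --loader exllamav2 \
  --gpu-split 22,24
```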
Below are the perplexity scores for the EXL2 quants, measured against wikitext-2 (see the test script further down). A lower score is better.
| Quant Level (bpw) | Perplexity Score |
|-------------------|------------------|
| 5.0 | 4.2637 |
| 4.5 | 4.2876 |
| 4.0 | 4.3097 |
| 3.5 | 4.4459 |
| 3.0 | 4.6504 |
| 2.75 | 5.1638 |
| 2.5 | 5.1715 |
| 2.25 | 6.0848 |
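Note the sharp jump between 3.0bpw (4.6504) and 2.75bpw (5.1638): quality falls off quickly below 3.0bpw, so only go that low if VRAM forces it.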
Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Mistral, Vicuna-v1.1 and Vicuna-v0 prompt templates. A higher score is better.
| Quant Level (bpw) | ChatML | Alpaca | Mistral | Vicuna-v1.1 | Vicuna-v0 |
|-------------------|--------|--------|---------|-------------|-----------|
| 5.0 | 79.91 | 81.45 | 81.11 | 78.37 | 76.64 |
| 4.5 | 80.64 | 80.90 | 81.65 | 77.04 | 74.60 |
| 4.0 | 80.78 | 79.53 | 82.78 | 79.17 | 76.41 |
| 3.5 | 81.11 | 82.42 | 82.34 | 81.04 | 78.09 |
| 3.0 | 79.13 | 77.74 | 80.11 | 79.38 | 77.25 |
| 2.75 | 79.60 | 77.85 | 79.71 | 76.93 | 75.91 |
| 2.5 | 77.45 | 77.00 | 78.40 | 75.86 | 75.25 |
| 2.25 | 77.18 | 74.06 | 76.75 | 75.56 | 74.28 |
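The Mistral template produces the highest score at most quant levels, consistent with the model's Mistral lineage, though Alpaca edges it out at 5.0 and 3.5.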
This was the script used for perplexity testing.
```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name and the bit precisions to test
MODEL_NAME="miqu-1-70b-sf"
BIT_PRECISIONS=(8.0 7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)

# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  MODEL_DIR="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
  if [ -d "$MODEL_DIR" ]; then
    # Run exllamav2's perplexity evaluation against wikitext-2
    output=$(python test_inference.py -m "$MODEL_DIR" -gs 22,24 -ed data/wikitext/wikitext-2-v1.parquet)
    # Extract the perplexity value and print a table row
    score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
    echo "| $BIT_PRECISION | $score |"
  fi
done
```
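The `-gs 22,24` argument splits the model across two GPUs, reserving roughly 22 GB and 24 GB of VRAM respectively; adjust or drop it to match your hardware. `test_inference.py` ships with the exllamav2 repo, so the script assumes it is run from a checkout of that repo.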
This is the script used for quantization.
```bash
#!/bin/bash

# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2

# Set the model name
MODEL_NAME="miqu-1-70b-sf"

# Define variables
MODEL_DIR="models/152334H_miqu-1-70b-sf"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"

# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
  echo "Creating $MEASUREMENT_FILE"
  # Start from a clean working directory
  if [ -d "$OUTPUT_DIR" ]; then
    rm -r "$OUTPUT_DIR"
  fi
  mkdir "$OUTPUT_DIR"
  python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -om "$MEASUREMENT_FILE"
fi

# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(5.0)
BIT_PRECISIONS=(8.0 7.0 6.0 5.5 5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)

for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
  # If it doesn't already exist, make the quant
  if [ ! -d "$CONVERTED_FOLDER" ]; then
    echo "Creating $CONVERTED_FOLDER"
    # Start from a clean working directory
    if [ -d "$OUTPUT_DIR" ]; then
      rm -r "$OUTPUT_DIR"
    fi
    mkdir "$OUTPUT_DIR"
    mkdir "$CONVERTED_FOLDER"
    # Quantize, reusing the saved measurement file
    python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -m "$MEASUREMENT_FILE" -b "$BIT_PRECISION" -cf "$CONVERTED_FOLDER"
  fi
done
```
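The measurement pass (`-om`) is the slow part of EXL2 quantization; once the JSON is saved, each additional bitrate reuses it via `-m` and only needs the quantization pass itself, which is what makes producing a dozen quants from one measurement practical.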