BAAI/bge-m3 in GGUF format

original: https://huggingface.co/BAAI/bge-m3

quantization:

REL=b3827  # pin a llama.cpp release; can change to a later one
# prebuilt binaries (llama-quantize, llama-embedding, llama-server)
wget https://github.com/ggerganov/llama.cpp/releases/download/$REL/llama-$REL-bin-ubuntu-x64.zip --content-disposition --continue &> /dev/null
# matching source tree (for convert_hf_to_gguf.py and requirements.txt)
wget https://github.com/ggerganov/llama.cpp/archive/refs/tags/$REL.zip --content-disposition --continue &> /dev/null
unzip -q llama-$REL-bin-ubuntu-x64.zip
unzip -q llama.cpp-$REL.zip
mv llama.cpp-$REL/* .
rm -r llama.cpp-$REL/ llama-$REL-bin-ubuntu-x64.zip llama.cpp-$REL.zip
pip install -q -r requirements.txt

# fetch the original HF checkpoint and convert it to GGUF at full f32
# precision, so every quantization below starts from a lossless base
rm -rf models/tmp/
git clone --depth=1 --single-branch https://huggingface.co/BAAI/bge-m3 models/tmp
python convert_hf_to_gguf.py models/tmp/ --outfile model-f32.gguf --outtype f32

build/bin/llama-quantize model-f32.gguf model-f16.gguf    f16    2> /dev/null
build/bin/llama-quantize model-f32.gguf model-bf16.gguf   bf16   2> /dev/null
build/bin/llama-quantize model-f32.gguf model-q8_0.gguf   q8_0   2> /dev/null
build/bin/llama-quantize model-f32.gguf model-q6_k.gguf   q6_k   2> /dev/null
build/bin/llama-quantize model-f32.gguf model-q5_k_m.gguf q5_k_m 2> /dev/null
build/bin/llama-quantize model-f32.gguf model-q5_k_s.gguf q5_k_s 2> /dev/null
build/bin/llama-quantize model-f32.gguf model-q4_k_m.gguf q4_k_m 2> /dev/null
build/bin/llama-quantize model-f32.gguf model-q4_k_s.gguf q4_k_s 2> /dev/null
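Because the quantize commands above discard stderr, a failed run leaves no message behind. A small helper (a sketch; the name `check_gguf_outputs` is illustrative) can confirm every expected output file was actually written:

```shell
# verify that each expected .gguf output exists and is non-empty;
# returns non-zero and names the missing files otherwise
check_gguf_outputs() {
  local status=0
  for q in f32 f16 bf16 q8_0 q6_k q5_k_m q5_k_s q4_k_m q4_k_s; do
    f="model-$q.gguf"
    [ -s "$f" ] || { echo "missing or empty: $f" >&2; status=1; }
  done
  return "$status"
}
```

Run `check_gguf_outputs && ls -lh model-*.gguf` after quantizing; file sizes should shrink as the bit width drops.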

rm -rf models/yolo/
mkdir -p models/yolo
mv model-*.gguf models/yolo/
touch models/yolo/README.md
huggingface-cli upload bge-m3-gguf models/yolo .  # upload the folder contents to the repo root

usage:

# the example prompt is Vietnamese for "she laughed and chatted all day long"
build/bin/llama-embedding -m model-q5_k_m.gguf -p "Cô ấy cười nói suốt cả ngày" --embd-output-format array 2> /dev/null
# OR
build/bin/llama-server --embedding -c 8192 -m model-q5_k_m.gguf
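With llama-server running as above, embeddings can also be fetched over HTTP. A minimal sketch, assuming the server sits at its default 127.0.0.1:8080 and using its /embedding endpoint:

```shell
# JSON payload with the text to embed (the same Vietnamese example sentence,
# "she laughed and chatted all day long")
payload='{"content": "Cô ấy cười nói suốt cả ngày"}'

# POST it to the running llama-server; adjust the address if you passed
# --host / --port when starting the server
curl -s http://127.0.0.1:8080/embedding \
  -H "Content-Type: application/json" \
  -d "$payload" || echo "no server reachable at 127.0.0.1:8080" >&2
```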
model size: 567M params, architecture: bert
