Update README.md
---

# Plasmoxy/bge-micro-v2-Q4_K_M-GGUF

A really small BGE embedding model with a 4-bit GGUF quant.

This model was converted to GGUF format from [`TaylorAI/bge-micro-v2`](https://huggingface.co/TaylorAI/bge-micro-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TaylorAI/bge-micro-v2) for more details on the model.

**!!! IMPORTANT !!! - the context size is 512, so pass `-c 512` when running llama.cpp.**
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
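```bash
brew install llama.cpp
```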
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512 -p "The meaning to life and the universe is"
```
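For raw embedding vectors from the command line, llama.cpp also ships a `llama-embedding` tool. A minimal sketch, assuming your build includes it and that it accepts the same `--hf-repo`/`--hf-file` flags as `llama-cli`:
```bash
# Hypothetical invocation: prints the embedding vector for the prompt text.
llama-embedding --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512 -p "The meaning to life and the universe is"
```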
### Server:
```bash
llama-server --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512
```
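Since this is an embedding model rather than a chat model, the server is mainly useful for its embeddings endpoint. A minimal sketch, assuming `llama-server`'s `--embedding` flag and its OpenAI-compatible `/v1/embeddings` route on the default port 8080:
```bash
# Start the server with the embeddings endpoint enabled (--embedding assumed).
llama-server --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512 --embedding

# In another shell: request an embedding vector for a piece of text.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "The meaning to life and the universe is"}'
```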
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
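If you would rather build llama.cpp yourself than use brew, a rough sketch of those usage steps (assumes a CMake toolchain; `-DLLAMA_CURL=ON` is what enables the `--hf-repo` download flow):
```bash
# Clone and build llama.cpp from source (CMake build assumed).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release

# Binaries such as llama-cli and llama-server land in build/bin.
./build/bin/llama-server --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512
```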