---
language:
- en
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
license: other
license_name: falcon-mamba-7b-license
license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html
base_model: tiiuae/falcon-mamba-7b
tags:
- TensorBlock
- GGUF
model-index:
- name: falcon-mamba-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 33.36
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 19.88
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 3.63
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.05
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.86
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 14.47
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/falcon-mamba-7b
      name: Open LLM Leaderboard
---
TensorBlock

Feedback and support: TensorBlock's Twitter/X, Telegram group, and Discord server

## tiiuae/falcon-mamba-7b - GGUF

This repo contains GGUF format model files for [tiiuae/falcon-mamba-7b](https://huggingface.co/tiiuae/falcon-mamba-7b). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
Run them on the TensorBlock client using your local machine.
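The quantized files can also be run directly with llama.cpp. A minimal sketch, assuming `llama-cli` has been built from llama.cpp at or after the pinned commit above and that the Q4_K_M file from the table below has been downloaded into the working directory (the prompt and token count are placeholders):

```shell
# Generate 128 tokens from the base model using a plain-text prompt.
./llama-cli -m falcon-mamba-7b-Q4_K_M.gguf -p "The Mamba architecture differs from a Transformer in that" -n 128
```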
## Prompt template

falcon-mamba-7b is a base model and does not define a chat prompt template.

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [falcon-mamba-7b-Q2_K.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q2_K.gguf) | Q2_K | 2.389 GB | smallest, significant quality loss - not recommended for most purposes |
| [falcon-mamba-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_S.gguf) | Q3_K_S | 3.050 GB | very small, high quality loss |
| [falcon-mamba-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_M.gguf) | Q3_K_M | 3.050 GB | very small, high quality loss |
| [falcon-mamba-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q3_K_L.gguf) | Q3_K_L | 3.050 GB | small, substantial quality loss |
| [falcon-mamba-7b-Q4_0.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_0.gguf) | Q4_0 | 3.915 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [falcon-mamba-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_K_S.gguf) | Q4_K_S | 3.915 GB | small, greater quality loss |
| [falcon-mamba-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q4_K_M.gguf) | Q4_K_M | 3.915 GB | medium, balanced quality - recommended |
| [falcon-mamba-7b-Q5_0.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_0.gguf) | Q5_0 | 4.730 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [falcon-mamba-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_K_S.gguf) | Q5_K_S | 4.730 GB | large, low quality loss - recommended |
| [falcon-mamba-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q5_K_M.gguf) | Q5_K_M | 4.730 GB | large, very low quality loss - recommended |
| [falcon-mamba-7b-Q6_K.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q6_K.gguf) | Q6_K | 5.595 GB | very large, extremely low quality loss |
| [falcon-mamba-7b-Q8_0.gguf](https://huggingface.co/tensorblock/falcon-mamba-7b-GGUF/blob/main/falcon-mamba-7b-Q8_0.gguf) | Q8_0 | 7.232 GB | very large, extremely low quality loss - not recommended |

## Downloading instructions

### Command line

First, install the Hugging Face Hub CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/falcon-mamba-7b-GGUF --include "falcon-mamba-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/falcon-mamba-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
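To fetch every file in the repo at once, omit `--include` entirely. A sketch, with the directory name a placeholder; note this pulls all twelve quant files from the table above (roughly 50 GB in total):

```shell
# Download the entire repo contents into one local directory.
huggingface-cli download tensorblock/falcon-mamba-7b-GGUF --local-dir MY_LOCAL_DIR
```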