---
base_model: tiiuae/Falcon3-10B-Instruct
library_name: transformers
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
  - bitnet
  - falcon3
---


# Table of Contents

  1. TL;DR
  2. Model Details
  3. Training Details
  4. Usage
  5. Evaluation
  6. Citation

# TL;DR

# Model Details

## Model Description

- **Developed by:** https://www.tii.ae
- **Model type:** Causal decoder-only, instruct/chat version
- **Architecture:** Pure transformer, 1.58-bit version
- **Language(s) (NLP):** Mainly English
- **License:** TII Falcon License 2.0

# Training Details

The model was trained following the strategies from the recent 1-bit LLM Hugging Face blogpost and the 1-bit LLM paper. For more details about the training protocol of this model, please refer to the Falcon 3 technical report, section "Compression".
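To illustrate the core 1.58-bit idea referenced above (not this model's actual training code), BitNet-b1.58-style quantization maps each weight to the ternary set {-1, 0, +1} using a per-tensor "absmean" scale. A minimal pure-Python sketch, where the function name and the list-based weights are our own illustrative choices:

```python
def absmean_ternary_quantize(weights):
    # Hedged sketch of BitNet b1.58-style "absmean" quantization:
    # scale by the mean absolute value of the tensor, then round each
    # scaled weight to the nearest value in {-1, 0, +1}.
    eps = 1e-5  # avoid division by zero for all-zero tensors
    scale = max(sum(abs(w) for w in weights) / len(weights), eps)
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

# Dequantization approximates the original weights as quantized * scale;
# storing only ternary values plus one scale is what yields ~1.58 bits
# per weight (log2(3) ≈ 1.58).
```

Large-magnitude weights saturate at ±1 while small ones collapse to 0, which is why the per-tensor scale matters: it keeps the ternary grid centered on the typical weight magnitude.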

# Usage

Currently, to use this model you can rely on either the Hugging Face `transformers` library or the BitNet library. You can also experiment with the model using the falcon-1.58bit playground (available only for the 7B instruct version).

## 🤗 transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon3-10B-Instruct-1.58bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Perform text generation using the model's chat template
messages = [{"role": "user", "content": "What is 1.58-bit quantization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

## BitNet

```shell
git clone https://github.com/microsoft/BitNet && cd BitNet
pip install -r requirements.txt
python setup_env.py --hf-repo tiiuae/Falcon3-10B-Instruct-1.58bit -q i2_s
python run_inference.py -m models/Falcon3-10B-1.58bit/ggml-model-i2_s.gguf -p "You are a helpful assistant" -cnv
```

# Evaluation

We report our internal pipeline benchmark results in the table below.

Note: the evaluation results are normalized scores from the v2 leaderboard tasks, whereas the results reported for the original models in the blogpost are raw scores.

| Benchmark | Llama3-8B-1.58-100B-tokens | Falcon3-10B-Instruct-1.58bit |
|:----------|:--------------------------:|:----------------------------:|
| IFEval    | 17.91 | 54.37 |
| MUSR      | 4.87  | 2.57  |
| GPQA      | 1.83  | 4.27  |
| BBH       | 5.36  | 6.59  |
| MMLU-PRO  | 2.78  | 6.62  |
| MATH      | 0.26  | 2.44  |
| Average   | 5.5   | 12.81 |
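For context on what "normalized score" typically means in leaderboard-style evaluations: a hedged sketch, assuming the common convention that random-chance accuracy maps to 0 and perfect accuracy maps to 100 (the function name and exact convention are illustrative assumptions, not this card's official evaluation pipeline):

```python
def normalize_score(raw_acc, random_baseline):
    # Hypothetical leaderboard-style normalization: rescale raw accuracy
    # (in [0, 1]) so that random guessing -> 0 and a perfect score -> 100.
    # Scores below the random baseline are clamped to 0.
    return max((raw_acc - random_baseline) / (1.0 - random_baseline), 0.0) * 100.0
```

Under such a convention, a 4-way multiple-choice task with a 25% random baseline maps a raw accuracy of 62.5% to a normalized score of 50. This is why normalized scores on hard benchmarks can look much lower than raw accuracies.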

# Useful links

# Citation

If the Falcon3 family of models was helpful to your work, feel free to cite it:

```bibtex
@misc{Falcon3,
    title = {The Falcon 3 Family of Open Models},
    author = {Falcon-LLM Team},
    month = {December},
    year = {2024}
}
```