---
language:
  - en
tags:
  - falcon3
  - falcon3_mamba
base_model:
  - tiiuae/Falcon3-Mamba-7B-Base
---

# Falcon3-Mamba-7B-Instruct

The Falcon3 family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

This repository contains Falcon3-Mamba-7B-Instruct. Compared to SSM-based models of similar size, it achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks. Falcon3-Mamba-7B-Instruct supports a context length of up to 32K tokens and one language (English).

## Model Details

- Architecture (same as Falcon-Mamba-7B; see the config sketch after this list)
  - Mamba1-based, causal decoder-only architecture trained on a causal language modeling task (i.e., predicting the next token)
  - 64 decoder blocks
  - width: 4096
  - state_size: 16
  - 32K context length
  - 65K vocab size
- Pretrained on 7 teratokens of web, code, STEM, and high-quality data, using 2048 H100 GPUs
- Post-trained on 1.2 million samples of STEM, conversation, code, and safety data
- Developed by Technology Innovation Institute
- License: TII Falcon-LLM License 2.0
- Model release date: December 2024
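
As a quick sanity check, the architecture hyperparameters listed above can be read off the model config. A minimal sketch: the attribute names follow the Mamba-style configs in `transformers` and may vary across library versions, hence the `getattr` fallback.

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("tiiuae/Falcon3-Mamba-7B-Instruct")

# Expected values per the list above: 64 decoder blocks, width 4096,
# state_size 16, ~65K vocabulary. Attribute names are an assumption
# based on Mamba-style configs in transformers.
for name in ("num_hidden_layers", "hidden_size", "state_size", "vocab_size"):
    print(name, getattr(cfg, name, "n/a"))
```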

## Getting started

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/Falcon3-Mamba-7B-Instruct"

# Load the model weights and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]

# Format the conversation with the model's chat template and tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)

# Strip the prompt tokens so only the newly generated completion remains.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
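
For quick experiments, the same generation can also be run through the high-level `pipeline` API. A minimal sketch of standard `transformers` usage, not something this card prescribes; chat-style message input requires a recent `transformers` release.

```python
from transformers import pipeline

# Same model as above, loaded through the text-generation pipeline.
pipe = pipeline(
    "text-generation",
    model="tiiuae/Falcon3-Mamba-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
messages = [{"role": "user", "content": "How many hours in one day?"}]
out = pipe(messages, max_new_tokens=128)
# With chat-style input, generated_text holds the full conversation;
# the last message is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```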

## Benchmarks

In the following table, we report benchmark results from our internal evaluation pipeline:

| Category | Benchmark | Zamba2-7B-instruct | Jamba-1.5-Mini-instruct | Qwen2-7B-Instruct | Llama-3.1-8B-Instruct | Falcon3-Mamba-7B-Instruct |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | - | - | - | 68.5% | - |
| | MMLU-PRO (5-shot) | 32.4% | - | - | 29.6% | - |
| | IFEval | 69.9% | - | - | 78.6% | - |
| Math | GSM8K (5-shot) | - | - | - | - | - |
| | MATH (4-shot) | - | - | - | - | - |
| Reasoning | Arc Challenge (25-shot) | - | - | - | - | - |
| | GPQA (0-shot) | 10.3% | - | - | 2.4% | - |
| | MUSR (0-shot) | 8.2% | - | - | 8.4% | - |
| | BBH (3-shot) | 33.3% | - | - | 29.9% | - |
| CommonSense Understanding | PIQA (0-shot) | - | - | - | - | - |
| | SciQ (0-shot) | - | - | - | - | - |
| | Winogrande (0-shot) | - | - | - | - | - |
| | OpenbookQA (0-shot) | - | - | - | - | - |
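
Since these numbers come from an internal pipeline, public harnesses will not reproduce them exactly. As one way to run comparable public evaluations, here is a hedged sketch using the EleutherAI lm-evaluation-harness Python API (`pip install lm-eval`); the API shape shown follows the 0.4.x series and is an assumption, not part of this card.

```python
import lm_eval

# Runs MMLU (5-shot) against the instruct checkpoint via the Hugging Face
# backend. Task names and kwargs follow lm-eval 0.4.x and may differ in
# other versions; scores will not exactly match the internal pipeline.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-Mamba-7B-Instruct,dtype=auto",
    tasks=["mmlu"],
    num_fewshot=5,
)
print(results["results"])
```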

## Citation

If the Falcon3 family of models was helpful to your work, feel free to cite it:

```bibtex
@misc{Falcon3,
    title = {The Falcon 3 family of Open Models},
    author = {TII Team},
    month = {December},
    year = {2024}
}
```