puneeshkhanna
committed on
Update README.md
README.md CHANGED
@@ -20,7 +20,7 @@ This repository contains the **Falcon3-3B-Base**. It achieves strong results on
 Falcon3-3B-Base supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 8K.
 Falcon3-3B-Base, pruned (depth + width) from Falcon3-7B-Base, was efficiently trained on only 100 GT using a knowledge distillation objective.
 
-⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.**
+⚠️ **This is a raw, pretrained model, which should be further finetuned using SFT, RLHF, continued pretraining, etc. for most use cases.**
 
 ## Model Details
 - Architecture
@@ -29,8 +29,9 @@ Falcon3-3B-Base, pruned (depth + width) from Falcon3-7B-Base, was efficiently tra
 - Grouped query attention (GQA) for faster inference: 12 query heads and 4 KV heads
 - Wider head dimension: 256
 - High RoPE value to support long context understanding: 1000042
--
--
+- Uses SwiGLU and RMSNorm
+- 8K context length
+- 131K vocab size
 - Pruned and healed from Falcon3-7B-Base on only 100 gigatokens of data comprising web, code, STEM, high-quality, and multilingual data, using 2048 H100 GPU chips
 - Supports EN, FR, ES, PT
 - Developed by [Technology Innovation Institute](https://www.tii.ae)