Gale comprises three decoder-only transformer models derived from [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1), built by dropping ranges of layers from the original Mistral-7B model: the slices `[15:-8]`, `[10:-3]`, and `[2:-2]` are removed for the large, medium, and small variants respectively. The models were then fine-tuned with high-rank adapters on a small randomized subset of high-quality web documents to ensure coherent text generation.
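The bracket notation above reads as Python list slices over the model's decoder layers. A minimal sketch of the resulting layer counts, assuming Mistral-7B's published depth of 32 decoder layers and reading the slices as *dropped* ranges, as the text describes:

```python
# Assumption: Mistral-7B has 32 decoder layers, indexed 0-31.
# Each Gale variant is formed by deleting a Python-slice range of layers.
NUM_LAYERS = 32

def retained_layers(drop_slice: slice) -> list:
    """Return the layer indices kept after dropping `drop_slice`."""
    layers = list(range(NUM_LAYERS))
    dropped = set(layers[drop_slice])
    return [i for i in layers if i not in dropped]

variants = {
    "large": slice(15, -8),   # drops layers 15..23 -> 23 layers remain
    "medium": slice(10, -3),  # drops layers 10..28 -> 13 layers remain
    "small": slice(2, -2),    # drops layers 2..29  ->  4 layers remain
}

for name, s in variants.items():
    kept = retained_layers(s)
    print(f"{name}: {len(kept)} layers retained")
```

Note that the more aggressive the slice, the *smaller* the resulting model: dropping `[2:-2]` leaves only the first two and last two layers.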
The Crumbly 'Horizon' dataset used to train the Gale models consists of up-to-date English text and code for fine-tuning models like Gale, which need to "set" their architectural changes in place; this leverages the prior model's knowledge rather than training from scratch. Small subsets of Horizon, specifically random 1k-token windows covering 2%, 3%, and 9% of the dataset for the large, medium, and small models respectively, are used to set the Gale models, since training on larger datasets would take too long on Crumbly's compute setup (a 2xA6000 Lambda Labs Vector workstation). The dataset isn't publicly shared.
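The window sampling described above can be sketched as follows. The `sample_windows` helper and the reading of "1k" as 1024 tokens are illustrative assumptions, not Crumbly's actual (unreleased) pipeline:

```python
import random

def sample_windows(docs, window=1024, n_windows=4, seed=0):
    """Sample random fixed-length token windows from tokenized documents.

    docs: list of token-id lists. Returns `n_windows` random contiguous
    slices of length `window`, drawn from documents long enough to hold one.
    """
    rng = random.Random(seed)
    eligible = [d for d in docs if len(d) >= window]
    out = []
    for _ in range(n_windows):
        doc = rng.choice(eligible)
        start = rng.randrange(len(doc) - window + 1)
        out.append(doc[start:start + window])
    return out

# Toy corpus: two "documents" of 5000 and 2000 token ids.
corpus = [list(range(5000)), list(range(2000))]
windows = sample_windows(corpus, window=1024, n_windows=2)
```

Sampling fixed windows rather than whole documents keeps each training example the same length, which simplifies batching during the adapter fine-tune.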
| Model | Parameters | Retained Layers |
| --- | --- | --- |