Update README.md

README.md (changed)
@@ -7,37 +7,35 @@ sdk: static
pinned: false
---

## Dante Models: {Small, Medium, Large}

Dante comprises three decoder-only transformer models derived from [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1), each built by dropping a contiguous slice of layers from the original Mistral-7B model: slices `[15:-8]`, `[10:-3]`, and `[2:-2]` are removed for the large, medium, and small variants respectively; a minimal slicing sketch follows the table below.

| Model | Parameters | Retained Layers |
| --- | --- | --- |
| [Dante-Large](https://hf.co/crumbly/dante-large) | 5.1B | 23/32 |
| [Dante-Medium](https://hf.co/crumbly/dante-medium) | 3B | 13/32 |
| [Dante-Small](https://hf.co/crumbly/dante-small) | 1B | 4/32 |
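
The slicing can be reproduced with the standard `transformers` API. This is an illustrative sketch rather than the actual conversion script; the attribute paths are the usual ones for Mistral in `transformers`, and the `[15:-8]` bounds shown are the Dante-Large slice quoted above.

```python
# Sketch: drop the [15:-8] slice of Mistral-7B's 32 decoder blocks,
# leaving the 23-layer configuration listed for Dante-Large.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)

layers = model.model.layers                   # ModuleList of 32 decoder blocks
keep = list(layers[:15]) + list(layers[-8:])  # everything except layers[15:-8]

model.model.layers = torch.nn.ModuleList(keep)
model.config.num_hidden_layers = len(keep)    # 23 retained layers

print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.1f}B parameters")
```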

The models were then fine-tuned with high-rank adapters on a small randomized subset of high-quality web documents so that they still generate coherent text after the layer removal.
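
The adapter configuration is not specified in this README, so the snippet below is only a hedged sketch of what high-rank adapter training could look like with the `peft` library, continuing from the truncated model above; the rank, alpha, dropout, and target modules are assumptions, not Crumbly's actual recipe.

```python
# Hypothetical high-rank LoRA-style adapter setup; all hyperparameters here
# are illustrative guesses, not the settings used for Dante.
from peft import LoraConfig, get_peft_model

adapter_cfg = LoraConfig(
    r=256,                      # "high-rank": well above the usual r=8-64
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, adapter_cfg)  # `model` is the layer-dropped model above
model.print_trainable_parameters()
```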

## Virgil Dataset

The Virgil dataset, compiled by Crumbly, consists of up-to-date English text and code used to fine-tune models like Dante that need to "set" their architectural changes in place. This leverages the model's prior knowledge and is more efficient than training the new architecture from scratch.

| Subset | Token % |
| --- | --- |
| Papers | 21.65% |
| GitHub | 35.34% |
| Books | 23.08% |
| Wiki | 3.56% |
| Webtext | 16.36% |

**Bias Alert**: Virgil contains internet-sourced text, including potentially offensive content. Measures should be taken to mitigate biases during inference.

Only a small 2% subset of Virgil, sampled as random 1k-token windows, is used to set the Dante models, since training on larger datasets would take too long on Crumbly's compute setup (2x A6000 Lambda Labs Vector workstation). The dataset is not publicly shared.
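
Since Virgil itself is not released, the following is only a hypothetical illustration of drawing random 1k-token windows from a document collection; the tokenizer choice, the 1024-token window size, and the helper names are assumptions.

```python
# Hypothetical sampler for random 1k-token windows (window size assumed to be
# 1024); not Crumbly's actual preprocessing pipeline.
import random
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

def sample_windows(documents, n_windows, window=1024):
    """Return `n_windows` random `window`-token slices drawn from `documents`."""
    windows = []
    while len(windows) < n_windows:
        ids = tok(random.choice(documents), add_special_tokens=False)["input_ids"]
        if len(ids) < window:
            continue  # skip documents shorter than one full window
        start = random.randrange(len(ids) - window + 1)
        windows.append(ids[start:start + window])
    return windows
```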

---