This model is a fine-tuned version (embeddings and LM head) of mistralai/Mistral-7B-v0.1, trained on a 33 GB Russian dataset. Training ran for 0.8 epochs before an error occurred; after that, the model was lightly fine-tuned further with LoRA.
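The "embeddings and LM head only" stage amounts to freezing every other parameter. A minimal sketch of that selection logic, assuming the standard Hugging Face parameter names for Mistral (`model.embed_tokens`, `lm_head`); the helper name is hypothetical:

```python
# Which parameters stay trainable in the embeddings + LM head stage.
# Prefixes follow the usual Hugging Face Mistral naming convention.
TRAINABLE_PREFIXES = ("model.embed_tokens", "lm_head")

def select_trainable(param_names):
    """Return the parameter names that should keep requires_grad=True."""
    return [n for n in param_names if n.startswith(TRAINABLE_PREFIXES)]

# Example parameter names as they appear in model.named_parameters():
params = [
    "model.embed_tokens.weight",
    "model.layers.0.self_attn.q_proj.weight",
    "model.norm.weight",
    "lm_head.weight",
]
print(select_trainable(params))  # ['model.embed_tokens.weight', 'lm_head.weight']
```

In practice one would iterate over `model.named_parameters()` and set `requires_grad = False` for everything this filter rejects.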
In short:
1. Replace the tokenizer.
2. Convert the model to fp16.
3. Train only the embeddings and LM head for 0.8 epochs.
4. Convert the new layers back to bf16 and merge them with the original transformer in bf16.
5. Tune the embeddings (`modules_to_save`), the LM head (`modules_to_save`), and the first and last 4 layers (LoRA on the linear layers, `modules_to_save` for the layer norms) on 1% of the data.
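Step 5's module selection can be sketched as follows. This is a hedged illustration, not the card's actual config: the module name patterns assume the standard Hugging Face Mistral implementation (32 decoder blocks, 7 linear projections per block), and the card does not state the LoRA rank or other hyperparameters:

```python
# Select the first and last 4 transformer blocks of Mistral-7B-v0.1
# (32 blocks total) for LoRA, and the fully-trained modules for
# a peft-style modules_to_save list.
NUM_LAYERS = 32
LORA_LAYERS = list(range(4)) + list(range(NUM_LAYERS - 4, NUM_LAYERS))

# All linear projections inside the selected blocks (assumed HF naming).
target_modules = [
    f"model.layers.{i}.{proj}"
    for i in LORA_LAYERS
    for proj in ("self_attn.q_proj", "self_attn.k_proj",
                 "self_attn.v_proj", "self_attn.o_proj",
                 "mlp.gate_proj", "mlp.up_proj", "mlp.down_proj")
]

# Modules trained in full rather than via LoRA adapters.
modules_to_save = (
    ["model.embed_tokens", "lm_head"]
    + [f"model.layers.{i}.input_layernorm" for i in LORA_LAYERS]
    + [f"model.layers.{i}.post_attention_layernorm" for i in LORA_LAYERS]
)

print(len(target_modules))  # 8 blocks x 7 linear projections = 56
```

These two lists would then be passed to a `peft` `LoraConfig` as `target_modules` and `modules_to_save`.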
**Attention!** Metrics on various datasets are slightly worse than those of the original model.