Update README.md
README.md CHANGED
@@ -15,7 +15,12 @@ quantized_by: Suparious
 - Model creator: [timpal0l](https://huggingface.co/timpal0l)
 - Original model: [Llama-3-8B-flashback-v1](https://huggingface.co/timpal0l/Llama-3-8B-flashback-v1)
 
 
+## Model Summary
+
+Llama-3-8B-flashback-v1 is a continuation of the pretraining of the base meta-llama/Meta-Llama-3-8B model, using 2,251,233 forum threads (roughly 40 GB of text) from the Swedish website https://www.flashback.org/.
+
+It is a full finetune for three epochs.
 
 ## How to use
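Since the hunk references the original checkpoint, a minimal usage sketch may be helpful. This is not taken from the README itself; it assumes the standard Hugging Face `transformers` API (`AutoModelForCausalLM`, `AutoTokenizer`) and the model id given in the "Original model" link above.

```python
# Minimal sketch (not from the README): loading the original, unquantized
# checkpoint with the standard Hugging Face transformers API.
model_id = "timpal0l/Llama-3-8B-flashback-v1"

def generate(prompt: str, max_new_tokens: int = 50) -> str:
    """Download the model on first use and complete `prompt`."""
    # Imports kept inside the function so the sketch can be read/imported
    # without transformers or torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example call (downloads ~16 GB of weights on first use):
# print(generate("Hej, jag heter"))
```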