doc: update README.md
README.md CHANGED
@@ -1,22 +1,69 @@
 ---
+license: llama2
 language:
+- hu
 - en
-license: apache-2.0
 tags:
+- puli
 - text-generation-inference
 - transformers
 - unsloth
 - llama
 - trl
+- finetuned
 base_model: NYTK/PULI-LlumiX-32K
+datasets:
+- boapps/szurkemarha
+pipeline_tag: text-generation
 ---

-#
-
-- **License:** apache-2.0
-- **Finetuned from model :** NYTK/PULI-LlumiX-32K

# PULI LlumiX 32K instruct (6.74 billion parameters)

Instruct-finetuned version of NYTK/PULI-LlumiX-32K.

## Training platform

Trained on an L4 GPU in [Lightning AI Studio](https://lightning.ai/studios).

## Hyperparameters

- Epochs: 3
- LoRA rank (r): 16
- LoRA alpha: 16
- Learning rate: 2e-4
- Learning rate scheduler: cosine
- Optimizer: adamw_8bit
- Weight decay: 0.01

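For illustration, these settings map onto a standard transformers + peft setup roughly as follows. This is a sketch, not the card's actual training script; the target modules and output path in particular are assumptions.

```python
# Illustrative sketch of the setup implied by the hyperparameters above.
# target_modules and output_dir are assumptions, not stated on the card.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "NYTK/PULI-LlumiX-32K", torch_dtype=torch.float16
)

lora_config = LoraConfig(
    r=16,           # LoRA rank (r)
    lora_alpha=16,  # LoRA alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="puli-llumix-32k-instruct",  # hypothetical path
    num_train_epochs=3,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",  # 8-bit AdamW, as listed above
    weight_decay=0.01,
)
```
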
## Dataset

boapps/szurkemarha

In total, ~30k instructions were selected.

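Since the dataset is hosted on the Hugging Face Hub, it can presumably be pulled with the datasets library; the split name here is an assumption.

```python
# Load the instruction dataset named above from the Hub;
# the "train" split name is an assumption.
from datasets import load_dataset

dataset = load_dataset("boapps/szurkemarha", split="train")
print(len(dataset))
```
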
## Prompt template: ChatML

```
<|im_start|>system
Az alábbiakban egy feladatot leíró utasítás található. Írjál olyan választ, amely megfelelően teljesíti a kérést.<|im_end|>
<|im_start|>user
Ki a legerősebb szuperhős?<|im_end|>
<|im_start|>assistant
A legerősebb szuperhős a Marvel univerzumában Hulk.<|im_end|>
```

(In English: system: "Below is an instruction that describes a task. Write a response that appropriately completes the request." user: "Who is the strongest superhero?" assistant: "The strongest superhero in the Marvel universe is the Hulk.")

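As a usage sketch, the ChatML prompt above can be assembled by hand and passed to the model with transformers. The repo id and generation settings below are assumptions, not part of the card.

```python
# Generation sketch using the ChatML template above.
# The repo id is the base model's; substitute the instruct checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NYTK/PULI-LlumiX-32K"  # placeholder; use this model's actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "<|im_start|>system\n"
    "Az alábbiakban egy feladatot leíró utasítás található. "
    "Írjál olyan választ, amely megfelelően teljesíti a kérést.<|im_end|>\n"
    "<|im_start|>user\n"
    "Ki a legerősebb szuperhős?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
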
## Base model

- Trained with OpenChatKit [github](https://github.com/togethercomputer/OpenChatKit)
- The [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) model was continually pretrained on a Hungarian dataset
- The model has been extended to a context length of 32K with position interpolation
- Checkpoint: 100 000 steps

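Position interpolation corresponds, in transformers terms, to linear RoPE scaling: Llama-2's native 4096-token window times a factor of 8 gives 32768. A quick way to inspect this on the base checkpoint (illustrative; the exact config values are not quoted on this card):

```python
# Inspect the RoPE scaling the 32K base model ships with; linear scaling
# with factor 8.0 is expected, but verify against the actual config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("togethercomputer/LLaMA-2-7B-32K")
print(config.rope_scaling)
```
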
## Dataset for continued pretraining

- Hungarian: 7.9 billion words from 763K documents that exceed 5000 words in length
- English: Long Context QA (2 billion words), BookSum (78 million words)

## Limitations

- max_seq_length = 32 768
- float16
- vocab size: 32 000
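
Given the unsloth tag, loading the model within these limits presumably looks like the following; the call pattern mirrors Unsloth's documented API rather than anything published with this card, and the repo id is the base model's.

```python
# Sketch of loading within the stated limits via Unsloth; values mirror
# the Limitations list above.
import torch
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NYTK/PULI-LlumiX-32K",  # substitute the instruct repo id
    max_seq_length=32768,  # 32 768-token context window
    dtype=torch.float16,   # float16, per the card
)
```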