---
language:
library_name: adapter-transformers
---

## Model Architecture

OOM-7B_01 is a language model that uses an optimized transformer architecture based on Llama-2.

## Model description

Based on [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b).
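
For orientation, here is a minimal usage sketch with the `transformers` library. The repo id below is a hypothetical placeholder (the card does not state where the weights are published), and the fp16 / `device_map` settings are assumptions, not part of the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/OOM-7B_01"  # hypothetical repo id, not from the card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # assumption: fp16 to halve memory for a 7B model
    device_map="auto",          # requires `accelerate` to be installed
)

prompt = "대한민국의 수도는"  # Korean prompt, since the base model is llama-2-ko
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```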

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-04
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size:
- num_epochs: 2.0
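
As illustration only, a hedged sketch of how these values could map onto `transformers.TrainingArguments`; `output_dir` is a hypothetical placeholder, and the blank `total_train_batch_size` above is left unfilled because it depends on the device count, which the card does not state.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# output_dir is a hypothetical placeholder, not from the card.
training_args = TrainingArguments(
    output_dir="oom-7b-01-out",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=1,
    num_train_epochs=2.0,
)
# Effective (total) train batch size = 2 (per device) x 1 (grad accum) x num_devices.
```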

### Training results

### Framework versions

- Transformers 4.37.2
- PyTorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
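
To match this environment when reproducing results, the versions above can be checked locally; this snippet only reads version strings and assumes the four packages are installed.

```python
# Sanity check that the local environment matches the versions listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # card lists 4.37.2
print("PyTorch:", torch.__version__)              # card lists 2.2.0+cu118
print("Datasets:", datasets.__version__)          # card lists 2.16.1
print("Tokenizers:", tokenizers.__version__)      # card lists 0.15.1
```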