elliotthwangmsa committed
Commit 54f8d38
Parent(s): 6e6dc6f
Update README.md
README.md CHANGED
@@ -1,6 +1,9 @@
 ---
 library_name: transformers
-
+datasets:
+- elliotthwang/alpaca-zh-tw
+base_model:
+- unsloth/Llama-3.2-3B-Instruct
 ---
 
 # Model Card for Model ID
@@ -84,6 +87,10 @@ Use the code below to get started with the model.
 ### Training Procedure
 
 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+Adopting Unsloth enables 2x faster free fine-tuning.
+num_train_epochs = 4
+Trained for 600 steps
+Final loss: 0.673900
 
 #### Preprocessing [optional]
 
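The training numbers added in this commit (4 epochs, 600 steps) can be sanity-checked with a quick back-of-envelope calculation. This is only a sketch: the effective batch size below is a hypothetical assumption, not something the commit states.

```python
# Back-of-envelope check of the training budget described in the commit.
num_train_epochs = 4
max_steps = 600

# Optimizer steps that fall within each epoch of the 600-step run.
steps_per_epoch = max_steps // num_train_epochs  # 150

# ASSUMPTION: effective batch size (per-device batch * gradient accumulation)
# is not stated in the commit; 8 is an illustrative guess.
effective_batch_size = 8
examples_seen = max_steps * effective_batch_size

print(steps_per_epoch)  # 150
print(examples_seen)    # 4800
```

Note that in the Hugging Face `Trainer`, setting `max_steps` to a positive value overrides `num_train_epochs`, so a run configured with both values stops at the step limit.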