Merge branch 'main' of hf.co:CallMeMrFern/Llama-2-7b-chat-hf_vn into main
README.md
CHANGED
@@ -1,6 +1,8 @@
 ---
 library_name: peft
 base_model: meta-llama/Llama-2-7b-chat-hf
+language:
+- vi
 ---
 
 # Model Card for Model ID
@@ -23,7 +25,7 @@ base_model: meta-llama/Llama-2-7b-chat-hf
 - **Model type:** [More Information Needed]
 - **Language(s) (NLP):** [More Information Needed]
 - **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Finetuned from model [optional]:** [meta-llama/Llama-2-7b-chat-hf]
 
 ### Model Sources [optional]
 
@@ -236,4 +238,4 @@ The following `bitsandbytes` quantization config was used during training:
 ### Framework versions
 
 
-- PEFT 0.6.2.dev0
+- PEFT 0.6.2.dev0
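Given the card metadata touched by this merge (`library_name: peft`, `base_model: meta-llama/Llama-2-7b-chat-hf`, the new `language: vi` tag), a minimal usage sketch would load the base model and attach the adapter with PEFT. This is not part of the commit; it assumes the adapter weights live in the `CallMeMrFern/Llama-2-7b-chat-hf_vn` repo named in the commit title and that you have access to the gated base checkpoint.

```python
# Minimal sketch (assumptions noted below), not taken from the model card itself.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"          # base_model from the card metadata
adapter_id = "CallMeMrFern/Llama-2-7b-chat-hf_vn"  # assumed adapter repo, from the commit title

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the PEFT adapter on top of the base model (library_name: peft).
model = PeftModel.from_pretrained(base, adapter_id)

# Quick Vietnamese prompt, matching the new `language: vi` metadata.
inputs = tokenizer("Xin chào, bạn khỏe không?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```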