Upload folder using huggingface_hub (#1)
- a66237edacca956c892c30defef81b9f8edd81f5897995f9852dc0ba6d2dcc32 (44f5c964576b321d2fa636f3cb922ec1ef96a5ea)
- d5fcd292dd169b2e64c950eb655318e5ebc87fc6a2606e5fc2825e6ce67d61db (02a0a17db354a4223a2a77b844095f93456a7c81)
- ce111a8fc286b2aed0725f13f20eb55dc70c9da429fc1d005de4177575659adc (a1d5fbebe16e830d41440c4a3043990f688e2746)
- 3d5b53402b6f71df176cd055645a85c9964845adde02964dbeacfbe99bb312b8 (fc7de92dc81e0bd494821c01fec8a3e238b1fd00)
- c1d215c8cde2d8ab519f9474bc856076e94622a16c383593648e2b45cc5754c7 (1c645f9aa0311ad89b0ecb0b189183c793ee8284)
- bfc309a3a67deab218e38d6999fb0f4185187833f4aeee1debb70a83b8874af5 (0843c4205537da36515e8162ef97d73a6664f62e)
- .gitattributes +5 -0
- README.md +107 -0
- test.log +4 -0
- vietnamese-llama2-7b-120GB_Q3_K_M.gguf +3 -0
- vietnamese-llama2-7b-120GB_Q4_K_M.gguf +3 -0
- vietnamese-llama2-7b-120GB_Q5_K_M.gguf +3 -0
- vietnamese-llama2-7b-120GB_Q6_K.gguf +3 -0
- vietnamese-llama2-7b-120GB_Q8_0.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+vietnamese-llama2-7b-120GB_Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+vietnamese-llama2-7b-120GB_Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+vietnamese-llama2-7b-120GB_Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+vietnamese-llama2-7b-120GB_Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+vietnamese-llama2-7b-120GB_Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,107 @@
---
license: other
datasets:
- vietgpt/wikipedia_vi
- wikipedia
- pg19
- mc4
language:
- vi
- en
---

<img src="https://github.com/bkai-research/Vietnamese-LLaMA-2/raw/main/banner.png" width="800"/>

### Github: [https://github.com/bkai-research/Vietnamese-LLaMA-2](https://github.com/bkai-research/Vietnamese-LLaMA-2)

### Tokenizer
We enhance our previous tokenizer from [vietnamese-llama2-7b-40GB](https://huggingface.co/bkai-foundation-models/vietnamese-llama2-7b-40GB) by training [SentencePiece](https://github.com/google/sentencepiece) on a more extensive collection of clean Vietnamese documents spanning diverse domains such as news, books, stocks, finance, and law. In contrast to the previous version, we follow the original LLaMA-2 paper and split all numbers into individual digits. The updated tokenizer markedly improves the encoding of Vietnamese text, cutting the token count by 50% compared to ChatGPT and by approximately 70% compared to the original Llama 2.
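As a quick illustration, here is a minimal sketch comparing token counts, assuming `transformers` is installed and both Hub repos are accessible (the Meta base repo is gated and requires accepting its license); the sample sentence is ours:

```python
# A minimal sketch: compare token counts of the updated tokenizer against the
# original Llama-2 tokenizer on a short Vietnamese sentence (sample text is ours).
from transformers import AutoTokenizer

text = "Hà Nội là thủ đô của Việt Nam."

vi_tok = AutoTokenizer.from_pretrained("bkai-foundation-models/vietnamese-llama2-7b-120GB")
base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated repo

print("updated tokenizer:", len(vi_tok.tokenize(text)), "tokens")
print("original Llama-2 :", len(base_tok.tokenize(text)), "tokens")
```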

### Pretraining data
Here are our data sources:
- 53 GB NewsCorpus (cleaned and deduplicated [binhvq's NewsCorpus](https://github.com/binhvq/news-corpus) combined with our self-crawled data up to October 2023). Thanks to [iambestfeed](https://huggingface.co/iambestfeed) for his great work in crawling news data.
- 1.3 GB Vietnamese Wikipedia (updated to October 2023)
- 8.5 GB [Vietnamese books](https://www.kaggle.com/datasets/iambestfeeder/10000-vietnamese-books)
- 4.8 GB Vietnamese legal documents (cleaned and deduplicated)
- 1.6 GB stock news (cleaned and deduplicated)
- 43 GB Vietnamese text (subsampled from [Culturax-vi](https://huggingface.co/papers/2309.09400))
- 2.3 GB English books (subsampled from [pg19](https://huggingface.co/datasets/pg19))
- 2.2 GB English Wikipedia
- 16 GB English text (subsampled from [Culturax-en](https://huggingface.co/papers/2309.09400))

We then merge all data sources and perform a final deduplication, resulting in a pretraining dataset of 124 GB, including 104 GB of Vietnamese text and 20 GB of English text; a minimal sketch of such a deduplication pass follows.
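The exact merging pipeline is not published; the following hedged sketch only illustrates what an exact-match final deduplication pass can look like, with a toy in-memory corpus standing in for the real data:

```python
# A minimal sketch of an exact-match deduplication pass; the toy corpus below
# stands in for the real merged data, which is not published.
import hashlib

def deduplicate(docs):
    seen = set()
    for doc in docs:
        # Hash normalized text so byte-identical documents collapse to one key.
        key = hashlib.sha256(doc.strip().encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            yield doc

corpus = [
    "Hà Nội là thủ đô của Việt Nam.",
    "Hà Nội là thủ đô của Việt Nam.",  # exact duplicate, dropped below
    "Xin chào!",
]
print(list(deduplicate(corpus)))  # 2 unique documents, order preserved
```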

### Continual pretraining
We conduct a single-epoch continual pretraining run starting from the Llama2-7B model.

We trained the model on a DGX A100 system, using four A100 GPUs for 40 days (about 4,000 GPU hours).

Hyperparameters are set as follows:
- Training regime: BFloat16 mixed precision
- LoRA config:

```json
{
  "base_model_name_or_path": "meta-llama/Llama-2-7b-hf",
  "bias": "none",
  "enable_lora": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "lora_alpha": 32.0,
  "lora_dropout": 0.05,
  "merge_weights": false,
  "modules_to_save": [
    "embed_tokens",
    "lm_head"
  ],
  "peft_type": "LORA",
  "r": 8,
  "target_modules": [
    "q_proj",
    "v_proj",
    "k_proj",
    "o_proj",
    "gate_proj",
    "down_proj",
    "up_proj"
  ],
  "task_type": "CAUSAL_LM"
}
```
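For reference, the same configuration can be expressed with `peft`'s `LoraConfig`; this is a hedged sketch mirroring the JSON above, not the actual training script, which is not published:

```python
# A hedged sketch: the LoRA config above rewritten as a peft LoraConfig.
# Field values mirror the JSON; the real training script is not published.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=32.0,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    modules_to_save=["embed_tokens", "lm_head"],
)
```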
We also provide the [LoRA part](https://huggingface.co/bkai-foundation-models/vietnamese-llama2-7b-120GB/tree/main/pt_lora_model) so that you can integrate it with the original Llama2-7b yourself; a minimal sketch of that integration is shown below.
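This hedged sketch assumes `transformers` and `peft` are installed; because the extended Vietnamese vocabulary adds tokens (and `embed_tokens`/`lm_head` are saved alongside the adapter), the base embeddings must be resized before loading:

```python
# A hedged sketch of attaching the provided LoRA part to the original
# Llama2-7b. Resizing the embeddings first accounts for the extended
# Vietnamese vocabulary; "pt_lora_model" is the subfolder linked above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

repo = "bkai-foundation-models/vietnamese-llama2-7b-120GB"
tokenizer = AutoTokenizer.from_pretrained(repo)

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
base.resize_token_embeddings(len(tokenizer))

model = PeftModel.from_pretrained(base, repo, subfolder="pt_lora_model")
model = model.merge_and_unload()  # fold the LoRA weights into the base model
```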

Please note that **this model requires further supervised fine-tuning (SFT)** to be used in practice!

For usage and other considerations, we refer to the [Llama 2](https://github.com/facebookresearch/llama) repository.

### Training loss
The red line indicates the learning curve of [vietnamese-llama2-7b-40GB](https://huggingface.co/bkai-foundation-models/vietnamese-llama2-7b-40GB), while the cyan one corresponds to the new 120 GB model.
<img src="https://github.com/bkai-research/Vietnamese-LLaMA-2/raw/main/plot.png" alt="Training Loss Curve"/>

### Disclaimer

This project is built upon Meta's Llama-2 model. It is essential to strictly adhere to the open-source license agreement of Llama-2 when using this model. If you incorporate third-party code, please ensure compliance with the relevant open-source license agreements.
Note that the content generated by the model may be influenced by various factors, such as calculation methods, random elements, and potential inaccuracies introduced by quantization. Consequently, this project offers no guarantees regarding the accuracy of the model's outputs, and it disclaims any responsibility for consequences arising from the use of the model's resources and outputs.
Those employing this project's models for commercial purposes must adhere to local laws and regulations to ensure the compliance of the model's output content. This project is not accountable for any products or services derived from such usage.

### Acknowledgments

We extend our gratitude to PHPC - Phenikaa University and NVIDIA for their generous provision of computing resources for model training. Our appreciation also goes out to [binhvq](https://github.com/binhvq/news-corpus), [iambestfeed](https://huggingface.co/iambestfeed) and the other authors for their diligent efforts in collecting and preparing the Vietnamese text corpus.

### Please cite our manuscript if this dataset is used for your work
```
@article{duc2024towards,
  title={Towards Comprehensive Vietnamese Retrieval-Augmented Generation and Large Language Models},
  author={Nguyen Quang Duc, Le Hai Son, Nguyen Duc Nhan, Nguyen Dich Nhat Minh, Le Thanh Huong, Dinh Viet Sang},
  journal={arXiv preprint arXiv:2403.01616},
  year={2024}
}
```

***

Quantization of model [bkai-foundation-models/vietnamese-llama2-7b-120GB](https://huggingface.co/bkai-foundation-models/vietnamese-llama2-7b-120GB).
Created using the [llm-quantizer](https://github.com/Nold360/llm-quantizer) pipeline.
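A hedged example of running one of the quantized files via the `llama-cpp-python` bindings (one possible GGUF runtime, not the only one), assuming the Q4_K_M file from this repo has been downloaded locally:

```python
# A hedged sketch: load the Q4_K_M quantization with llama-cpp-python
# (assumes the package is installed and the GGUF file is in the working dir).
from llama_cpp import Llama

llm = Llama(model_path="vietnamese-llama2-7b-120GB_Q4_K_M.gguf", n_ctx=2048)
out = llm("Việt Nam là", max_tokens=64)
print(out["choices"][0]["text"])
```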
test.log
ADDED
@@ -0,0 +1,4 @@
What is a Large Language Model?
lởngrunner.com
What Is A Large Language Model (LLM)? - 2021-09-07T15:34:08.697Z
A large language model (LLM) is an artificial intelligence system that uses machine learning to generate human-like text based on a given input. LLMs are trained using vast amounts of data, and they can be used for tasks such as natural language processing, speech recognition, and machine translation. In this article, we will explore what large language models are, how they work, and some
vietnamese-llama2-7b-120GB_Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ac0f3136db6fbdc8ac8c0aaee162f2cebd6304c281fc5047a70834b18e05a39
size 3367174400
vietnamese-llama2-7b-120GB_Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:243c2ef5d36880b05a3edabfbaadd08d1c30b73d92a4487f4b1fa740f5bdcf05
size 4157490208
vietnamese-llama2-7b-120GB_Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aadceda99c8603260d63e2efed792d81da99414b1685b50a5e5020ed8b3b3bb2
size 4866528800
vietnamese-llama2-7b-120GB_Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc1356b2f2aaa82d4f63a5c5503f00121c595c4e3be982d0e9026ce9e93f292d
size 5619882304
vietnamese-llama2-7b-120GB_Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5768282ad65dc971e6b9d873af5924398560f5916a336ea3501b84332c33a189
size 7278460672