---
language:
- en
library_name: transformers
license: llama2
---

### Background

Sparse computation is increasingly recognized as an important direction for enhancing the computational efficiency of large language models (LLMs).
Previous work has shown that models exhibit sparse activation after relufication, which naturally raises a new question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run efficient inference while preserving performance.

To answer this question, we pretrain four LLMs with different activation functions, namely ReLU, SwiGLU, ReGLU, and ReLU$^2$, to enable a more comprehensive comparison.
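
For illustration, below is a minimal PyTorch sketch of the four activation variants, assuming a plain two-layer FFN for ReLU and ReLU$^2$ and a LLaMA-style gated FFN for SwiGLU and ReGLU; the layer names and sizes are placeholders, not the actual architecture definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def relu_squared(x: torch.Tensor) -> torch.Tensor:
    """ReLU^2: squared ReLU, which keeps the exact zeros of ReLU."""
    return F.relu(x).pow(2)


class MLP(nn.Module):
    """Plain two-layer FFN, used here for the non-gated variants (ReLU, ReLU^2)."""

    def __init__(self, hidden_size: int, intermediate_size: int, act_fn):
        super().__init__()
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.act_fn = act_fn

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU-family activations zero out many intermediate neurons,
        # which is the sparsity that sparse inference exploits.
        return self.down_proj(self.act_fn(self.up_proj(x)))


class GatedMLP(nn.Module):
    """LLaMA-style gated FFN, used here for the GLU variants (SwiGLU, ReGLU)."""

    def __init__(self, hidden_size: int, intermediate_size: int, gate_act):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
        self.gate_act = gate_act

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU applies SiLU to the gate branch; ReGLU applies ReLU instead.
        return self.down_proj(self.gate_act(self.gate_proj(x)) * self.up_proj(x))


# The four variants compared in this work (hidden sizes are placeholders):
ffn_relu    = MLP(4096, 11008, F.relu)        # ReLU
ffn_relu_sq = MLP(4096, 11008, relu_squared)  # ReLU^2
ffn_swiglu  = GatedMLP(4096, 11008, F.silu)   # SwiGLU
ffn_reglu   = GatedMLP(4096, 11008, F.relu)   # ReGLU
```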

### Dataset

We pretrain the models on 100 billion tokens, including:

* RefinedWeb
* SlimPajama

### Training Details

We jointly optimize the model on the conventional language modeling objective and a knowledge distillation objective. The knowledge distillation objective minimizes the KL divergence between the output distributions of the teacher model and the student model, where the teacher is the original LLM and the student is the ReLU-activated version. Since the size of the fine-tuning data is relatively small, we introduce the knowledge distillation objective to avoid overfitting and to enhance the generalization ability of the model; it can also be seen as a form of label smoothing.
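
A minimal sketch of this joint objective is given below, assuming a simple weighted sum of the two losses; the `kd_weight`, the temperature, and the KL direction are illustrative placeholders, as the exact formulation is not specified here.

```python
import torch
import torch.nn.functional as F


def joint_lm_kd_loss(
    student_logits: torch.Tensor,  # (batch, seq, vocab) from the ReLU-activated student
    teacher_logits: torch.Tensor,  # (batch, seq, vocab) from the original frozen LLM
    labels: torch.Tensor,          # (batch, seq) next-token ids, -100 for ignored positions
    kd_weight: float = 1.0,        # hypothetical weighting, not specified in the card
    temperature: float = 1.0,      # hypothetical distillation temperature
) -> torch.Tensor:
    vocab = student_logits.size(-1)

    # Conventional language-modeling objective: cross-entropy on ground-truth tokens.
    lm_loss = F.cross_entropy(
        student_logits.view(-1, vocab), labels.view(-1), ignore_index=-100
    )

    # Knowledge-distillation objective: KL divergence between the teacher's and
    # the student's output distributions over the vocabulary.
    teacher_log_probs = F.log_softmax(teacher_logits / temperature, dim=-1).view(-1, vocab)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1).view(-1, vocab)
    kd_loss = F.kl_div(
        student_log_probs, teacher_log_probs, log_target=True, reduction="batchmean"
    )

    return lm_loss + kd_weight * kd_loss
```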

| Parameter    | Value            |
|--------------|------------------|
| Batch size   | 4M               |
| GPUs         | 64 × A100 (80GB) |
| LR scheduler | cosine           |
| LR           | 3e-4             |
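
The schedule above could be instantiated roughly as in the sketch below; the optimizer (AdamW), the absence of warmup, and the step count derived by assuming the 4M batch size is counted in tokens are all assumptions, with only the peak learning rate and the cosine decay taken from the table.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

# Assumptions (not stated in the card): AdamW as the optimizer, no warmup,
# and a 4M-token batch size.
TOTAL_TOKENS = 100_000_000_000
BATCH_TOKENS = 4_000_000
total_steps = TOTAL_TOKENS // BATCH_TOKENS      # = 25,000 optimizer steps

model = torch.nn.Linear(4096, 4096)             # stand-in for the actual LLM
optimizer = AdamW(model.parameters(), lr=3e-4)  # peak LR from the table
scheduler = CosineAnnealingLR(optimizer, T_max=total_steps)

# Inside the training loop, step the scheduler once per optimizer update:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```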

### License Disclaimer:

This model is bound by the license and usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.

### Limitations & Biases:

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

### Citation:

Please kindly cite using the following BibTeX:

```bibtex
@misc{sparsellm,
      title={Sparse Large Language Models with ReLU Activation},
      author={SparseLLM Team},
      year={2023}
}
```

#### Acknowledgments:

The model card is modified from [ORCA_LLaMA_70B_QLoRA](https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA).