yixinsong committed · Commit f3dd023 · verified · 1 Parent(s): cacdca7

Update README.md

Files changed (1): README.md +2 -3
README.md CHANGED
@@ -12,7 +12,7 @@ license: llama2
 Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
 Previous work has shown that models after relufication are characterised by sparse activation, which naturally introduces a new problem: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
 
-To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and ReLU$^2$ to do more comprehensive experiments.
+To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
 
 ### Dataset
 
@@ -22,9 +22,8 @@ We pretrain the model on 100 billion tokens, including:
 * SlimPajama
 
 
-### Training Details
+### Training Hyper-parameters
 
-We jointly optimize the model on the conventional language modeling objective and the knowledge distillation objective. The knowledge distillation objective is to minimize the KL divergence between the teacher model and the student model. The teacher model is the original LLM, and the student model is the ReLU-activated version. Since the size of the fine-tuning data is relatively small, we introduce the knowledge distillation objective to avoid overfitting and enhance the generalization ability of the model, which can be also seen as a technique of label smoothing.
 
 | Parameter | Value |
 |-----------------------|-------------|
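For context on the line changed above, here is a minimal sketch of how the four compared activation variants (ReLU, SwiGLU, ReGLU, and Squared ReLU) are typically wired into a transformer feed-forward block. This assumes PyTorch-style gated and non-gated FFNs; the module names and sizes are illustrative, not taken from this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedFFN(nn.Module):
    """Gated feed-forward block: down(act(gate(x)) * up(x))."""

    def __init__(self, d_model, d_ff, act):
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)
        self.act = act

    def forward(self, x):
        # act = SiLU gives SwiGLU; act = ReLU gives ReGLU.
        return self.down(self.act(self.gate(x)) * self.up(x))


class PlainFFN(nn.Module):
    """Non-gated feed-forward block: down(act(up(x)))."""

    def __init__(self, d_model, d_ff, act):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)
        self.act = act

    def forward(self, x):
        # act = ReLU gives the plain ReLU variant;
        # act = relu(h)**2 gives Squared ReLU (ReLU^2).
        return self.down(self.act(self.up(x)))


# Illustrative sizes only.
d_model, d_ff = 512, 1376
variants = {
    "ReLU": PlainFFN(d_model, d_ff, F.relu),
    "SwiGLU": GatedFFN(d_model, d_ff, F.silu),
    "ReGLU": GatedFFN(d_model, d_ff, F.relu),
    "Squared ReLU": PlainFFN(d_model, d_ff, lambda h: F.relu(h) ** 2),
}

x = torch.randn(2, 16, d_model)
for name, ffn in variants.items():
    print(name, ffn(x).shape)  # each maps (2, 16, d_model) -> (2, 16, d_model)
```

The ReLU-based variants produce exact zeros in the intermediate activations, which is the activation sparsity the README is concerned with; SwiGLU does not, which is why the choice of activation matters for sparse inference efficiency.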
 
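The paragraph removed in this commit describes a joint training objective: the conventional language-modeling loss plus a knowledge-distillation term that minimizes the KL divergence between the teacher (the original LLM) and the student (the ReLU-activated version). Below is a minimal sketch of how such a combined loss can be written, assuming PyTorch; the `kd_weight` and `temperature` knobs are illustrative assumptions and are not specified in the README.

```python
import torch
import torch.nn.functional as F


def joint_lm_kd_loss(student_logits, teacher_logits, labels,
                     kd_weight=1.0, temperature=1.0):
    """Cross-entropy LM loss plus a KL(teacher || student) distillation term.

    student_logits, teacher_logits: (batch, seq, vocab); labels: (batch, seq),
    assumed already shifted for next-token prediction. kd_weight and
    temperature are illustrative, not values from the README.
    """
    vocab = student_logits.size(-1)

    # Conventional language-modeling objective.
    lm_loss = F.cross_entropy(student_logits.reshape(-1, vocab),
                              labels.reshape(-1))

    # Knowledge-distillation objective: per-token KL divergence between the
    # teacher's and the student's next-token distributions.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1).reshape(-1, vocab)
    student_logprobs = F.log_softmax(student_logits / temperature, dim=-1).reshape(-1, vocab)
    kd_loss = F.kl_div(student_logprobs, teacher_probs,
                       reduction="batchmean") * temperature ** 2

    return lm_loss + kd_weight * kd_loss


# Toy shapes, just to show how the two terms combine.
student_logits = torch.randn(2, 8, 100, requires_grad=True)
teacher_logits = torch.randn(2, 8, 100)
labels = torch.randint(0, 100, (2, 8))
joint_lm_kd_loss(student_logits, teacher_logits, labels).backward()
```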