yixinsong committed
Commit 25e7c15 · verified · 1 Parent(s): f3dd023

Update README.md

Files changed (1):
  1. README.md +4 -1
README.md CHANGED
@@ -10,7 +10,10 @@ license: llama2
  ### Background
  
  Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
- Previous work has shown that models after relufication are characterised by sparse activation, which naturally introduces a new problem: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
+ 
+ Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
+ 
+ This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
  
  To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
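
For readers unfamiliar with the activation variants compared in this README, the sketch below shows how the four feed-forward activations (ReLU, SwiGLU, ReGLU, and Squared ReLU) could be written in PyTorch, together with a simple probe that reports the fraction of near-zero intermediate activations. This is a minimal illustration under assumed shapes, not the released model code; the names `GatedFFN`, `activation_sparsity`, `hidden_dim`, and `ffn_dim` are hypothetical.

```python
# Minimal sketch (not the released model code): the four FFN activation
# variants mentioned above, plus a crude activation-sparsity probe.
# All names here (GatedFFN, activation_sparsity, ...) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedFFN(nn.Module):
    """LLaMA-style gated feed-forward block with a configurable activation."""

    def __init__(self, hidden_dim: int, ffn_dim: int, variant: str = "swiglu"):
        super().__init__()
        self.variant = variant
        self.gate_proj = nn.Linear(hidden_dim, ffn_dim, bias=False)
        self.up_proj = nn.Linear(hidden_dim, ffn_dim, bias=False)
        self.down_proj = nn.Linear(ffn_dim, hidden_dim, bias=False)

    def activations(self, x: torch.Tensor) -> torch.Tensor:
        """Intermediate activations whose sparsity we care about."""
        g = self.gate_proj(x)
        if self.variant == "relu":          # plain ReLU, no gating
            return F.relu(g)
        if self.variant == "squared_relu":  # ReLU(x)^2
            return F.relu(g) ** 2
        if self.variant == "reglu":         # ReLU(gate) * up
            return F.relu(g) * self.up_proj(x)
        if self.variant == "swiglu":        # SiLU(gate) * up
            return F.silu(g) * self.up_proj(x)
        raise ValueError(f"unknown variant: {self.variant}")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(self.activations(x))


def activation_sparsity(acts: torch.Tensor, threshold: float = 0.0) -> float:
    """Fraction of intermediate activations with magnitude <= threshold."""
    return (acts.abs() <= threshold).float().mean().item()


if __name__ == "__main__":
    x = torch.randn(4, 16, 512)  # (batch, seq, hidden); shapes are arbitrary
    for variant in ["relu", "squared_relu", "reglu", "swiglu"]:
        ffn = GatedFFN(hidden_dim=512, ffn_dim=1376, variant=variant)
        print(variant, f"sparsity={activation_sparsity(ffn.activations(x)):.2f}")
```

At random initialization the ReLU-family variants land near 50% exact zeros, while SwiGLU has no exact zeros; measuring sparsity for SwiGLU-style models typically requires a small magnitude threshold rather than an exact-zero test, which is why the probe above takes a `threshold` argument.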