---
language:
  - en
library_name: transformers
license: llama2
---

## Background

Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).

Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
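As a rough illustration of what "sparse activations" means here, the sketch below measures the fraction of (near-)zero entries in an FFN hidden state. This is not code from this work: the tensor shape and the near-zero threshold used for SwiGLU-style activations are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the original work): measure activation
# sparsity of an FFN hidden state. ReLU produces exact zeros; SwiGLU-style
# activations are only near-zero, so a small threshold is assumed for them.
import torch
import torch.nn.functional as F

def activation_sparsity(hidden: torch.Tensor, threshold: float = 0.0) -> float:
    """Fraction of hidden activations whose magnitude is <= threshold."""
    return (hidden.abs() <= threshold).float().mean().item()

x = torch.randn(4, 16, 11008)  # (batch, seq, intermediate_size); shape is illustrative

print(activation_sparsity(torch.relu(x)))                 # ReLU: exact zeros (~50% for random input)
print(activation_sparsity(F.silu(x), threshold=1e-2))     # SwiGLU-style: near-zeros under an assumed threshold
```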

This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? Although previous work on activation function selection has focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.

To answer this question, we pretrain four LLMs with different activation functions, ReLU, SwiGLU, ReGLU, and Squared ReLU, to enable a more comprehensive comparison.
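For reference, the following sketch (not the actual training code) shows how these four activation choices slot into LLaMA-style FFN blocks; the hidden sizes and module names are assumptions.

```python
# Minimal sketch (assumed architecture, not the released training code):
# LLaMA-style FFN blocks with the four activation functions compared here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFFN(nn.Module):
    """Gated FFN: down_proj(act(gate_proj(x)) * up_proj(x))  (SwiGLU / ReGLU)."""
    def __init__(self, d_model: int, d_ff: int, act):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)
        self.act = act

    def forward(self, x):
        return self.down_proj(self.act(self.gate_proj(x)) * self.up_proj(x))

class PlainFFN(nn.Module):
    """Non-gated FFN: down_proj(act(up_proj(x)))  (ReLU / Squared ReLU)."""
    def __init__(self, d_model: int, d_ff: int, act):
        super().__init__()
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)
        self.act = act

    def forward(self, x):
        return self.down_proj(self.act(self.up_proj(x)))

# Hidden sizes below are illustrative (LLaMA-2-7B-like), not confirmed by the card.
swiglu_ffn  = GatedFFN(4096, 11008, F.silu)                    # SwiGLU
reglu_ffn   = GatedFFN(4096, 11008, F.relu)                    # ReGLU
relu_ffn    = PlainFFN(4096, 11008, F.relu)                    # ReLU
sq_relu_ffn = PlainFFN(4096, 11008, lambda h: F.relu(h) ** 2)  # Squared ReLU
```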

## Dataset

We pretrain each model on 100 billion tokens drawn from:

- RefinedWeb
- SlimPajama

## Training Hyper-parameters

| Parameter    | Value            |
|--------------|------------------|
| Batch size   | 4M tokens        |
| GPUs         | 64 × A100 (80GB) |
| LR scheduler | cosine           |
| LR           | 3e-4             |
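To make the schedule concrete, here is a hedged sketch of a cosine learning-rate schedule with a peak LR of 3e-4 using `get_cosine_schedule_with_warmup` from `transformers`; the warmup length is an assumption not stated in this card, and the step count is derived from 100B tokens at a 4M-token batch size.

```python
# Minimal sketch (assumed values marked below), matching the table above.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # placeholder module standing in for the LLM
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # peak LR from the table

scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=2000,      # assumed; not stated in this card
    num_training_steps=25_000,  # ~100B tokens / 4M-token batches
)
```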

## License Disclaimer:

This model is bound by the license and usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

## Limitations & Biases:

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/.

## Citation:

Please kindly cite using the following BibTeX:

@misc{sparsellm,
    title={Sparse Large Language Models with ReLU Activation}, 
    author={SparseLLM Team},
    year={2023}
}

## Acknowledgments:

This model card is adapted from ORCA_LLaMA_70B_QLoRA.