LiqunMa committed · verified · Commit d2df56b · Parent(s): 10f9746

Update README.md
pipeline_tag: text-generation
---

# FBI-LLM-7B
## Model Details

This work presents a Fully BInarized Large Language Model (FBI-LLM), demonstrating for the first time how to train a large-scale binary language model (not a ternary LLM like BitNet b1.58) from scratch to match the performance of its full-precision counterparts (e.g., FP16 or BF16) in transformer-based LLMs. It achieves this by employing an autoregressive distillation (AD) loss while maintaining the same model dimensions (130M, 1.3B, 7B) and training data volume as regular LLM pretraining, and it delivers competitive results in terms of perplexity and task-specific effectiveness. Intriguingly, by analyzing the training trajectory, we find that pretrained weights are not necessary for training binarized LLMs from scratch. This research encourages a new computational framework and may facilitate the future design of specialized hardware tailored for fully 1-bit LLMs. We make all models, code, and training data fully accessible and transparent to support further research.
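The autoregressive distillation objective can be pictured as a per-token cross-entropy between the binarized student's predictive distribution and a full-precision teacher's distribution. The sketch below is an illustrative assumption about the general shape of such a loss, not the paper's verbatim definition; the function name `ad_loss` and the toy tensor sizes are hypothetical.

```python
import torch
import torch.nn.functional as F

def ad_loss(student_logits, teacher_logits):
    """Illustrative autoregressive distillation (AD) loss sketch:
    token-level cross-entropy of the student distribution against
    the (full-precision) teacher distribution, averaged over
    batch and sequence positions."""
    teacher_probs = F.softmax(teacher_logits, dim=-1)
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    # Cross-entropy per token, then mean over batch and sequence.
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

# Toy example: batch=2, sequence length=3, vocabulary size=5.
torch.manual_seed(0)
student = torch.randn(2, 3, 5)
teacher = torch.randn(2, 3, 5)
loss = ad_loss(student, teacher)
```

Because the loss is a cross-entropy against a proper distribution, it is a non-negative scalar and can be minimized with standard backpropagation through the student.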

**Input**: Models input text only.

**Output**: Models generate text only.

## Tokenizer
We use the same tokenizer as [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).

## Training Data
We use [AmberDatasets](https://huggingface.co/datasets/LLM360/AmberDatasets) to train our models.

## Results

![main results](https://huggingface.co/LiqunMa/FBI-LLM_7B/blob/main/main_result.jpg)

## How to use
Please download the code from [LiqunMa/FBI-LLM](https://github.com/LiqunMa/FBI-LLM) first.
```python
def load_model(model_size, model_dir):
    # ... (model and tokenizer construction elided in this diff;
    #      see the full script in LiqunMa/FBI-LLM) ...
    for param in model.parameters():
        param.data = param.data.to(torch.float16)

    return model, tokenizer
```
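The tail of `load_model` casts every parameter to `torch.float16` after the model is built. That casting pattern can be shown in isolation on a stand-in module (`nn.Linear` here is purely illustrative; the real code applies the same loop to the FBI-LLM model):

```python
import torch
import torch.nn as nn

# Stand-in module, used only to illustrate the casting step above.
model = nn.Linear(8, 4)

# Cast every parameter tensor to half precision, as load_model does.
for param in model.parameters():
    param.data = param.data.to(torch.float16)

# All parameters now live in float16.
assert all(p.dtype == torch.float16 for p in model.parameters())
```

Assigning to `param.data` converts the storage in place without rebuilding the module, which is why the snippet loops over parameters rather than calling a constructor again.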

## Citation
### BibTeX:
```
```