YikangS committed f84e990 (parent: 689905c)

update readme

Files changed (1): README.md (+4 −4)
@@ -18,10 +18,10 @@ model = AutoModelForCausalLM.from_pretrained('ibm/MoLM-350M-4B')
 ```
 
 **Model Details**
-MoLM-350M-4B is a MoE-based language models. It has 4 billion parameters, but each input token only use 350M parameteres during its inference. Thus, it's computationally equivelant to a 350M dense model.
-MoLM-700M-4B has 4 billion parameters and computationally equivelant to a 700M dense model.
-MoLM-700M-8B has 8 billion parameters and computationally equivelant to a 700M dense model.
-Both models are trained on 300 billion tokens from publicly available sources, with a learning rate of 3.0 x 10<sup>-4</sup> and a global batch-size of 3M tokens.
+MoLM-350M-4B is a MoE-based language model. It has 4 billion parameters, but each input token only activates 350M parameters. Thus, it's computationally equivalent to a 350M dense model.
+MoLM-700M-4B has 4 billion parameters and is computationally equivalent to a 700M dense model.
+MoLM-700M-8B has 8 billion parameters and is computationally equivalent to a 700M dense model.
+All models are trained on 300 billion tokens from publicly available sources, with a learning rate of 3.0 x 10<sup>-4</sup> and a global batch size of 3M tokens.
 
 **Model Developers** IBM
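The updated README text hinges on the MoE distinction between total and active parameters (4B stored, 350M used per token). The following is a minimal sketch of top-k expert routing that illustrates that idea; the router, expert count, and parameter sizes here are illustrative assumptions, not MoLM's actual architecture.

```python
def route(token_scores, num_experts, k):
    """Pick the top-k expert indices for one token (hypothetical router)."""
    ranked = sorted(range(num_experts), key=lambda e: token_scores[e], reverse=True)
    return ranked[:k]

num_experts = 8          # total experts held in memory (illustrative)
k = 1                    # experts activated per token (illustrative)
params_per_expert = 500  # illustrative parameter count per expert

# Hypothetical router scores for one token; expert 1 scores highest.
scores = [0.1, 0.9, 0.2, 0.05, 0.3, 0.4, 0.15, 0.25]
active = route(scores, num_experts, k)

total_params = num_experts * params_per_expert
active_params = k * params_per_expert
print(active)                        # only expert 1 fires for this token
print(active_params / total_params)  # fraction of parameters actually used
```

With these toy numbers, only 1/8 of the expert parameters participate in a forward pass for a given token, which is the same reason MoLM-350M-4B stores 4B parameters but is computationally comparable to a 350M dense model.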