---
language:
- en
- ru
- de
- es
- fr
- ja
- it
- vi
- nl
- pl
- pt
- id
- fa
- ar
- el
- tr
- cs
- zh
- ro
- sv
- hu
- uk
- bg
- no
- hi
- fi
- da
- sk
- ko
- hr
- ca
- he
- bn
- lt
- ta
- sr
- sl
- et
- lv
- ne
- mr
- ka
- ml
- mk
- ur
- sq
- kk
- te
- hy
- az
- is
- gl
- kn
library_name: nemo
tags:
- text generation
- pytorch
- causal-lm
license: cc-by-4.0

---
# NVLLM 2B

<style>
img {
 display: inline;
}
</style>

|[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-2B-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-Multilingual-green)](#datasets)|

## Model Description

NVLLM-GPT 2B is a transformer-based language model. GPT refers to a class of decoder-only transformer models similar to GPT-2 and GPT-3, while 2B refers to the total trainable parameter count (2 billion) [1, 2].

This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).

## Model Architecture Improvements

- The model uses the SwiGLU activation function [4] (a minimal sketch follows this list).
- Rotary positional embeddings (RoPE) [5].
- Maximum sequence length of 4,096, compared to 2,048 in https://huggingface.co/nvidia/nemo-megatron-gpt-20B.
- No dropout.
- No bias terms in any linear layer.

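For illustration only, here is a minimal PyTorch sketch of a SwiGLU feed-forward block of the kind referenced above. The dimensions, class name, and example input are hypothetical placeholders and do not reflect the released checkpoint's actual configuration; the bias-free linear layers mirror the last bullet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """Feed-forward block with the SwiGLU activation [4].

    Illustrative sketch only: sizes are placeholders, not the 2B model's
    actual hidden dimensions.
    """
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        # No bias terms, mirroring the architecture notes above.
        self.w_gate = nn.Linear(d_model, d_ff, bias=False)
        self.w_up = nn.Linear(d_model, d_ff, bias=False)
        self.w_down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: Swish(x W_gate) multiplied elementwise with (x W_up),
        # then projected back to the model dimension.
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

# Example forward pass on random activations (batch, sequence, d_model).
ffn = SwiGLUFeedForward(d_model=2048, d_ff=5440)
y = ffn(torch.randn(2, 8, 2048))
print(y.shape)  # torch.Size([2, 8, 2048])
```
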
## Getting started

Note: You will need NVIDIA Ampere or Hopper GPUs to work with this model.

### Step 1: Install NeMo and dependencies

You will need to install NVIDIA Apex and NeMo.

```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```

```
pip install nemo_toolkit['nlp']==1.11.0
```

Alternatively, you can use the NeMo Megatron training Docker container, which has all dependencies pre-installed.

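After installing, you can optionally confirm the environment is usable. This is a minimal sanity-check sketch, not part of the official instructions:

```python
# Sanity check (a minimal sketch): verify that the NeMo toolkit and its
# NLP collection import cleanly after the install above.
import nemo
import nemo.collections.nlp as nemo_nlp

print(nemo.__version__)   # expected to match the 1.11.0 pin above
print(nemo_nlp.__name__)  # confirms the NLP collection is available
```
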
### Step 2: Launch eval server

**Note.** The example below launches a model variant with Tensor Parallelism (TP) of 1 and Pipeline Parallelism (PP) of 1 on 1 GPU.

```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_gpt_eval.py gpt_model_file=nemo_gpt2B.nemo server=True tensor_model_parallel_size=1 trainer.devices=1
```

### Step 3: Send prompts to your model!
```python
import json
import requests

port_num = 5555
headers = {"Content-Type": "application/json"}

def request_data(data):
    # Send the generation request to the eval server launched in Step 2.
    resp = requests.put('http://localhost:{}/generate'.format(port_num),
                        data=json.dumps(data),
                        headers=headers)
    sentences = resp.json()['sentences']
    return sentences


# Prompt and sampling parameters for the generation request.
data = {
    "sentences": ["Tell me an interesting fact about space travel."]*1,
    "tokens_to_generate": 50,
    "temperature": 1.0,
    "add_BOS": True,
    "top_k": 0,
    "top_p": 0.9,
    "greedy": False,
    "all_probs": False,
    "repetition_penalty": 1.2,
    "min_tokens_to_generate": 2,
}

sentences = request_data(data)
print(sentences)
```

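The `request_data` helper above can be reused with other sampling settings. As a small illustrative variation (not part of the original example), the request below switches to greedy decoding by changing only a field already present in `data`:

```python
# Reuse the request above, but ask the server for greedy decoding
# instead of top-p sampling.
greedy_request = dict(data, greedy=True)
print(request_data(greedy_request))
```
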
## Training Data

The model was trained on 1.1T tokens obtained from publicly available data sources. The dataset comprises 53 languages and code.

## Evaluation results

*Zero-shot performance.* Evaluated using the [LM Evaluation Test Suite from AI21](https://github.com/AI21Labs/lm-evaluation).

| ARC-Challenge | ARC-Easy | RACE-middle | Winogrande | RTE | BoolQ | HellaSwag | PiQA |
| ------------- | -------- | ----------- | ---------- | --- | ----- | --------- | ---- |
| 0.3558 | 0.45300 | 0.3997 | 0.5801 | 0.556 | 0.5979 | 0.592 | 0.7437 |

## Limitations

The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts.

## References

[1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)

[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

[4] [GLU Variants Improve Transformer](https://arxiv.org/abs/2002.05202)

[5] [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864)

## License

Use of this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.