Update README.md
README.md
<h1 style="font-size: 42px">GPT-JT</h1>

# Model Summary

We present GPT-JT, a fork of GPT-J-6B fine-tuned on 3.53 billion tokens, that outperforms most 100B+ parameter models on classification tasks.
GPT-JT was trained with a new decentralized algorithm on computers networked with a 1Gbps interconnect, in contrast with typical 100Gbps-1.6Tbps data center networks.
GPT-JT is a bidirectional dense model: it processes the prompt with bidirectional attention to fully leverage the context information, and uses causal attention only for token generation.

***Please try out our [Online Demo](https://huggingface.co/spaces/togethercomputer/GPT-JT)!***

# Quick Start

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# One-line usage through the text-generation pipeline
pipe = pipeline(model='togethercomputer/GPT-JT-6B-v1')

# Or load the tokenizer and model directly
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")
```

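As a quick check, the pipeline above can be called directly on a prompt string; the prompt below is only an illustrative assumption and does not come from the original card.

```python
# Illustrative prompt only (not from the model card); any short prompt works.
output = pipe("Q: Is the sentence \"I love this!\" positive or negative?\nA:", max_new_tokens=5)
print(output[0]["generated_text"])
```
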
# Training Details

## UL2 Training Objective

We train GPT-JT using the UL2 training objective [1][2].
The usual GPT model, including GPT-J, uses a lower-triangular causal attention mask (left matrix below) for autoregressive generation, so each token can only see the context before itself.
To fully leverage the context information, we continue training with the UL2 objective and use a causal mask with a prefix (right matrix below): bidirectional attention over the prompt and causal attention only for token generation.

$$
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1
\end{bmatrix}
\quad
\begin{bmatrix}
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1
\end{bmatrix}
$$

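To make the two masks above concrete, here is a minimal PyTorch sketch of the prefix (bidirectional-over-prompt) mask; the helper name and the prefix length of 3 are illustrative assumptions, not part of the actual training code.

```python
import torch

def prefix_lm_mask(seq_len: int, prefix_len: int) -> torch.Tensor:
    """Sketch of a prefix-LM attention mask: 1 = may attend, 0 = masked."""
    # Standard lower-triangular causal mask (the left matrix above).
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.long))
    # Prompt tokens attend to each other bidirectionally (the right matrix above).
    mask[:prefix_len, :prefix_len] = 1
    return mask

print(prefix_lm_mask(seq_len=5, prefix_len=3))
```
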
## Data

We fine-tune [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) on the NI, P3, COT, and Pile data:
- [Natural-Instructions](https://github.com/allenai/natural-instructions)
- [P3](https://huggingface.co/datasets/Muennighoff/P3)
- [MMLU-COT](https://github.com/jasonwei20/flan-2/blob/main/mmlu-cot.json)
- [The Pile](https://huggingface.co/datasets/the_pile)

We first train for 2.62 billion tokens using the UL2 loss on the Pile, followed by 0.92 billion tokens on a mixture of the above datasets: 5% COT, 20% P3, 20% NI, and 55% the Pile.

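As a rough sketch of this mixing step (assuming simple per-example weighted sampling; the source labels are placeholders, not the actual data loader):

```python
import random

# Mixture weights for the second training phase, taken from the card above.
MIXTURE = {"cot": 0.05, "p3": 0.20, "ni": 0.20, "pile": 0.55}

def sample_source(rng: random.Random) -> str:
    """Pick which dataset the next training example is drawn from."""
    names, weights = zip(*MIXTURE.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {name: 0 for name in MIXTURE}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # Roughly 5% / 20% / 20% / 55% of the draws.
```
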
## Hyperparameters

We used AdamW with a learning rate of 1e-5 and a global batch size of 64 (16 for each data parallel worker).
We used mixed-precision training, where the activations are kept in FP16 while the optimizer states are kept in FP32.
We used both data parallelism and pipeline parallelism to conduct training.
During training, we truncate the input sequence to 2048 tokens, and for input sequences shorter than 2048 tokens, we concatenate multiple sequences into one long sequence to improve data efficiency.

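The sketch below illustrates the truncation-and-packing step on lists of token IDs; the helper name and toy inputs are assumptions, not the actual training pipeline (a real pipeline would likely also insert separator tokens between documents).

```python
from typing import Iterable, List

MAX_LEN = 2048  # training sequence length from the card

def pack_sequences(token_seqs: Iterable[List[int]], max_len: int = MAX_LEN) -> List[List[int]]:
    """Truncate long sequences and pack short ones into chunks of at most max_len tokens."""
    packed: List[List[int]] = []
    current: List[int] = []
    for seq in token_seqs:
        seq = seq[:max_len]                    # truncate over-long inputs
        if len(current) + len(seq) > max_len:  # current chunk would overflow
            packed.append(current)
            current = []
        current.extend(seq)
    if current:
        packed.append(current)
    return packed

# Toy example with max_len=10 instead of 2048:
print(pack_sequences([[1] * 6, [2] * 5, [3] * 4], max_len=10))
# -> [[1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 3, 3, 3, 3]]
```
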
## Infrastructure
We used [the Together Research Computer](https://together.xyz/) to conduct training.
# References
[1]: Tay, Yi, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler. "Unifying Language Learning Paradigms." arXiv preprint arXiv:2205.05131 (2022).
[2]: Tay, Yi, Jason Wei, Hyung Won Chung, Vinh Q. Tran, David R. So, Siamak Shakeri, Xavier Garcia et al. "Transcending scaling laws with 0.1% extra compute." arXiv preprint arXiv:2210.11399 (2022).