Update README.md
README.md CHANGED
@@ -77,10 +77,11 @@ widget:
 <h1 style="font-size: 42px">GPT-JT</h1>

 # Model Summary
-We present GPT-JT, a fork of GPT-6B, trained on 3.
-GPT-JT
+We present GPT-JT, a fork of GPT-J-6B trained on 3.53 billion tokens, which outperforms most 100B+ parameter models at classification.
+GPT-JT was trained with a new decentralized algorithm on computers networked with a 1Gbps interconnect, in contrast with typical 100Gbps-1.6Tbps data center networks.
+GPT-JT is a bidirectional dense model: it processes the prompt with bidirectional attention to fully leverage the context information, and uses causal attention only for token generation.

-
+***Please try out our [Online Demo](https://huggingface.co/spaces/togethercomputer/GPT-JT)!***

 # Quick Start
 ```python
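
The added summary describes prompt tokens attending bidirectionally while generated tokens attend causally. As a rough illustration of that masking scheme only (a minimal sketch, not GPT-JT's actual implementation; the prompt and sequence lengths below are made up), a prefix-LM style mask can be built like this:

```python
import numpy as np

def prefix_lm_mask(prompt_len: int, total_len: int) -> np.ndarray:
    """mask[i, j] == 1 means position i may attend to position j."""
    # Standard causal (lower-triangular) mask used for token generation.
    mask = np.tril(np.ones((total_len, total_len), dtype=np.int8))
    # Prompt positions additionally attend to later prompt positions,
    # i.e. attention over the prompt is fully bidirectional.
    mask[:prompt_len, :prompt_len] = 1
    return mask

# A 4-token prompt followed by 3 generated tokens.
print(prefix_lm_mask(prompt_len=4, total_len=7))
```

Generated positions still only see the prompt and earlier generated tokens, so decoding remains autoregressive.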
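
The hunk cuts off at the opening of the Quick Start code block, so the snippet itself is not shown in this diff. For orientation only, here is a minimal sketch of the kind of call such a block usually contains, assuming the standard transformers text-generation pipeline; the checkpoint id below is an assumption, not quoted from the diff:

```python
from transformers import pipeline  # requires `pip install transformers torch`

# NOTE: the model id is an assumption for illustration, not taken from this diff.
generator = pipeline("text-generation", model="togethercomputer/GPT-JT-6B-v1")
print(generator("The capital of France is", max_new_tokens=10)[0]["generated_text"])
```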
@@ -104,8 +105,10 @@ We fine-tune [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) on NI, P3, C
 - [MMLU-COT](https://github.com/jasonwei20/flan-2/blob/main/mmlu-cot.json)
 - [the pile](https://huggingface.co/datasets/the_pile)

+We first conduct training for 2.62 billion tokens using the UL2 loss, followed by 0.92 billion tokens with a mixture of the above datasets: 5% of COT, 20% of P3, 20% of NI, and 55% of the Pile.
+
 # Hyperparameters
-We used AdamW with a learning rate of 1e-5 and global batch size of 64
+We used AdamW with a learning rate of 1e-5 and a global batch size of 64.
 We used mixed-precision training, where the activations are kept in FP16 while the optimizer states are kept in FP32.
 We use both data parallelism and pipeline parallelism to conduct training.
 During training, we truncate the input sequence to 2048 tokens; for input sequences shorter than 2048 tokens, we concatenate multiple sequences into one long sequence to improve data efficiency.
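
The second hunk adds the two-phase schedule: 2.62 billion tokens with the UL2 loss, then 0.92 billion tokens drawn from a 5/20/20/55 mixture of COT, P3, NI, and the Pile. As a rough sketch of how such mixture weights translate into choosing a source for each training sequence (the dataset names are placeholders; this is not the project's data loader):

```python
import random

# Mixture weights for the second training phase, as listed in the diff.
MIXTURE = {"COT": 0.05, "P3": 0.20, "NI": 0.20, "Pile": 0.55}

def sample_source(rng: random.Random) -> str:
    """Pick which dataset the next training sequence is drawn from."""
    names, weights = zip(*MIXTURE.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {name: 0 for name in MIXTURE}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
print(counts)  # counts come out roughly proportional to 5/20/20/55
```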
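
The last hyperparameter line describes truncating inputs to 2048 tokens and concatenating shorter sequences to fill the context. Below is a minimal sketch of that packing step; the token ids and helper are illustrative only, and a real pipeline would typically also handle separator tokens and batching:

```python
from typing import Iterable, List

MAX_LEN = 2048  # context length used during training, per the diff

def pack_sequences(token_seqs: Iterable[List[int]], max_len: int = MAX_LEN) -> List[List[int]]:
    """Greedily concatenate tokenized examples into chunks of at most
    max_len tokens; over-long examples are truncated to max_len."""
    packed: List[List[int]] = []
    current: List[int] = []
    for seq in token_seqs:
        seq = seq[:max_len]
        if current and len(current) + len(seq) > max_len:
            packed.append(current)  # current chunk is full, start a new one
            current = []
        current.extend(seq)
    if current:
        packed.append(current)
    return packed

# Three short "tokenized" examples fit into a single 2048-token chunk.
chunks = pack_sequences([[1] * 700, [2] * 900, [3] * 400])
print([len(c) for c in chunks])  # -> [2000]
```

The optimizer settings quoted above (AdamW, learning rate 1e-5, global batch size 64) would correspond to, for example, torch.optim.AdamW with lr=1e-5, with the global batch assembled across the data-parallel workers.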