Update README.md
README.md CHANGED
````diff
@@ -80,7 +80,7 @@ widget:
 We present GPT-JT, a fork of GPT-J (6B), trained for 20,000 steps, that outperforms most 100B+ parameter models at classification and improves on most tasks relative to GPT-J-6B. GPT-JT was trained with a new decentralized algorithm on computers networked over slow 1Gbps links.
 GPT-JT is a bidirectional dense model, trained with the UL2 objective on NI, P3, COT, and the Pile data.
 
-**Please check out our [Online Demo](https://huggingface.co/spaces/togethercomputer/
+**Please check out our [Online Demo](https://huggingface.co/spaces/togethercomputer/GPT-JT)!**
 
 # Quick Start
 ```python
````
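The hunk ends just as the README's Quick Start code block opens, so the snippet itself is not part of this change. For orientation only, here is a minimal sketch of what such a Quick Start typically looks like, assuming the standard Hugging Face `transformers` API; the checkpoint name `togethercomputer/GPT-JT-6B-v1` is an assumption and the actual code lives below this hunk in the README.

```python
# Minimal sketch (assumption): loading GPT-JT with the Hugging Face transformers API.
# The checkpoint id "togethercomputer/GPT-JT-6B-v1" is assumed, not taken from this diff.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")

# Tokenize a prompt and generate a short completion.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```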