Update README.md
README.md CHANGED
@@ -198,7 +198,7 @@ widget:
 We present GPT-JT, a fork of GPT-J (6B), trained for 20,000 steps, that outperforms most 100B+ parameter models at classification, and improves most tasks relative to GPT-J-6B. GPT-JT was trained with a new decentralized algorithm on computers networked on slow 1Gbps links.
 GPT-JT is a bidirectional dense model, trained with the UL2 objective on NI, P3, COT, and the Pile data.
 
-**Please check out our
+**Please check out our [Online Demo](https://huggingface.co/spaces/togethercomputer/TOMA-app)!**
 
 # Quick Start
 ```python
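The hunk ends right after the opening ```` ```python ```` fence, so the Quick Start code itself is not part of this change. For context, a minimal sketch of what loading the model with the Hugging Face `transformers` pipeline typically looks like is shown below; the model id `togethercomputer/GPT-JT-6B-v1`, the prompt, and the generation settings are assumptions, not taken from this diff.

```python
# Minimal sketch only: the model id, prompt, and generation settings are
# assumptions, not taken from the truncated Quick Start hunk above.
from transformers import pipeline

# Load GPT-JT through the standard text-generation pipeline.
generator = pipeline("text-generation", model="togethercomputer/GPT-JT-6B-v1")

# Simple classification-style prompt; GPT-JT is reported to be strong at classification.
prompt = "Classify the sentiment of the review as positive or negative.\nReview: Great movie!\nSentiment:"
print(generator(prompt, max_new_tokens=3)[0]["generated_text"])
```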