Text Generation · Transformers · PyTorch · English · gptj · Inference Endpoints
Commit ab73a1e by juewang (1 parent: 918e74e)

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -80,7 +80,7 @@ widget:
  We present GPT-JT, a fork of GPT-J-6B, trained for 20,000 steps, that outperforms most 100B+ parameter models at classification and improves on most tasks relative to GPT-J-6B. GPT-JT was trained with a new decentralized algorithm on computers networked over slow 1 Gbps links.
  GPT-JT is a bidirectional dense model, trained through the UL2 objective with NI, P3, COT, and the Pile data.
 
- **Please check out our [Online Demo](https://huggingface.co/spaces/togethercomputer/TOMA-app)!.**
+ **Please check out our [Online Demo](https://huggingface.co/spaces/togethercomputer/GPT-JT)!**
 
  # Quick Start
  ```python
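
For context, a minimal sketch of how a GPT-J-class checkpoint like this is typically loaded and queried with the Hugging Face `transformers` API. This is an illustration only, not the README's actual Quick Start snippet; the model ID `togethercomputer/GPT-JT-6B-v1` and the example prompt are assumptions, not taken from this commit.

```python
# Minimal sketch: load a GPT-J-class checkpoint with Hugging Face transformers.
# NOTE: the model ID below is an assumption for illustration; use the actual repo name.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "togethercomputer/GPT-JT-6B-v1"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation for a classification-style prompt.
prompt = "The sentiment of the sentence 'I loved this movie' is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```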