|
--- |
|
datasets: |
|
- natural_instructions |
|
- the_pile |
|
- cot |
|
- Muennighoff/P3 |
|
inference: |
|
parameters: |
|
max_new_tokens: 5 |
|
temperature: 1.0 |
|
top_k: 1 |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
widget: |
|
- |
|
example_title: "Sentiment Analysis" |
|
text: |- |
|
The task is to label the post's emotion as sadness, joy, love, anger, fear, or surprise. |
|
|
|
Input: I'm feeling quite sad and sorry for myself but ill snap out of it soon. |
|
Output: sadness |
|
|
|
Input: I am just feeling cranky and blue. |
|
Output: anger |
|
|
|
Input: I can have for a treat or if i am feeling festive. |
|
Output: |
|
- |
|
example_title: "Country Currency" |
|
text: |- |
|
Return the currency of the given country. |
|
|
|
Input: Switzerland |
|
Output: Swiss Franc |
|
|
|
Input: India |
|
Output: |
|
- |
|
example_title: "Tweet Eval Hate" |
|
text: |- |
|
Label whether the following tweet contains hate speech against either immigrants or women. Hate Speech (HS) is commonly defined as any communication that disparages a person or a group on the basis of some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics. |
|
Possible labels: |
|
1. hate speech |
|
2. not hate speech |
|
|
|
Tweet: HOW REFRESHING! In South Korea, there is no such thing as 'political correctness" when it comes to dealing with Muslim refugee wannabes via @user |
|
Label: hate speech |
|
|
|
Tweet: New to Twitter-- any men on here know what the process is to get #verified? |
|
Label: not hate speech |
|
|
|
Tweet: Dont worry @user you are and will always be the most hysterical woman. |
|
Label: |
|
- |
|
example_title: "Entity Recognition" |
|
text: |- |
|
Extract all the names of people, places, and organizations from the following sentences. |
|
|
|
Sentence: Satya Nadella, the CEO of Microsoft, was visiting the Bahamas last May. |
|
Entities: Satya Nadella, Microsoft, Bahamas |
|
|
|
Sentence: Pacific Northwest cities include Seattle and Portland, which I have visited with Vikash. |
|
Entities: |
|
- |
|
example_title: "Data Clearning" |
|
text: |- |
|
Format the data into a CSV file: |
|
|
|
Input: Jane Doe [email protected] (520) 382 2435 |
|
Output: Jane Doe,[email protected],520-382-2435 |
|
|
|
Input: Peter Lee (510) 333-2429 email: [email protected] |
|
Output: |
|
--- |
|
|
|
<h1 style="font-size: 42px">GPT-JT</h1>
|
|
|
# Model Summary |
|
We present GPT-JT, a fork of GPT-J (6B) fine-tuned on 3.53 billion tokens, which outperforms most 100B+ parameter models on classification tasks.
|
GPT-JT was trained with a new decentralized algorithm on computers networked with 1Gbps interconnect, in contrast with typical 100Gbps-1.6Tbps data center networks. |
|
GPT-JT is a dense model that processes the prompt with bidirectional attention, fully leveraging the context information, and uses causal attention only for token generation.
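
As a concrete illustration of this attention pattern, here is a small prefix-LM style mask sketch (conceptual only; the released checkpoint is loaded as a regular causal language model via `transformers`, so you do not need to build this mask yourself):

```python
import torch

def prefix_lm_mask(prompt_len: int, total_len: int) -> torch.Tensor:
    # Boolean mask where True means "query position i may attend to key position j".
    i = torch.arange(total_len).unsqueeze(1)  # query positions
    j = torch.arange(total_len).unsqueeze(0)  # key positions
    causal = j <= i                        # generated tokens attend causally
    bidirectional_prefix = j < prompt_len  # prompt tokens see the whole prompt
    return causal | bidirectional_prefix

# Prompt of 3 tokens followed by 2 generated tokens.
print(prefix_lm_mask(prompt_len=3, total_len=5).int())
```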
|
|
|
***Please try out our [Online Demo](https://huggingface.co/spaces/togethercomputer/GPT-JT)!*** |
|
|
|
# Quick Start |
|
```python |
|
from transformers import pipeline |
|
pipe = pipeline(model='togethercomputer/GPT-JT-6B-v1') |
|
pipe('''"I love this!" Is it positive? A:''') |
|
``` |
|
|
|
or |
|
|
|
```python |
|
from transformers import AutoTokenizer, AutoModelForCausalLM |
|
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1") |
|
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1") |
|
``` |
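
A minimal generation sketch with the model and tokenizer loaded above; the decoding settings below mirror the widget defaults in this card (`max_new_tokens: 5`, `top_k: 1`, which is effectively greedy decoding), so adjust them for your own use case:

```python
prompt = '''"I love this!" Is it positive? A:'''

# Tokenize the prompt and greedily generate a short continuation.
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```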
|
|
|
# Training Data |
|
We fine-tune [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) on the following datasets: Natural-Instructions (NI), P3, MMLU-COT, and the Pile.
|
- [Natural-Instructions](https://github.com/allenai/natural-instructions) |
|
- [P3](https://huggingface.co/datasets/Muennighoff/P3) |
|
- [MMLU-COT](https://github.com/jasonwei20/flan-2/blob/main/mmlu-cot.json) |
|
- [The Pile](https://huggingface.co/datasets/the_pile)
|
|
|
We first train for 2.62 billion tokens using the UL2 loss, followed by 0.92 billion tokens on a mixture of the above datasets: 5% COT, 20% P3, 20% NI, and 55% the Pile.
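
As a rough sketch of how such a mixture could be sampled during fine-tuning (illustrative only; the dataset names are placeholders and this is not the actual training pipeline):

```python
import random

# Mixture weights for the second training phase described above.
MIXTURE = {"cot": 0.05, "p3": 0.20, "ni": 0.20, "pile": 0.55}

def sample_source(rng: random.Random) -> str:
    # Pick a dataset name with probability proportional to its mixture weight.
    names, weights = zip(*MIXTURE.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
print([sample_source(rng) for _ in range(10)])
```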
|
|
|
# Hyperparameters |
|
We used AdamW with a learning rate of 1e-5 and a global batch size of 64.
|
We used mixed-precision training, where activations are kept in FP16 while optimizer states are kept in FP32.
|
We used both data parallelism and pipeline parallelism during training.
|
During training, we truncated input sequences to 2048 tokens; sequences with fewer than 2048 tokens were concatenated into one long sequence to improve data efficiency.
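
A minimal sketch of this packing step (illustrative only; the actual training code may differ, and the tokenized examples below are stand-ins):

```python
from typing import Iterable, Iterator, List

MAX_LEN = 2048  # training context length

def pack_sequences(token_streams: Iterable[List[int]]) -> Iterator[List[int]]:
    # Truncate each tokenized example to MAX_LEN, then concatenate examples
    # into chunks of at most MAX_LEN tokens to reduce padding waste.
    buffer: List[int] = []
    for tokens in token_streams:
        buffer.extend(tokens[:MAX_LEN])
        while len(buffer) >= MAX_LEN:
            yield buffer[:MAX_LEN]
            buffer = buffer[MAX_LEN:]
    if buffer:
        yield buffer

chunks = list(pack_sequences([[1] * 3000, [2] * 500, [3] * 1800]))
print([len(c) for c in chunks])  # -> [2048, 2048, 252]
```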
|
|
|
# Infrastructure |
|
We used [the Together Research Computer](https://together.xyz/) to conduct training. |