import gradio as gr

# Load the hosted BLOOM model from the Hugging Face Hub for inference.
# api = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
api = gr.Interface.load("models/bigscience/bloom")

def complete_with_gpt(text):
    # Use the last 100 characters of the text as context and append the
    # model's continuation to the untouched prefix.
    # return text[:-50] + api(text[-50:])  # 50-character context variant
    return text[:-100] + api(text[-100:])

with gr.Blocks() as demo:
    with gr.Row():
        textbox = gr.Textbox(placeholder="Type here and press enter...", lines=14)
        with gr.Column():
            btn = gr.Button("Generate")
            btn.click(complete_with_gpt, textbox, textbox)
    with gr.Row():
        gr.Markdown("""
# Outline of Exciting AI Developments! πŸ€–πŸ’»πŸ”¬
Here is an outline of some of the most exciting recent developments in AI:
## Language Models πŸ—£οΈ
πŸ† Bloom sets new record for most performant and efficient AI model in science! 🌸
### Comparison of Large Models
| Model Name | Model Size (Parameters, approx.) |
| ----------------- | -------------------------- |
| BigScience BLOOM (tr11-176B) | 176 billion |
| GPT-3 | 175 billion |
| OpenAI DALL-E 2 (image generation) | 3.5 billion |
| NVIDIA Megatron-LM | 8.3 billion |
| Transformer-XL (large) | 257 million |
| XLNet (large) | 340 million |
## ChatGPT Datasets πŸ“š
- WebText
- Common Crawl
- BooksCorpus (also known as the Toronto Book Corpus)
- English Wikipedia
- OpenWebText
## Big Science Model πŸš€
- πŸ“œ Papers:
1. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model [Paper](https://arxiv.org/abs/2211.05100)
2. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism [Paper](https://arxiv.org/abs/1909.08053)
3. 8-bit Optimizers via Block-wise Quantization [Paper](https://arxiv.org/abs/2110.02861)
4. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation [Paper](https://arxiv.org/abs/2108.12409)
5. [Other papers related to Big Science](https://huggingface.co/models?other=doi:10.57967/hf/0003)
6. [217 other models optimized for use with Bloom](https://huggingface.co/models?other=bloom)
- πŸ“š Datasets (see the loading sketch at the end of this section):
  - Universal Dependencies: A collection of annotated corpora for natural language processing in a range of languages, with a focus on dependency parsing.
    - [Universal Dependencies official website.](https://universaldependencies.org/)
  - WMT 2014: The 2014 edition of the Workshop on Statistical Machine Translation, featuring shared tasks on translating between English and various other languages.
    - [WMT14 website.](http://www.statmt.org/wmt14/)
  - The Pile: An English language corpus of diverse text, sourced from various places on the internet.
    - [The Pile official website.](https://pile.eleuther.ai/)
  - HumanEval: A benchmark of hand-written Python programming problems used to evaluate code generation from natural-language docstrings.
    - [OpenAI's human-eval repository.](https://github.com/openai/human-eval)
  - FLORES-101: A benchmark of parallel sentences in 101 languages, designed for evaluating multilingual machine translation.
    - [FLORES repository.](https://github.com/facebookresearch/flores)
  - CrowS-Pairs: A dataset of paired sentences for measuring social biases in masked language models.
    - [CrowS-Pairs repository.](https://github.com/nyu-mll/crows-pairs)
  - WikiLingua: A cross-lingual abstractive summarization dataset of article/summary pairs in 18 languages, sourced from WikiHow.
    - [WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization.](https://arxiv.org/abs/2010.03093)
  - MTEB: The Massive Text Embedding Benchmark, covering a wide range of embedding tasks across many datasets and languages.
    - [MTEB repository.](https://github.com/embeddings-benchmark/mteb)
  - xP3: A multilingual collection of prompts and datasets spanning many tasks and languages, used for multitask finetuning of BLOOM (BLOOMZ).
    - [xP3 on the Hugging Face Hub.](https://huggingface.co/datasets/bigscience/xP3)
  - DiaBLa: A parallel English-French dataset of informal written dialogues for evaluating machine translation in context.
    - [DiaBLa dataset repository.](https://github.com/rbawden/DiaBLa-dataset)
- πŸ“š Dataset Papers with Code
1. [Universal Dependencies](https://paperswithcode.com/dataset/universal-dependencies)
2. [WMT 2014](https://paperswithcode.com/dataset/wmt-2014)
3. [The Pile](https://paperswithcode.com/dataset/the-pile)
4. [HumanEval](https://paperswithcode.com/dataset/humaneval)
5. [FLORES-101](https://paperswithcode.com/dataset/flores-101)
6. [CrowS-Pairs](https://paperswithcode.com/dataset/crows-pairs)
7. [WikiLingua](https://paperswithcode.com/dataset/wikilingua)
8. [MTEB](https://paperswithcode.com/dataset/mteb)
9. [xP3](https://paperswithcode.com/dataset/xp3)
10. [DiaBLa](https://paperswithcode.com/dataset/diabla)
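Most of the datasets above can be pulled directly with the Hugging Face `datasets` library. Below is a minimal loading sketch; the Hub identifiers and configuration names are assumptions based on the public dataset pages and may need adjustment.

```python
# Sketch: loading two of the datasets above with the Hugging Face `datasets` library.
# The dataset identifiers and config names below are assumptions and may change on the Hub.
from datasets import load_dataset

# CrowS-Pairs bias-evaluation sentence pairs (assumed Hub id: "crows_pairs").
crows = load_dataset("crows_pairs", split="test")
print(crows[0])

# xP3 multilingual prompt collection from BigScience (assumed id/config: "bigscience/xP3", "en").
xp3 = load_dataset("bigscience/xP3", "en", split="train", streaming=True)
print(next(iter(xp3)))
```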
# Deep RL ML Strategy 🧠
The AI strategies are (a minimal reward-model training sketch follows this list):
- Language model preparation: supervised fine tuning on human-written demonstrations πŸ€–
- Reward model training: multiple models generate responses to a prompts dataset, and humans rank them 🎁
- Fine tuning with the reinforcement reward signal plus a penalty for drifting too far from the original model's distribution 🎯
- Proximal Policy Optimization (PPO) fine tuning 🀝
- Variations - preference model pretraining πŸ€”
- Use of ranking datasets and sentiment signals - thumbs up/down, distributions πŸ“Š
- Online versions that keep collecting feedback πŸ’¬
- OpenAI - InstructGPT - humans write demonstrations used as LM training text πŸ”
- DeepMind - Advantage Actor Critic for Sparrow, GopherCite 🦜
- Reward models trained on human preference feedback πŸ†
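The reward-model step above can be illustrated with a short, self-contained sketch. This is a minimal illustration only, not code from OpenAI, DeepMind, or BigScience; the `RewardModel` class, the toy GRU encoder, and the random batches are placeholders standing in for a pretrained LM trunk and real human-ranked comparisons.

```python
# Minimal sketch of reward-model training with a pairwise ranking loss.
# All names (RewardModel, chosen/rejected batches) are illustrative placeholders.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, vocab_size=50257, hidden_size=256):
        super().__init__()
        # Toy stand-in encoder; a real reward model reuses the pretrained LM trunk.
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, input_ids):
        x = self.embed(input_ids)             # (batch, seq, hidden)
        _, h = self.encoder(x)                # h: (1, batch, hidden)
        return self.score(h[-1]).squeeze(-1)  # one scalar reward per sequence

def ranking_loss(reward_chosen, reward_rejected):
    # Push the human-preferred response to score higher than the rejected one.
    return -torch.log(torch.sigmoid(reward_chosen - reward_rejected)).mean()

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Toy batch: token ids for a preferred and a rejected completion of the same prompts.
chosen = torch.randint(0, 50257, (4, 32))
rejected = torch.randint(0, 50257, (4, 32))

optimizer.zero_grad()
loss = ranking_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
print(float(loss))
```

In the full pipeline, the frozen reward model then scores rollouts from the policy during PPO fine tuning, usually together with a KL penalty towards the original model.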
For more information on specific techniques and implementations, check out the following resources:
- OpenAI's paper on [GPT-3](https://arxiv.org/abs/2005.14165), which details their approach to large-scale language model pretraining
- DeepMind's paper on [asynchronous advantage actor-critic (A3C)](https://arxiv.org/abs/1602.01783), which describes the advantage actor-critic family of algorithms
- OpenAI's [InstructGPT paper](https://arxiv.org/abs/2203.02155), which explains their approach to training reward models from human feedback and fine tuning with PPO
- OpenAI's blog post on [GPT-3's fine-tuning process](https://openai.com/blog/fine-tuning-gpt-3/)
""")
demo.launch()
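
# A possible local fallback if the hosted BLOOM endpoint is slow or unavailable: swap the
# Interface.load call for a transformers pipeline with a smaller BLOOM checkpoint.
# This is a sketch only (kept commented out so it never runs at import time);
# "bigscience/bloom-560m" is an assumed smaller checkpoint name on the Hub.
#
# from transformers import pipeline
# generator = pipeline("text-generation", model="bigscience/bloom-560m")
#
# def complete_locally(text):
#     # Generate a continuation for the last 100 characters and re-attach the prefix.
#     continuation = generator(text[-100:], max_new_tokens=50)[0]["generated_text"]
#     return text[:-100] + continuation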