awacke1 committed
Commit 6530e47 · 1 Parent(s): 2d77b0d

Update app.py

Files changed (1): app.py +83 -0
app.py CHANGED
@@ -17,4 +17,87 @@ with gr.Blocks() as demo:
  btn.click(complete_with_gpt, textbox, textbox)
 
+  with gr.Row():
+      gr.Markdown("""
+ # BigScience Creates a 176-Billion-Parameter Large Language Model, Setting a New Record for the Most Performant and Efficient Open AI Model for Science
+
+ 1. Video playlist to check it out: https://www.youtube.com/playlist?list=PLHgX2IExbFouqnsIqziThlPCX_miiDq14
+ 2. Summary of important models and sizes:
+
+ # Model Sizes to Date
+
+ Model Name | Model Size (in Parameters)
+ ----------|---------------------------
+ BigScience-tr11-176B | 176 billion
+ GPT-3 | 175 billion
+ OpenAI's DALL-E 2.0 | 500 million
+ NVIDIA's Megatron | 8.3 billion
+ Google's BERT | 340 million
+ GPT-2 | 1.5 billion
+ OpenAI's GPT-1 | 117 million
+ ELMo | 90 million
+ ULMFiT | 100 million
+ Transformer-XL | 250 million
+ XLNet | 210 million
+ RoBERTa | 125 million
+ ALBERT | 12 million
+ DistilBERT | 66 million
+
+ 3. Background information on ChatGPT, BLOOM from BigScience on the Hugging Face platform, RLHF (deep RL), and one- to few-shot learning and generators:
+
+ # ChatGPT Datasets:
+ 1. WebText
+ 2. Common Crawl
+ 3. BooksCorpus
+ 4. English Wikipedia
+ 5. Toronto Books Corpus
+ 6. OpenWebText
+
+ # Comparison to the BigScience Model:
+
+ # BigScience - How to Get Started
+
+ BigScience is a new 176B-parameter ML model trained on a broad set of natural language processing datasets, with many other tasks not yet explored. Below is a set of papers, models, links, and datasets around BigScience, which promises to be the best and most recent large model of its kind, benefiting all science pursuits. A minimal loading sketch follows the model link below.
+
+ # Model: https://huggingface.co/bigscience/bloom
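+
+ A minimal sketch (not part of this commit) of loading the model with the Hugging Face transformers library; the full 176B checkpoint needs hundreds of GB of memory, so the small bigscience/bloom-560m checkpoint stands in for local testing:
+
+ ```python
+ # Sketch: generate text from a BLOOM checkpoint with transformers.
+ # bigscience/bloom-560m is a small stand-in; swap in bigscience/bloom
+ # only if your hardware can hold the full 176B model.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
+ model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
+
+ inputs = tokenizer("The BigScience project is", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=30)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```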
+
+ # Papers:
+ 1. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model https://arxiv.org/abs/2211.05100
+ 2. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism https://arxiv.org/abs/1909.08053
+ 3. 8-bit Optimizers via Block-wise Quantization https://arxiv.org/abs/2110.02861
+ 4. Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation https://arxiv.org/abs/2108.12409 (see the bias sketch after this list)
+ 5. Models tagged with DOI 10.57967/hf/0003: https://huggingface.co/models?other=doi:10.57967/hf/0003
+ 6. 217 other models specializing BLOOM: https://huggingface.co/models?other=bloom
+
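+ A minimal sketch (illustration only, not code from this app) of the linear attention bias from paper 4 (ALiBi), which BLOOM uses in place of positional embeddings; the exact slope formula below assumes a power-of-two head count:
+
+ ```python
+ # ALiBi sketch: build the per-head linear bias added to attention logits.
+ import torch
+
+ def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
+     # Head slopes form a geometric sequence starting at 2 ** (-8 / n_heads).
+     ratio = 2 ** (-8 / n_heads)
+     slopes = torch.tensor([ratio ** (i + 1) for i in range(n_heads)])
+     # distances[i, j] = j - i, so keys far to the left of the query
+     # receive a larger penalty; result shape is (heads, query, key).
+     pos = torch.arange(seq_len)
+     distances = pos[None, :] - pos[:, None]
+     return slopes[:, None, None] * distances[None, :, :]
+
+ # Added to the (causally masked) attention logits before softmax.
+ print(alibi_bias(8, 16).shape)  # torch.Size([8, 16, 16])
+ ```
+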
+ # Datasets
+ 1. Universal Dependencies: https://paperswithcode.com/dataset/universal-dependencies
+ 2. WMT 2014: https://paperswithcode.com/dataset/wmt-2014
+ 3. The Pile: https://paperswithcode.com/dataset/the-pile
+ 4. HumanEval: https://paperswithcode.com/dataset/humaneval
+ 5. FLORES-101: https://paperswithcode.com/dataset/flores-101
+ 6. CrowS-Pairs: https://paperswithcode.com/dataset/crows-pairs
+ 7. WikiLingua: https://paperswithcode.com/dataset/wikilingua
+ 8. MTEB: https://paperswithcode.com/dataset/mteb
+ 9. xP3: https://paperswithcode.com/dataset/xp3 (see the loading sketch after this list)
+ 10. DiaBLa: https://paperswithcode.com/dataset/diabla
+
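+ A minimal sketch of streaming a few examples from xP3, BigScience's multitask prompt collection, with the Hugging Face datasets library (the Hub id and config name are assumptions and may differ):
+
+ ```python
+ # Stream a few xP3 records without downloading the whole dataset.
+ from datasets import load_dataset
+
+ ds = load_dataset("bigscience/xP3", "en", split="train", streaming=True)
+ for i, example in enumerate(ds):
+     print(example)  # prompt/target pairs used for multitask fine-tuning
+     if i == 2:
+         break
+ ```
+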
+ # Deep RL ML Strategy
+
+ 1. Language Model Preparation: Human-Augmented Supervised Fine-Tuning
+ 2. Reward Model Training with a Prompts Dataset; Multiple Models Generate Data to Rank (see the sketch after this list)
+ 3. Fine-Tuning with Reinforcement Reward and Distance Distribution Regret Score
+ 4. Proximal Policy Optimization Fine-Tuning
+
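+ A minimal sketch of step 2 (assumptions: a tiny MLP stands in for the pretrained language model, and the pairwise logistic ranking loss common in RLHF work pushes the preferred response's score above the rejected one's):
+
+ ```python
+ # Reward-model sketch: score (chosen, rejected) pairs and train with a
+ # pairwise ranking loss so chosen responses outscore rejected ones.
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class RewardModel(nn.Module):
+     def __init__(self, dim: int = 32):
+         super().__init__()
+         # Placeholder encoder; in practice, a pretrained language model.
+         self.encoder = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
+         self.score = nn.Linear(dim, 1)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         return self.score(self.encoder(x)).squeeze(-1)  # one scalar per example
+
+ model = RewardModel()
+ opt = torch.optim.Adam(model.parameters(), lr=1e-3)
+
+ # Stand-in features for a batch of (chosen, rejected) response pairs.
+ chosen, rejected = torch.randn(8, 32), torch.randn(8, 32)
+ loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
+ opt.zero_grad()
+ loss.backward()
+ opt.step()
+ ```
+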
+ # Variations - Preference Model Pretraining
+
+ 1. Use Ranking Datasets for Sentiment - Thumbs Up/Down, Distribution
+ 2. Online Version Gathering Feedback
+ 3. OpenAI - InstructGPT - Humans Generate LM Training Text
+ 4. DeepMind - Advantage Actor-Critic: Sparrow, GopherCite
+ 5. Reward Model from Human Preference Feedback
+
+ """)
+
  demo.launch()