jchen8000 committed
Commit c48a156 · verified
1 Parent(s): b051e7d

Update app.py

Files changed (1)
1. app.py +3 -3
app.py CHANGED
@@ -137,11 +137,11 @@ with gr.Blocks(theme="Nymbo/Alyx_Theme") as demo:
     # load_sample = gr.Button("Alternatively, Load and Index [Attention Is All You Need.pdf] as a Sample")
     load_sample = gr.Button("Load and Index the following three papers as a RAG Demo")
     sample_description = gr.Markdown("""
-    ## 1. Attention Is All You Need (Vaswani et al., 2017)
+    # 1. Attention Is All You Need (Vaswani et al., 2017)
     This groundbreaking paper introduced the **Transformer** architecture. It revolutionized natural language processing by enabling parallelization and significantly improving performance on tasks like translation, leading to models like *BERT* and *GPT*.
-    ## 2. Generative Adversarial Nets (Goodfellow et al., 2014)
+    # 2. Generative Adversarial Nets (Goodfellow et al., 2014)
     This paper proposed **GANs**, a novel framework for generative modeling using two neural networks—a generator and a discriminator—that compete in a zero-sum game.
-    ## 3. Parameter-Efficient Transfer Learning for NLP (Houlsby et al., 2019)
+    # 3. Parameter-Efficient Transfer Learning for NLP (Houlsby et al., 2019)
     This paper introduces **adapter modules**, a method for fine-tuning large pre-trained language models with significantly fewer parameters.

     It could take several minutes to load and index the files.
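
For context, here is a minimal, self-contained sketch of how the edited snippet fits into a Gradio Blocks app. It is an illustration only, not the actual app.py: the `load_samples` handler and the `status` textbox are hypothetical stand-ins, since the diff does not show how the button is wired up in the real app.

```python
# Minimal sketch (assumptions: Gradio 4.x installed, network access to fetch the
# "Nymbo/Alyx_Theme" theme from the Hugging Face Hub). Not the actual app.py.
import gradio as gr

def load_samples():
    # Hypothetical placeholder for the app's real load-and-index routine (not shown in this diff).
    return "Loaded and indexed the three sample papers."

with gr.Blocks(theme="Nymbo/Alyx_Theme") as demo:
    load_sample = gr.Button("Load and Index the following three papers as a RAG Demo")
    sample_description = gr.Markdown("""
# 1. Attention Is All You Need (Vaswani et al., 2017)
# 2. Generative Adversarial Nets (Goodfellow et al., 2014)
# 3. Parameter-Efficient Transfer Learning for NLP (Houlsby et al., 2019)

It could take several minutes to load and index the files.
""")
    status = gr.Textbox(label="Status")  # hypothetical output component for the demo
    load_sample.click(fn=load_samples, outputs=status)

if __name__ == "__main__":
    demo.launch()
```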