D0k-tor committed
Commit
6939269
1 Parent(s): 7b99df8

Update app.py

Files changed (1)
  1. app.py +5 -7
app.py CHANGED
@@ -59,16 +59,14 @@ with gr.Blocks() as demo:
         <h2 style="text-align: left; font-weight: 450; font-size: 1rem; margin-top: 2rem; margin-bottom: 1.5rem">
         In the field of large language models, the challenge of fine-tuning has long perplexed researchers. Microsoft, however, has unveiled an innovative solution called <b>Low-Rank Adaptation (LoRA)</b>. With the emergence of behemoth models like GPT-3 boasting billions of parameters, the cost of fine-tuning them for specific tasks or domains has become exorbitant.
         LoRA offers a groundbreaking approach by freezing the weights of pre-trained models and introducing trainable layers known as <b>rank-decomposition matrices in each transformer block</b>. This ingenious technique significantly reduces the number of trainable parameters and minimizes GPU memory requirements, as gradients no longer need to be computed for the majority of model weights.
+        <br>
+        <br>
+        You can find more info here: <a href="https://www.linkedin.com/pulse/fine-tuning-image-to-text-algorithms-with-lora-daniel-puente-viejo" target="_blank">Linkedin article</a>
         </h2>
-        <h3>
-        You can find more info here: [Linkedin article] (https://www.linkedin.com/pulse/fine-tuning-image-to-text-algorithms-withlora-daniel-puente-viejo)
-        </h3>
+
         </div>
         """)
     gr.inputs.Image(label="Upload any Image", type = 'pil', optional=True)
-    gr.outputs.Textbox(type="text",label="Captions")
-    # gr.Image(label="Upload any Image", type = 'pil', optional=True)
-    # gr.Textbox(type="text",label="Captions")
-
+    gr.outputs.Textbox(type="text",label="Captions")
 
 demo.launch(debug=True)
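
The Markdown block edited above describes LoRA's mechanism in prose: the pretrained weights are frozen, and small trainable rank-decomposition matrices are added in each transformer block. As a rough illustration of that idea (not code from this Space), a minimal PyTorch sketch might look like the following; the class name LoRALinear, the rank and alpha values, and the initialization scheme are illustrative assumptions:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical sketch: a linear layer with a frozen pretrained weight
    plus a trainable low-rank update B @ A, as described in the text above."""

    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        # Stand-in for the pretrained weight; in practice it is loaded from the
        # base model, then frozen so no gradients are computed for it.
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Trainable rank-decomposition matrices. B starts at zero so that
        # training begins exactly at the pretrained model's behavior.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen path plus scaled low-rank path; only lora_A and lora_B
        # receive gradients during fine-tuning.
        return x @ self.weight.T + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

For a 4096x4096 projection, this trains 2 * 8 * 4096 = 65,536 parameters instead of roughly 16.8 million, which is the reduction in trainable parameters and optimizer memory that the paragraph refers to.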