Update README.md
README.md
CHANGED
@@ -18,7 +18,7 @@ We apply **LoRA adaptation** on the **CLIP visual encoder** and add an **MLP head
 
 - *Dataset*: [KonIQ-10k](https://arxiv.org/pdf/1910.06180)
 - *Architecture*: CLIP Vision Encoder (ViT-L/14) with *LoRA adaptation*
-- *Loss Function*: Pearson correlation induced loss <img src="https://huggingface.co/PerceptCLIP/PerceptCLIP_IQA/resolve/main/loss_formula.png" width="
+- *Loss Function*: Pearson correlation induced loss <img src="https://huggingface.co/PerceptCLIP/PerceptCLIP_IQA/resolve/main/loss_formula.png" width="150" style="vertical-align: middle;" />
 - *Optimizer*: AdamW
 - *Learning Rate*: 5e-05
 - *Batch Size*: 32
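The *Pearson correlation induced loss* named in the list is shown only as an image (`loss_formula.png`) in the README, so its exact form is not reproduced here. A common formulation of such a loss is `1 - PLCC(pred, target)`, i.e. one minus the Pearson linear correlation between predicted and ground-truth quality scores; the sketch below assumes that form (the model's actual formula may differ, e.g. `(1 - PLCC) / 2`):

```python
import numpy as np

def pearson_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Assumed Pearson-correlation-induced loss: 1 - PLCC(pred, target).

    PLCC is 1 for perfectly correlated scores (loss -> 0) and -1 for
    perfectly anti-correlated scores (loss -> 2).
    """
    # Center both score vectors.
    pred_c = pred - pred.mean()
    target_c = target - target.mean()
    # Pearson linear correlation coefficient (small epsilon for stability).
    plcc = (pred_c * target_c).sum() / (
        np.linalg.norm(pred_c) * np.linalg.norm(target_c) + 1e-8
    )
    return 1.0 - plcc
```

Unlike a pointwise loss such as MSE, this objective only constrains the *ranking and linear relationship* of predicted scores within a batch, which matches how IQA models are evaluated (PLCC/SRCC against mean opinion scores).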