Update README.md
README.md (changed)
@@ -35,8 +35,7 @@ Our approach leverages *CLIP* as a prior for perceptual tasks, inspired by cogni
 ## Performance
 
 The model was trained on the *LaMem dataset* and exhibits *state-of-the-art generalization* to the *THINGS memorability dataset*.
-For more models and results on the
-
+For more models and results on the five common splits of LaMem, please refer to our paper. *We achieve state-of-the-art (SOTA) performance on the LaMem dataset based on both Spearman correlation and MSE metrics*.
 ## Usage
 
 To use the model for inference:
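The trailing context line above introduces the README's inference instructions, which fall outside this hunk. For orientation, a minimal sketch of what inference with a CLIP-prior memorability model typically looks like; the backbone choice, the single linear head, and the checkpoint filename `memorability_head.pt` are illustrative assumptions, not this repository's actual API:

```python
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

# Assumption: the model pairs a CLIP vision backbone with a learned linear
# head that maps the pooled image embedding to a scalar memorability score.
backbone = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
head = torch.nn.Linear(backbone.config.hidden_size, 1)
# head.load_state_dict(torch.load("memorability_head.pt"))  # hypothetical checkpoint

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    embedding = backbone(**inputs).pooler_output   # shape: (1, hidden_size)
    score = torch.sigmoid(head(embedding))         # squash to [0, 1], like LaMem scores

print(f"predicted memorability: {score.item():.3f}")
```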
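The added line reports Spearman correlation and MSE on LaMem. As a reference for how those two metrics are conventionally computed (the score arrays below are placeholders, not figures from the paper):

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder predicted and ground-truth memorability scores.
predicted = np.array([0.82, 0.64, 0.91, 0.47, 0.75])
ground_truth = np.array([0.80, 0.60, 0.88, 0.55, 0.70])

# Spearman rank correlation: monotonic agreement between the two rankings.
rho, p_value = spearmanr(predicted, ground_truth)

# Mean squared error: average squared deviation of the predicted scores.
mse = np.mean((predicted - ground_truth) ** 2)

print(f"Spearman rho: {rho:.3f}, MSE: {mse:.4f}")
```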