---
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
  - mteb
model-index:
  - name: clip-ViT-B-32
    results:
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_counterfactual
          name: MTEB AmazonCounterfactualClassification (en)
          config: en
          split: test
          revision: e8379541af4e31359cca9fbcf4b00f2671dba205
        metrics:
          - type: accuracy
            value: 57.999999999999986
          - type: ap
            value: 23.966099106216358
          - type: f1
            value: 52.8203944454417
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_polarity
          name: MTEB AmazonPolarityClassification
          config: default
          split: test
          revision: e2d317d38cd51312af73b3d32a06d1a08b442046
        metrics:
          - type: accuracy
            value: 62.366
          - type: ap
            value: 57.98090324593318
          - type: f1
            value: 61.62762218315074
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_reviews_multi
          name: MTEB AmazonReviewsClassification (en)
          config: en
          split: test
          revision: 1399c76144fd37290681b995c656ef9b2e06e26d
        metrics:
          - type: accuracy
            value: 28.584
          - type: f1
            value: 28.463306116150783
      - task:
          type: Retrieval
        dataset:
          type: arguana
          name: MTEB ArguAna
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 6.259
          - type: map_at_10
            value: 11.542
          - type: map_at_100
            value: 12.859000000000002
          - type: map_at_1000
            value: 12.966
          - type: map_at_3
            value: 9.128
          - type: map_at_5
            value: 10.262
          - type: mrr_at_1
            value: 6.259
          - type: mrr_at_10
            value: 11.536
          - type: mrr_at_100
            value: 12.859000000000002
          - type: mrr_at_1000
            value: 12.967
          - type: mrr_at_3
            value: 9.128
          - type: mrr_at_5
            value: 10.262
          - type: ndcg_at_1
            value: 6.259
          - type: ndcg_at_10
            value: 15.35
          - type: ndcg_at_100
            value: 22.107
          - type: ndcg_at_1000
            value: 25.355
          - type: ndcg_at_3
            value: 10.172
          - type: ndcg_at_5
            value: 12.22
          - type: precision_at_1
            value: 6.259
          - type: precision_at_10
            value: 2.795
          - type: precision_at_100
            value: 0.603
          - type: precision_at_1000
            value: 0.087
          - type: precision_at_3
            value: 4.41
          - type: precision_at_5
            value: 3.642
          - type: recall_at_1
            value: 6.259
          - type: recall_at_10
            value: 27.951999999999998
          - type: recall_at_100
            value: 60.313
          - type: recall_at_1000
            value: 86.771
          - type: recall_at_3
            value: 13.229
          - type: recall_at_5
            value: 18.208
      - task:
          type: Clustering
        dataset:
          type: mteb/arxiv-clustering-p2p
          name: MTEB ArxivClusteringP2P
          config: default
          split: test
          revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
        metrics:
          - type: v_measure
            value: 30.95753257205936
      - task:
          type: Clustering
        dataset:
          type: mteb/arxiv-clustering-s2s
          name: MTEB ArxivClusteringS2S
          config: default
          split: test
          revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
        metrics:
          - type: v_measure
            value: 26.586511396557583
      - task:
          type: Reranking
        dataset:
          type: mteb/askubuntudupquestions-reranking
          name: MTEB AskUbuntuDupQuestions
          config: default
          split: test
          revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
        metrics:
          - type: map
            value: 51.090393666506415
          - type: mrr
            value: 65.19412566503979
      - task:
          type: STS
        dataset:
          type: mteb/biosses-sts
          name: MTEB BIOSSES
          config: default
          split: test
          revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
        metrics:
          - type: cos_sim_pearson
            value: 69.9163188743249
          - type: cos_sim_spearman
            value: 64.1345938803495
          - type: euclidean_pearson
            value: 67.36703723549599
          - type: euclidean_spearman
            value: 63.067702100617005
          - type: manhattan_pearson
            value: 71.6901307580259
          - type: manhattan_spearman
            value: 67.04128661733944
      - task:
          type: Classification
        dataset:
          type: mteb/banking77
          name: MTEB Banking77Classification
          config: default
          split: test
          revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
        metrics:
          - type: accuracy
            value: 73.22402597402598
          - type: f1
            value: 73.12739303105114
      - task:
          type: Clustering
        dataset:
          type: mteb/biorxiv-clustering-p2p
          name: MTEB BiorxivClusteringP2P
          config: default
          split: test
          revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
        metrics:
          - type: v_measure
            value: 28.97385566120484
      - task:
          type: Clustering
        dataset:
          type: mteb/biorxiv-clustering-s2s
          name: MTEB BiorxivClusteringS2S
          config: default
          split: test
          revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
        metrics:
          - type: v_measure
            value: 27.08579813861177
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackAndroidRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 7.106999999999999
          - type: map_at_10
            value: 11.797
          - type: map_at_100
            value: 12.6
          - type: map_at_1000
            value: 12.711
          - type: map_at_3
            value: 10.369
          - type: map_at_5
            value: 10.881
          - type: mrr_at_1
            value: 9.299
          - type: mrr_at_10
            value: 15.076
          - type: mrr_at_100
            value: 15.842
          - type: mrr_at_1000
            value: 15.928
          - type: mrr_at_3
            value: 13.4
          - type: mrr_at_5
            value: 14.044
          - type: ndcg_at_1
            value: 9.299
          - type: ndcg_at_10
            value: 15.21
          - type: ndcg_at_100
            value: 19.374
          - type: ndcg_at_1000
            value: 22.527
          - type: ndcg_at_3
            value: 12.383
          - type: ndcg_at_5
            value: 13.096
          - type: precision_at_1
            value: 9.299
          - type: precision_at_10
            value: 3.1620000000000004
          - type: precision_at_100
            value: 0.662
          - type: precision_at_1000
            value: 0.11800000000000001
          - type: precision_at_3
            value: 6.3420000000000005
          - type: precision_at_5
            value: 4.492
          - type: recall_at_1
            value: 7.106999999999999
          - type: recall_at_10
            value: 22.544
          - type: recall_at_100
            value: 41.002
          - type: recall_at_1000
            value: 63.67699999999999
          - type: recall_at_3
            value: 14.316999999999998
          - type: recall_at_5
            value: 16.367
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackEnglishRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 6.632000000000001
          - type: map_at_10
            value: 9.067
          - type: map_at_100
            value: 9.487
          - type: map_at_1000
            value: 9.563
          - type: map_at_3
            value: 8.344999999999999
          - type: map_at_5
            value: 8.742999999999999
          - type: mrr_at_1
            value: 8.599
          - type: mrr_at_10
            value: 11.332
          - type: mrr_at_100
            value: 11.77
          - type: mrr_at_1000
            value: 11.843
          - type: mrr_at_3
            value: 10.478
          - type: mrr_at_5
            value: 10.959000000000001
          - type: ndcg_at_1
            value: 8.599
          - type: ndcg_at_10
            value: 10.843
          - type: ndcg_at_100
            value: 13.023000000000001
          - type: ndcg_at_1000
            value: 15.409
          - type: ndcg_at_3
            value: 9.673
          - type: ndcg_at_5
            value: 10.188
          - type: precision_at_1
            value: 8.599
          - type: precision_at_10
            value: 2.038
          - type: precision_at_100
            value: 0.383
          - type: precision_at_1000
            value: 0.074
          - type: precision_at_3
            value: 4.756
          - type: precision_at_5
            value: 3.3890000000000002
          - type: recall_at_1
            value: 6.632000000000001
          - type: recall_at_10
            value: 13.952
          - type: recall_at_100
            value: 23.966
          - type: recall_at_1000
            value: 41.411
          - type: recall_at_3
            value: 10.224
          - type: recall_at_5
            value: 11.799
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackGamingRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 11.153
          - type: map_at_10
            value: 15.751000000000001
          - type: map_at_100
            value: 16.464000000000002
          - type: map_at_1000
            value: 16.561
          - type: map_at_3
            value: 14.552000000000001
          - type: map_at_5
            value: 15.136
          - type: mrr_at_1
            value: 13.041
          - type: mrr_at_10
            value: 17.777
          - type: mrr_at_100
            value: 18.427
          - type: mrr_at_1000
            value: 18.504
          - type: mrr_at_3
            value: 16.479
          - type: mrr_at_5
            value: 17.175
          - type: ndcg_at_1
            value: 13.041
          - type: ndcg_at_10
            value: 18.581
          - type: ndcg_at_100
            value: 22.174
          - type: ndcg_at_1000
            value: 24.795
          - type: ndcg_at_3
            value: 16.185
          - type: ndcg_at_5
            value: 17.183
          - type: precision_at_1
            value: 13.041
          - type: precision_at_10
            value: 3.2230000000000003
          - type: precision_at_100
            value: 0.557
          - type: precision_at_1000
            value: 0.086
          - type: precision_at_3
            value: 7.544
          - type: precision_at_5
            value: 5.279
          - type: recall_at_1
            value: 11.153
          - type: recall_at_10
            value: 25.052999999999997
          - type: recall_at_100
            value: 41.521
          - type: recall_at_1000
            value: 61.138000000000005
          - type: recall_at_3
            value: 18.673000000000002
          - type: recall_at_5
            value: 20.964
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackGisRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 5.303
          - type: map_at_10
            value: 7.649
          - type: map_at_100
            value: 7.983
          - type: map_at_1000
            value: 8.067
          - type: map_at_3
            value: 6.938
          - type: map_at_5
            value: 7.259
          - type: mrr_at_1
            value: 5.763
          - type: mrr_at_10
            value: 8.277
          - type: mrr_at_100
            value: 8.665000000000001
          - type: mrr_at_1000
            value: 8.747
          - type: mrr_at_3
            value: 7.457999999999999
          - type: mrr_at_5
            value: 7.808
          - type: ndcg_at_1
            value: 5.763
          - type: ndcg_at_10
            value: 9.1
          - type: ndcg_at_100
            value: 11.253
          - type: ndcg_at_1000
            value: 13.847999999999999
          - type: ndcg_at_3
            value: 7.521999999999999
          - type: ndcg_at_5
            value: 8.094
          - type: precision_at_1
            value: 5.763
          - type: precision_at_10
            value: 1.514
          - type: precision_at_100
            value: 0.28700000000000003
          - type: precision_at_1000
            value: 0.054
          - type: precision_at_3
            value: 3.277
          - type: precision_at_5
            value: 2.282
          - type: recall_at_1
            value: 5.303
          - type: recall_at_10
            value: 13.126
          - type: recall_at_100
            value: 23.855
          - type: recall_at_1000
            value: 44.417
          - type: recall_at_3
            value: 8.556
          - type: recall_at_5
            value: 10.006
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackMathematicaRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 2.153
          - type: map_at_10
            value: 3.447
          - type: map_at_100
            value: 3.73
          - type: map_at_1000
            value: 3.8219999999999996
          - type: map_at_3
            value: 3.0269999999999997
          - type: map_at_5
            value: 3.283
          - type: mrr_at_1
            value: 2.612
          - type: mrr_at_10
            value: 4.289
          - type: mrr_at_100
            value: 4.6080000000000005
          - type: mrr_at_1000
            value: 4.713
          - type: mrr_at_3
            value: 3.669
          - type: mrr_at_5
            value: 4.005
          - type: ndcg_at_1
            value: 2.612
          - type: ndcg_at_10
            value: 4.422000000000001
          - type: ndcg_at_100
            value: 6.15
          - type: ndcg_at_1000
            value: 9.25
          - type: ndcg_at_3
            value: 3.486
          - type: ndcg_at_5
            value: 3.95
          - type: precision_at_1
            value: 2.612
          - type: precision_at_10
            value: 0.8829999999999999
          - type: precision_at_100
            value: 0.211
          - type: precision_at_1000
            value: 0.059000000000000004
          - type: precision_at_3
            value: 1.6580000000000001
          - type: precision_at_5
            value: 1.294
          - type: recall_at_1
            value: 2.153
          - type: recall_at_10
            value: 6.607
          - type: recall_at_100
            value: 14.707
          - type: recall_at_1000
            value: 37.99
          - type: recall_at_3
            value: 4.122
          - type: recall_at_5
            value: 5.241
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackPhysicsRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 7.976999999999999
          - type: map_at_10
            value: 11.745
          - type: map_at_100
            value: 12.427000000000001
          - type: map_at_1000
            value: 12.528
          - type: map_at_3
            value: 10.478
          - type: map_at_5
            value: 11.224
          - type: mrr_at_1
            value: 9.432
          - type: mrr_at_10
            value: 14.021
          - type: mrr_at_100
            value: 14.734
          - type: mrr_at_1000
            value: 14.813
          - type: mrr_at_3
            value: 12.576
          - type: mrr_at_5
            value: 13.414000000000001
          - type: ndcg_at_1
            value: 9.432
          - type: ndcg_at_10
            value: 14.341000000000001
          - type: ndcg_at_100
            value: 18.168
          - type: ndcg_at_1000
            value: 21.129
          - type: ndcg_at_3
            value: 11.909
          - type: ndcg_at_5
            value: 13.139999999999999
          - type: precision_at_1
            value: 9.432
          - type: precision_at_10
            value: 2.6759999999999997
          - type: precision_at_100
            value: 0.563
          - type: precision_at_1000
            value: 0.098
          - type: precision_at_3
            value: 5.679
          - type: precision_at_5
            value: 4.216
          - type: recall_at_1
            value: 7.976999999999999
          - type: recall_at_10
            value: 19.983999999999998
          - type: recall_at_100
            value: 37.181
          - type: recall_at_1000
            value: 58.714999999999996
          - type: recall_at_3
            value: 13.375
          - type: recall_at_5
            value: 16.54
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackProgrammersRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 5.682
          - type: map_at_10
            value: 7.817
          - type: map_at_100
            value: 8.3
          - type: map_at_1000
            value: 8.378
          - type: map_at_3
            value: 7.13
          - type: map_at_5
            value: 7.467
          - type: mrr_at_1
            value: 6.848999999999999
          - type: mrr_at_10
            value: 9.687999999999999
          - type: mrr_at_100
            value: 10.208
          - type: mrr_at_1000
            value: 10.281
          - type: mrr_at_3
            value: 8.770999999999999
          - type: mrr_at_5
            value: 9.256
          - type: ndcg_at_1
            value: 6.848999999999999
          - type: ndcg_at_10
            value: 9.519
          - type: ndcg_at_100
            value: 12.303
          - type: ndcg_at_1000
            value: 15.004999999999999
          - type: ndcg_at_3
            value: 8.077
          - type: ndcg_at_5
            value: 8.656
          - type: precision_at_1
            value: 6.848999999999999
          - type: precision_at_10
            value: 1.735
          - type: precision_at_100
            value: 0.363
          - type: precision_at_1000
            value: 0.073
          - type: precision_at_3
            value: 3.7289999999999996
          - type: precision_at_5
            value: 2.717
          - type: recall_at_1
            value: 5.682
          - type: recall_at_10
            value: 13.001
          - type: recall_at_100
            value: 25.916
          - type: recall_at_1000
            value: 46.303
          - type: recall_at_3
            value: 8.949
          - type: recall_at_5
            value: 10.413
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 5.441
          - type: map_at_10
            value: 7.997500000000002
          - type: map_at_100
            value: 8.47225
          - type: map_at_1000
            value: 8.557083333333333
          - type: map_at_3
            value: 7.17025
          - type: map_at_5
            value: 7.597833333333333
          - type: mrr_at_1
            value: 6.6329166666666675
          - type: mrr_at_10
            value: 9.596583333333333
          - type: mrr_at_100
            value: 10.094416666666667
          - type: mrr_at_1000
            value: 10.171583333333334
          - type: mrr_at_3
            value: 8.628416666666666
          - type: mrr_at_5
            value: 9.143416666666667
          - type: ndcg_at_1
            value: 6.6329166666666675
          - type: ndcg_at_10
            value: 9.81258333333333
          - type: ndcg_at_100
            value: 12.459416666666666
          - type: ndcg_at_1000
            value: 15.099416666666668
          - type: ndcg_at_3
            value: 8.177499999999998
          - type: ndcg_at_5
            value: 8.8765
          - type: precision_at_1
            value: 6.6329166666666675
          - type: precision_at_10
            value: 1.8355833333333336
          - type: precision_at_100
            value: 0.38033333333333336
          - type: precision_at_1000
            value: 0.07358333333333333
          - type: precision_at_3
            value: 3.912583333333333
          - type: precision_at_5
            value: 2.8570833333333336
          - type: recall_at_1
            value: 5.441
          - type: recall_at_10
            value: 13.79075
          - type: recall_at_100
            value: 26.12841666666667
          - type: recall_at_1000
            value: 46.1115
          - type: recall_at_3
            value: 9.212416666666666
          - type: recall_at_5
            value: 11.006499999999999
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackStatsRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 4.973000000000001
          - type: map_at_10
            value: 6.583
          - type: map_at_100
            value: 7.013999999999999
          - type: map_at_1000
            value: 7.084
          - type: map_at_3
            value: 5.987
          - type: map_at_5
            value: 6.283999999999999
          - type: mrr_at_1
            value: 6.135
          - type: mrr_at_10
            value: 7.911
          - type: mrr_at_100
            value: 8.381
          - type: mrr_at_1000
            value: 8.451
          - type: mrr_at_3
            value: 7.234
          - type: mrr_at_5
            value: 7.595000000000001
          - type: ndcg_at_1
            value: 6.135
          - type: ndcg_at_10
            value: 7.8420000000000005
          - type: ndcg_at_100
            value: 10.335999999999999
          - type: ndcg_at_1000
            value: 12.742999999999999
          - type: ndcg_at_3
            value: 6.622
          - type: ndcg_at_5
            value: 7.156
          - type: precision_at_1
            value: 6.135
          - type: precision_at_10
            value: 1.3339999999999999
          - type: precision_at_100
            value: 0.293
          - type: precision_at_1000
            value: 0.053
          - type: precision_at_3
            value: 2.965
          - type: precision_at_5
            value: 2.086
          - type: recall_at_1
            value: 4.973000000000001
          - type: recall_at_10
            value: 10.497
          - type: recall_at_100
            value: 22.389
          - type: recall_at_1000
            value: 41.751
          - type: recall_at_3
            value: 7.248
          - type: recall_at_5
            value: 8.526
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackTexRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 2.541
          - type: map_at_10
            value: 4.168
          - type: map_at_100
            value: 4.492
          - type: map_at_1000
            value: 4.553
          - type: map_at_3
            value: 3.62
          - type: map_at_5
            value: 3.927
          - type: mrr_at_1
            value: 3.131
          - type: mrr_at_10
            value: 5.037
          - type: mrr_at_100
            value: 5.428
          - type: mrr_at_1000
            value: 5.487
          - type: mrr_at_3
            value: 4.422000000000001
          - type: mrr_at_5
            value: 4.752
          - type: ndcg_at_1
            value: 3.131
          - type: ndcg_at_10
            value: 5.315
          - type: ndcg_at_100
            value: 7.207
          - type: ndcg_at_1000
            value: 9.271
          - type: ndcg_at_3
            value: 4.244
          - type: ndcg_at_5
            value: 4.742
          - type: precision_at_1
            value: 3.131
          - type: precision_at_10
            value: 1.0699999999999998
          - type: precision_at_100
            value: 0.247
          - type: precision_at_1000
            value: 0.053
          - type: precision_at_3
            value: 2.1340000000000003
          - type: precision_at_5
            value: 1.624
          - type: recall_at_1
            value: 2.541
          - type: recall_at_10
            value: 7.8740000000000006
          - type: recall_at_100
            value: 16.896
          - type: recall_at_1000
            value: 32.423
          - type: recall_at_3
            value: 4.925
          - type: recall_at_5
            value: 6.181
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackUnixRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 5.58
          - type: map_at_10
            value: 7.758
          - type: map_at_100
            value: 8.168000000000001
          - type: map_at_1000
            value: 8.239
          - type: map_at_3
            value: 6.895999999999999
          - type: map_at_5
            value: 7.412000000000001
          - type: mrr_at_1
            value: 6.81
          - type: mrr_at_10
            value: 9.295
          - type: mrr_at_100
            value: 9.763
          - type: mrr_at_1000
            value: 9.835
          - type: mrr_at_3
            value: 8.427
          - type: mrr_at_5
            value: 8.958
          - type: ndcg_at_1
            value: 6.81
          - type: ndcg_at_10
            value: 9.436
          - type: ndcg_at_100
            value: 11.955
          - type: ndcg_at_1000
            value: 14.387
          - type: ndcg_at_3
            value: 7.7410000000000005
          - type: ndcg_at_5
            value: 8.622
          - type: precision_at_1
            value: 6.81
          - type: precision_at_10
            value: 1.6230000000000002
          - type: precision_at_100
            value: 0.335
          - type: precision_at_1000
            value: 0.062
          - type: precision_at_3
            value: 3.576
          - type: precision_at_5
            value: 2.6870000000000003
          - type: recall_at_1
            value: 5.58
          - type: recall_at_10
            value: 13.232
          - type: recall_at_100
            value: 25.233
          - type: recall_at_1000
            value: 43.864999999999995
          - type: recall_at_3
            value: 8.549
          - type: recall_at_5
            value: 10.799
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackWebmastersRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 3.8739999999999997
          - type: map_at_10
            value: 6.491
          - type: map_at_100
            value: 7.065
          - type: map_at_1000
            value: 7.185
          - type: map_at_3
            value: 5.568
          - type: map_at_5
            value: 6.1080000000000005
          - type: mrr_at_1
            value: 5.335999999999999
          - type: mrr_at_10
            value: 8.288
          - type: mrr_at_100
            value: 8.886
          - type: mrr_at_1000
            value: 8.976
          - type: mrr_at_3
            value: 7.115
          - type: mrr_at_5
            value: 7.846
          - type: ndcg_at_1
            value: 5.335999999999999
          - type: ndcg_at_10
            value: 8.463
          - type: ndcg_at_100
            value: 11.456
          - type: ndcg_at_1000
            value: 14.662
          - type: ndcg_at_3
            value: 6.7589999999999995
          - type: ndcg_at_5
            value: 7.5969999999999995
          - type: precision_at_1
            value: 5.335999999999999
          - type: precision_at_10
            value: 1.9369999999999998
          - type: precision_at_100
            value: 0.498
          - type: precision_at_1000
            value: 0.116
          - type: precision_at_3
            value: 3.689
          - type: precision_at_5
            value: 2.9250000000000003
          - type: recall_at_1
            value: 3.8739999999999997
          - type: recall_at_10
            value: 12.281
          - type: recall_at_100
            value: 26.368000000000002
          - type: recall_at_1000
            value: 50.422
          - type: recall_at_3
            value: 7.353
          - type: recall_at_5
            value: 9.66
      - task:
          type: Retrieval
        dataset:
          type: BeIR/cqadupstack
          name: MTEB CQADupstackWordpressRetrieval
          config: default
          split: test
          revision: None
        metrics:
          - type: map_at_1
            value: 2.317
          - type: map_at_10
            value: 3.697
          - type: map_at_100
            value: 3.9370000000000003
          - type: map_at_1000
            value: 3.994
          - type: map_at_3
            value: 3.1329999999999996
          - type: map_at_5
            value: 3.45
          - type: mrr_at_1
            value: 2.588
          - type: mrr_at_10
            value: 4.168
          - type: mrr_at_100
            value: 4.421
          - type: mrr_at_1000
            value: 4.481
          - type: mrr_at_3
            value: 3.512
          - type: mrr_at_5
            value: 3.909
          - type: ndcg_at_1
            value: 2.588
          - type: ndcg_at_10
            value: 4.679
          - type: ndcg_at_100
            value: 6.114
          - type: ndcg_at_1000
            value: 8.167
          - type: ndcg_at_3
            value: 3.5290000000000004
          - type: ndcg_at_5
            value: 4.093999999999999
          - type: precision_at_1
            value: 2.588
          - type: precision_at_10
            value: 0.832
          - type: precision_at_100
            value: 0.165
          - type: precision_at_1000
            value: 0.037
          - type: precision_at_3
            value: 1.6019999999999999
          - type: precision_at_5
            value: 1.294
          - type: recall_at_1
            value: 2.317
          - type: recall_at_10
            value: 7.338
          - type: recall_at_100
            value: 14.507
          - type: recall_at_1000
            value: 31.226
          - type: recall_at_3
            value: 4.258
          - type: recall_at_5
            value: 5.582
      - task:
          type: Classification
        dataset:
          type: mteb/emotion
          name: MTEB EmotionClassification
          config: default
          split: test
          revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
        metrics:
          - type: accuracy
            value: 33.535
          - type: f1
            value: 29.64261331714107
      - task:
          type: Classification
        dataset:
          type: mteb/imdb
          name: MTEB ImdbClassification
          config: default
          split: test
          revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
        metrics:
          - type: accuracy
            value: 57.03359999999999
          - type: ap
            value: 54.289515246345985
          - type: f1
            value: 56.404319444675686
      - task:
          type: Classification
        dataset:
          type: mteb/mtop_domain
          name: MTEB MTOPDomainClassification (en)
          config: en
          split: test
          revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
        metrics:
          - type: accuracy
            value: 86.70770633834928
          - type: f1
            value: 86.3521440956975
      - task:
          type: Classification
        dataset:
          type: mteb/mtop_intent
          name: MTEB MTOPIntentClassification (en)
          config: en
          split: test
          revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
        metrics:
          - type: accuracy
            value: 62.35750113999089
          - type: f1
            value: 41.01929492285308
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_massive_intent
          name: MTEB MassiveIntentClassification (en)
          config: en
          split: test
          revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
        metrics:
          - type: accuracy
            value: 63.34902488231339
          - type: f1
            value: 59.90320313789715
      - task:
          type: Classification
        dataset:
          type: mteb/amazon_massive_scenario
          name: MTEB MassiveScenarioClassification (en)
          config: en
          split: test
          revision: 7d571f92784cd94a019292a1f45445077d0ef634
        metrics:
          - type: accuracy
            value: 72.51513113651649
          - type: f1
            value: 72.02695487206958
      - task:
          type: Clustering
        dataset:
          type: mteb/medrxiv-clustering-p2p
          name: MTEB MedrxivClusteringP2P
          config: default
          split: test
          revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
        metrics:
          - type: v_measure
            value: 27.274796122083107
      - task:
          type: Clustering
        dataset:
          type: mteb/medrxiv-clustering-s2s
          name: MTEB MedrxivClusteringS2S
          config: default
          split: test
          revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
        metrics:
          - type: v_measure
            value: 26.79725352760558
      - task:
          type: Reranking
        dataset:
          type: mteb/mind_small
          name: MTEB MindSmallReranking
          config: default
          split: test
          revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
        metrics:
          - type: map
            value: 26.13036834909186
          - type: mrr
            value: 26.44693141383913
      - task:
          type: Clustering
        dataset:
          type: mteb/reddit-clustering
          name: MTEB RedditClustering
          config: default
          split: test
          revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
        metrics:
          - type: v_measure
            value: 42.20822777687787
      - task:
          type: Clustering
        dataset:
          type: mteb/reddit-clustering-p2p
          name: MTEB RedditClusteringP2P
          config: default
          split: test
          revision: 282350215ef01743dc01b456c7f5241fa8937f16
        metrics:
          - type: v_measure
            value: 50.46829369249206
      - task:
          type: STS
        dataset:
          type: mteb/sickr-sts
          name: MTEB SICK-R
          config: default
          split: test
          revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
        metrics:
          - type: cos_sim_pearson
            value: 68.56822847816088
          - type: cos_sim_spearman
            value: 67.89762106712074
          - type: euclidean_pearson
            value: 72.85990051290051
          - type: euclidean_spearman
            value: 70.57485701927138
          - type: manhattan_pearson
            value: 75.55042864114424
          - type: manhattan_spearman
            value: 71.93915751894929
      - task:
          type: STS
        dataset:
          type: mteb/sts12-sts
          name: MTEB STS12
          config: default
          split: test
          revision: a0d554a64d88156834ff5ae9920b964011b16384
        metrics:
          - type: cos_sim_pearson
            value: 75.78267692127127
          - type: cos_sim_spearman
            value: 72.29619737860627
          - type: euclidean_pearson
            value: 70.1450545025718
          - type: euclidean_spearman
            value: 67.45917489688871
          - type: manhattan_pearson
            value: 71.38506807589515
          - type: manhattan_spearman
            value: 67.2756870294321
      - task:
          type: STS
        dataset:
          type: mteb/sts13-sts
          name: MTEB STS13
          config: default
          split: test
          revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
        metrics:
          - type: cos_sim_pearson
            value: 58.319097196559234
          - type: cos_sim_spearman
            value: 64.92943196450905
          - type: euclidean_pearson
            value: 66.58719740666398
          - type: euclidean_spearman
            value: 67.53564380155727
          - type: manhattan_pearson
            value: 68.40736205376945
          - type: manhattan_spearman
            value: 68.83617823881784
      - task:
          type: STS
        dataset:
          type: mteb/sts14-sts
          name: MTEB STS14
          config: default
          split: test
          revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
        metrics:
          - type: cos_sim_pearson
            value: 53.31328133696752
          - type: cos_sim_spearman
            value: 54.95348091071938
          - type: euclidean_pearson
            value: 62.387046499499476
          - type: euclidean_spearman
            value: 61.1353898211832
          - type: manhattan_pearson
            value: 65.6417443455959
          - type: manhattan_spearman
            value: 63.242670107784384
      - task:
          type: STS
        dataset:
          type: mteb/sts15-sts
          name: MTEB STS15
          config: default
          split: test
          revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
        metrics:
          - type: cos_sim_pearson
            value: 60.528757851414014
          - type: cos_sim_spearman
            value: 64.23576213334218
          - type: euclidean_pearson
            value: 72.97957845156205
          - type: euclidean_spearman
            value: 73.65719038687413
          - type: manhattan_pearson
            value: 74.78225875672878
          - type: manhattan_spearman
            value: 75.49116886100272
      - task:
          type: STS
        dataset:
          type: mteb/sts16-sts
          name: MTEB STS16
          config: default
          split: test
          revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
        metrics:
          - type: cos_sim_pearson
            value: 61.987373739107696
          - type: cos_sim_spearman
            value: 70.192277875975
          - type: euclidean_pearson
            value: 72.63709361494375
          - type: euclidean_spearman
            value: 73.11242796462018
          - type: manhattan_pearson
            value: 73.72926634930128
          - type: manhattan_spearman
            value: 73.98477033865957
      - task:
          type: STS
        dataset:
          type: mteb/sts17-crosslingual-sts
          name: MTEB STS17 (en-en)
          config: en-en
          split: test
          revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
        metrics:
          - type: cos_sim_pearson
            value: 70.04069064325459
          - type: cos_sim_spearman
            value: 74.38400000348688
          - type: euclidean_pearson
            value: 82.08401389635375
          - type: euclidean_spearman
            value: 81.95480539585296
          - type: manhattan_pearson
            value: 84.99052315893229
          - type: manhattan_spearman
            value: 84.66072647748268
      - task:
          type: STS
        dataset:
          type: mteb/sts22-crosslingual-sts
          name: MTEB STS22 (en)
          config: en
          split: test
          revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
        metrics:
          - type: cos_sim_pearson
            value: 48.154362861306986
          - type: cos_sim_spearman
            value: 48.58749841932341
          - type: euclidean_pearson
            value: 50.41642902043279
          - type: euclidean_spearman
            value: 51.371094727414935
          - type: manhattan_pearson
            value: 53.06081362594791
          - type: manhattan_spearman
            value: 52.92177971301313
      - task:
          type: STS
        dataset:
          type: mteb/stsbenchmark-sts
          name: MTEB STSBenchmark
          config: default
          split: test
          revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
        metrics:
          - type: cos_sim_pearson
            value: 59.11445268439452
          - type: cos_sim_spearman
            value: 61.46376153396639
          - type: euclidean_pearson
            value: 70.4367704900615
          - type: euclidean_spearman
            value: 69.71716383694748
          - type: manhattan_pearson
            value: 72.72973072359753
          - type: manhattan_spearman
            value: 71.48785771698903
      - task:
          type: Reranking
        dataset:
          type: mteb/scidocs-reranking
          name: MTEB SciDocsRR
          config: default
          split: test
          revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
        metrics:
          - type: map
            value: 69.56970649232905
          - type: mrr
            value: 89.47439089595952
      - task:
          type: PairClassification
        dataset:
          type: mteb/sprintduplicatequestions-pairclassification
          name: MTEB SprintDuplicateQuestions
          config: default
          split: test
          revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
        metrics:
          - type: cos_sim_accuracy
            value: 99.6009900990099
          - type: cos_sim_ap
            value: 85.25193332879603
          - type: cos_sim_f1
            value: 78.88563049853373
          - type: cos_sim_precision
            value: 77.151051625239
          - type: cos_sim_recall
            value: 80.7
          - type: dot_accuracy
            value: 99.01287128712872
          - type: dot_ap
            value: 7.20643686800152
          - type: dot_f1
            value: 14.143920595533496
          - type: dot_precision
            value: 9.405940594059405
          - type: dot_recall
            value: 28.499999999999996
          - type: euclidean_accuracy
            value: 99.590099009901
          - type: euclidean_ap
            value: 83.37987878104964
          - type: euclidean_f1
            value: 78.22990844354018
          - type: euclidean_precision
            value: 79.60662525879917
          - type: euclidean_recall
            value: 76.9
          - type: manhattan_accuracy
            value: 99.609900990099
          - type: manhattan_ap
            value: 85.6481020725528
          - type: manhattan_f1
            value: 79.23790913531998
          - type: manhattan_precision
            value: 77.45940783190068
          - type: manhattan_recall
            value: 81.10000000000001
          - type: max_accuracy
            value: 99.609900990099
          - type: max_ap
            value: 85.6481020725528
          - type: max_f1
            value: 79.23790913531998
      - task:
          type: Clustering
        dataset:
          type: mteb/stackexchange-clustering
          name: MTEB StackExchangeClustering
          config: default
          split: test
          revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
        metrics:
          - type: v_measure
            value: 51.49824324480644
      - task:
          type: Clustering
        dataset:
          type: mteb/stackexchange-clustering-p2p
          name: MTEB StackExchangeClusteringP2P
          config: default
          split: test
          revision: 815ca46b2622cec33ccafc3735d572c266efdb44
        metrics:
          - type: v_measure
            value: 29.27365407025942
      - task:
          type: Reranking
        dataset:
          type: mteb/stackoverflowdupquestions-reranking
          name: MTEB StackOverflowDupQuestions
          config: default
          split: test
          revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
        metrics:
          - type: map
            value: 37.62142967031895
          - type: mrr
            value: 37.80931690858162
      - task:
          type: Summarization
        dataset:
          type: mteb/summeval
          name: MTEB SummEval
          config: default
          split: test
          revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
        metrics:
          - type: cos_sim_pearson
            value: 27.279280594311935
          - type: cos_sim_spearman
            value: 28.055012324260563
          - type: dot_pearson
            value: 19.315154386546453
          - type: dot_spearman
            value: 19.17304603866006
      - task:
          type: Classification
        dataset:
          type: mteb/toxic_conversations_50k
          name: MTEB ToxicConversationsClassification
          config: default
          split: test
          revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
        metrics:
          - type: accuracy
            value: 63.2888
          - type: ap
            value: 11.062527367094436
          - type: f1
            value: 48.6893658037416
      - task:
          type: Classification
        dataset:
          type: mteb/tweet_sentiment_extraction
          name: MTEB TweetSentimentExtractionClassification
          config: default
          split: test
          revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
        metrics:
          - type: accuracy
            value: 49.275608375778155
          - type: f1
            value: 49.487704374827324
      - task:
          type: Clustering
        dataset:
          type: mteb/twentynewsgroups-clustering
          name: MTEB TwentyNewsgroupsClustering
          config: default
          split: test
          revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
        metrics:
          - type: v_measure
            value: 37.31132794113957
      - task:
          type: PairClassification
        dataset:
          type: mteb/twittersemeval2015-pairclassification
          name: MTEB TwitterSemEval2015
          config: default
          split: test
          revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
        metrics:
          - type: cos_sim_accuracy
            value: 80.008344757704
          - type: cos_sim_ap
            value: 50.955726976655036
          - type: cos_sim_f1
            value: 49.800796812749006
          - type: cos_sim_precision
            value: 42.8898208158597
          - type: cos_sim_recall
            value: 59.36675461741425
          - type: dot_accuracy
            value: 77.42743041068128
          - type: dot_ap
            value: 19.216239898966027
          - type: dot_f1
            value: 36.95323548056761
          - type: dot_precision
            value: 22.665550038882575
          - type: dot_recall
            value: 99.9736147757256
          - type: euclidean_accuracy
            value: 81.12296596530965
          - type: euclidean_ap
            value: 55.99371814327642
          - type: euclidean_f1
            value: 54.55376528396755
          - type: euclidean_precision
            value: 48.11529933481153
          - type: euclidean_recall
            value: 62.98153034300792
          - type: manhattan_accuracy
            value: 81.3673481552125
          - type: manhattan_ap
            value: 57.126538198748456
          - type: manhattan_f1
            value: 55.38567651454189
          - type: manhattan_precision
            value: 49.073130983907106
          - type: manhattan_recall
            value: 63.562005277044854
          - type: max_accuracy
            value: 81.3673481552125
          - type: max_ap
            value: 57.126538198748456
          - type: max_f1
            value: 55.38567651454189
      - task:
          type: PairClassification
        dataset:
          type: mteb/twitterurlcorpus-pairclassification
          name: MTEB TwitterURLCorpus
          config: default
          split: test
          revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
        metrics:
          - type: cos_sim_accuracy
            value: 84.02607986960065
          - type: cos_sim_ap
            value: 74.07757228336027
          - type: cos_sim_f1
            value: 66.0694778239021
          - type: cos_sim_precision
            value: 62.67790520934089
          - type: cos_sim_recall
            value: 69.84909146904835
          - type: dot_accuracy
            value: 74.79722125198897
          - type: dot_ap
            value: 25.478024888904727
          - type: dot_f1
            value: 40.76642277589147
          - type: dot_precision
            value: 25.705095989546688
          - type: dot_recall
            value: 98.45241761626117
          - type: euclidean_accuracy
            value: 85.51053673303062
          - type: euclidean_ap
            value: 78.24178926488659
          - type: euclidean_f1
            value: 70.50944224857267
          - type: euclidean_precision
            value: 67.19447544642857
          - type: euclidean_recall
            value: 74.16846319679703
          - type: manhattan_accuracy
            value: 85.72398804672643
          - type: manhattan_ap
            value: 78.90411073933831
          - type: manhattan_f1
            value: 70.90586145648314
          - type: manhattan_precision
            value: 65.8224508640021
          - type: manhattan_recall
            value: 76.84016014782877
          - type: max_accuracy
            value: 85.72398804672643
          - type: max_ap
            value: 78.90411073933831
          - type: max_f1
            value: 70.90586145648314
---

# clip-ViT-B-32

This is the Image & Text model CLIP, which maps text and images to a shared vector space. For applications of the model, have a look at our documentation: SBERT.net - Image Search.

## Usage

After installing sentence-transformers (`pip install sentence-transformers`), using this model is easy:

```python
from sentence_transformers import SentenceTransformer, util
from PIL import Image

# Load the CLIP model
model = SentenceTransformer('clip-ViT-B-32')

# Encode an image
img_emb = model.encode(Image.open('two_dogs_in_snow.jpg'))

# Encode text descriptions
text_emb = model.encode(['Two dogs in the snow', 'A cat on a table', 'A picture of London at night'])

# Compute cosine similarities
cos_scores = util.cos_sim(img_emb, text_emb)
print(cos_scores)
```

See our SBERT.net - Image Search documentation for more examples of how the model can be used for image search, zero-shot image classification, image clustering, and image de-duplication.
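
For example, here is a minimal image-search sketch. The image filenames and the query are placeholders: the whole collection is encoded once, and a free-text query is then matched against it by cosine similarity.

```python
from sentence_transformers import SentenceTransformer, util
from PIL import Image

model = SentenceTransformer('clip-ViT-B-32')

# Placeholder image files; replace with your own collection
image_paths = ['two_dogs_in_snow.jpg', 'cat_on_table.jpg', 'london_at_night.jpg']
img_embs = model.encode([Image.open(p) for p in image_paths], convert_to_tensor=True)

# Retrieve the images that best match a free-text query
query_emb = model.encode('dogs playing in the snow', convert_to_tensor=True)
hits = util.semantic_search(query_emb, img_embs, top_k=3)[0]

for hit in hits:
    print(image_paths[hit['corpus_id']], round(hit['score'], 3))
```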

## Performance

The following table shows the zero-shot ImageNet validation set accuracy (a simplified sketch of the zero-shot classification procedure follows the table):

| Model | Top 1 Performance |
| --- | --- |
| clip-ViT-B-32 | 63.3 |
| clip-ViT-B-16 | 68.1 |
| clip-ViT-L-14 | 75.4 |
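
As a rough, simplified sketch of how such zero-shot accuracy is obtained (the class names and prompt template below are illustrative assumptions, not the exact evaluation setup): each class name is encoded as a text prompt, and every validation image is assigned to the class whose prompt embedding has the highest cosine similarity to the image embedding.

```python
from sentence_transformers import SentenceTransformer, util
from PIL import Image

model = SentenceTransformer('clip-ViT-B-32')

# Illustrative class names and prompt template; the real evaluation uses the
# full ImageNet class list and tuned prompt templates
class_names = ['golden retriever', 'tabby cat', 'goldfish']
prompt_embs = model.encode([f'a photo of a {c}' for c in class_names], convert_to_tensor=True)

def predict(image_path):
    # Assign the image to the class with the most similar prompt embedding
    img_emb = model.encode(Image.open(image_path), convert_to_tensor=True)
    scores = util.cos_sim(img_emb, prompt_embs)[0]
    return class_names[int(scores.argmax())]

# Top-1 accuracy is then the fraction of validation images for which predict()
# returns the ground-truth class.
```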

For a multilingual version of the CLIP model supporting 50+ languages, have a look at clip-ViT-B-32-multilingual-v1.