Tags: Feature Extraction · Transformers · Safetensors · ModularStarEncoder · custom_code
andreagurioli1995 committed
Commit 99ea1f2 · verified · 1 Parent(s): d3b8425

Update README.md

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -79,5 +79,19 @@ The pre-training and fine-tuning were conducted on 512 NVIDIA Ampere (64GB) GPUs
 |Loss function |CLIP loss |
 |Multi-layer loss | yes |
 
+### Evaluation
+
+Here we briefly report our CodeSearchNet (CodeXGLUE) results across different layers:
+
+| Layer | Avg. MRR |
+|--------------------------|-----------|
+| [Layer 4](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-4) | 73.2 |
+| [Layer 9](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-9) | 77.3 |
+| [Layer 18](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-18)* | 81.0 |
+| [Layer 27](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-27) | 80.3 |
+| [Layer 36](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned) | 79.6 |
+
+- (*) This layer's size and its corresponding projection head are the ones present in this model.
+
 ## Licence
 The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
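
To make the evaluation table actionable, here is a minimal sketch of loading one of the linked per-layer checkpoints and pooling a code embedding. It assumes the checkpoints load through `transformers`' `AutoModel`/`AutoTokenizer` with `trust_remote_code=True` (the repo carries the `custom_code` tag) and expose a standard `last_hidden_state`; the mean pooling is an illustrative choice, not necessarily the pooling the authors used.

```python
# Minimal sketch: embed a code snippet with one of the per-layer checkpoints
# linked in the evaluation table. Assumptions (not confirmed by this card):
# the checkpoint loads via AutoModel/AutoTokenizer with trust_remote_code=True,
# the forward pass returns a standard `last_hidden_state`, and mean pooling
# is used purely for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "modularStarEncoder/ModularStarEncoder-finetuned-18"  # best Avg. MRR above

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model.eval()

snippet = "def add(a, b):\n    return a + b"
inputs = tokenizer(snippet, return_tensors="pt", truncation=True)

with torch.no_grad():
    out = model(**inputs)

# Mean-pool token states into a single vector, masking out padding (illustrative).
mask = inputs["attention_mask"].unsqueeze(-1).float()
embedding = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # (1, hidden_size)
```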
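For readers unfamiliar with the metric: Avg. MRR is mean reciprocal rank, here presumably scaled to 0-100. Below is a minimal sketch of MRR for in-batch code search; the function name and the random inputs are hypothetical.

```python
# Mean reciprocal rank (MRR) over an (N, N) similarity matrix where row i's
# true match is column i: rank the true match among all candidates and
# average 1/rank across queries. The table above presumably reports MRR x 100.
import torch

def mean_reciprocal_rank(query_emb: torch.Tensor, code_emb: torch.Tensor) -> float:
    # query_emb, code_emb: (N, D); code_emb[i] is the positive for query_emb[i].
    scores = query_emb @ code_emb.T                   # (N, N) pairwise similarities
    true_scores = scores.diagonal().unsqueeze(1)      # (N, 1) score of each positive
    ranks = (scores >= true_scores).sum(dim=1)        # 1-based rank of each positive
    return (1.0 / ranks.float()).mean().item()

# Tiny usage example with random embeddings (hypothetical data).
q = torch.nn.functional.normalize(torch.randn(8, 16), dim=1)
print(mean_reciprocal_rank(q, q))  # identical embeddings -> MRR of 1.0
```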
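The context lines above list a CLIP loss together with a multi-layer loss. One plausible reading, sketched below purely as an assumption rather than the authors' exact recipe, is a symmetric contrastive loss computed at several intermediate exit layers (mirroring the checkpoints in the table) and averaged; the pooling, temperature, and equal weighting are illustrative.

```python
# Hedged sketch of a multi-layer CLIP-style objective: compute a symmetric
# InfoNCE loss on pooled hidden states at several exit layers and average them.
# The exit layers, pooling, temperature, and weighting here are assumptions;
# the card only states "CLIP loss" with "multi-layer loss: yes".
import torch
import torch.nn.functional as F

EXIT_LAYERS = [4, 9, 18, 27, 36]  # mirrors the checkpoints in the table

def clip_loss(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    # a, b: (N, D) paired embeddings; row i of a matches row i of b.
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.T / temperature                    # (N, N) similarity logits
    targets = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

def multi_layer_clip_loss(hidden_states_a, hidden_states_b):
    # hidden_states_*: tuples of (N, T, D) per-layer states, e.g. from
    # model(..., output_hidden_states=True).hidden_states
    total = 0.0
    for layer in EXIT_LAYERS:
        total = total + clip_loss(hidden_states_a[layer].mean(dim=1),
                                  hidden_states_b[layer].mean(dim=1))
    return total / len(EXIT_LAYERS)
```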