Commit 0fa8d8d (verified) · committed by andreagurioli1995 · 1 parent: aa7347f

Update README.md

Files changed (1): README.md (+15 -0)
@@ -78,5 +78,20 @@ The pre-training and fine-tuning were conducted on 512 NVIDIA Ampere (64GB) GPUs
 |Loss function |CLIP loss |
 |Multi-layer loss | yes |
 
+
+### Evaluation
+
+Here we briefly report our CodeSearchNet (CodeXGLUE) results across different layers:
+
+| Layer | Avg. MRR |
+|--------------------------|-----------|
+| [Layer 4](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-4)* | 73.2 |
+| [Layer 9](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-9)* | 77.3 |
+| [Layer 18](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-18)* | 81.0 |
+| [Layer 27](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-27)* | 80.3 |
+| [Layer 36](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned)* | 79.6 |
+
+\* Size and corresponding projection head present in the linked model.
+
 ## Licence
 The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
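
For context on the table above: Avg. MRR is the mean reciprocal rank, i.e. the average over queries of 1/rank of the true code snippet among all candidates. The sketch below is a minimal, hypothetical illustration, not the authors' evaluation harness: it assumes the linked checkpoints load through `AutoModel.from_pretrained(..., trust_remote_code=True)` and expose `last_hidden_state`; the mean pooling, the `embed` helper, and the toy query/code pairs are illustrative assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical example: load one of the layer-truncated checkpoints listed
# above. trust_remote_code=True is needed because the repo ships custom code.
model_id = "modularStarEncoder/ModularStarEncoder-finetuned-9"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model.eval()

def embed(texts):
    # Assumption: mean-pool the last hidden state as the text embedding;
    # the model card's own usage snippet may pool differently.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)        # (B, H)

def mean_reciprocal_rank(query_emb, code_emb):
    # Queries and code snippets are aligned: code_emb[i] is the true match
    # for query_emb[i]. Rank each true match among all candidates by cosine
    # similarity (1 = best; ties counted pessimistically), then average the
    # reciprocal ranks.
    q = torch.nn.functional.normalize(query_emb, dim=-1)
    c = torch.nn.functional.normalize(code_emb, dim=-1)
    sims = q @ c.T                                     # (N, N)
    ranks = (sims >= sims.diag().unsqueeze(1)).sum(1)
    return (1.0 / ranks.float()).mean().item()

queries = ["reverse a linked list", "read a file line by line"]
codes = ["def reverse(head): ...", "with open(p) as f: ..."]
print(mean_reciprocal_rank(embed(queries), embed(codes)))
```

On toy aligned pairs like these, a perfect retriever would score 1.0; the scores in the table were obtained on CodeSearchNet (CodeXGLUE), not on data like this.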