The pre-training and fine-tuning were conducted on 512 NVIDIA Ampere (64GB) GPUs.

|Loss function |CLIP loss |
|Multi-layer loss | yes |
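
For readers unfamiliar with the objective named above, the sketch below illustrates a CLIP-style symmetric contrastive loss over a batch of matched pairs (e.g. code and its natural-language description). It is illustrative only, not the exact training code; with the multi-layer setting, the same objective would additionally be applied at several intermediate layers.

```python
# Minimal sketch of a CLIP-style symmetric contrastive loss.
# Illustrative only; not the exact objective used to train this model.
import torch
import torch.nn.functional as F

def clip_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
              temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss for a batch of matched (a, b) pairs."""
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = a @ b.T / temperature                    # cosine-similarity logits
    targets = torch.arange(len(a), device=a.device)   # matched pairs lie on the diagonal
    # Cross-entropy over rows (a -> b) and columns (b -> a), averaged.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2
```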

### Evaluation

Here we briefly report our CodeSearchNet (CodeXGLUE) results across the different layers:

| Layer | Avg. MRR |
|--------------------------|-----------|
| [Layer 4](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-4) | 73.2 |
| [Layer 9](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-9) | 77.3 |
| [Layer 18](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-18)* | 81.0 |
| [Layer 27](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-27) | 80.3 |
| [Layer 36](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned) | 79.6 |

- (* size and corresponding projection head present in this model)
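
Each row above links to a standalone checkpoint, so a single layer variant can be used on its own. A minimal loading sketch, assuming the checkpoints work with the standard `transformers` Auto classes (`trust_remote_code=True` is an assumption here, as custom architectures often require it; the exact output attributes may differ):

```python
# Minimal sketch: load one layer-specific checkpoint and embed a code snippet.
# Assumes the repo is compatible with the standard transformers Auto classes;
# trust_remote_code=True is an assumption for a custom architecture.
from transformers import AutoModel, AutoTokenizer

model_id = "modularStarEncoder/ModularStarEncoder-finetuned-18"  # starred row above
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("def add(a, b):\n    return a + b", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size), if exposed
```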

## Licence

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).