Update README.md

README.md

Here we briefly show our CodeSearchNet (CodeXGLUE) results between different layers:
(* size and corresponding projection head present in this model)

## Licence

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
## Citation

```
@article{gurioli2025modeltrainallhierarchical,
  title={One Model to Train them All: Hierarchical Self-Distillation for Enhanced Early Layer Embeddings},
  author={Andrea Gurioli and Federico Pennino and João Monteiro and Maurizio Gabbrielli},
  year={2025},
  eprint={2503.03008},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.03008},
}
```