Tags: Feature Extraction · Transformers · Safetensors · ModularStarEncoder · custom_code
andreagurioli1995 committed 6f8dcab (verified) · 1 Parent(s): 7f315b5

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -13,7 +13,7 @@ base_model:
 <!-- Provide a quick summary of what the model is/does. -->
 
 ModularStarEncoder-finetuned-18 is an encoder built on top of [ModularStarEncoder-1B Pre-trained](https://huggingface.co/andreagurioli1995/ModularStarEncoder) on [SynthCode2Code2NL](https://huggingface.co/datasets/andreagurioli1995/SynthCode2Code2NL-neardedup).
-ModularStarEncoder fine-tuned-18 is an encoder for code-to-code and nl-to-code retrieval tasks, enabling the end user to select the model size that meets their memory and computational constraints.
+ModularStarEncoder fine-tuned-18 is an encoder for code-to-code and text-to-code retrieval tasks, enabling the end user to select the model size that meets their memory and computational constraints.
 We built ModularStarEncoder on top of [StarCoder-2](https://huggingface.co/bigcode/starcoder2-15b), reducing its size from 15B to 1B parameters in bfloat16.
 
 This version contains only the first 18 layers of ModularStarEncoder-finetuned, with the related projection head.
@@ -87,7 +87,7 @@ The pre-training and fine-tuning were conducted on 512 NVIDIA Ampere (64GB) GPUs
 
 ### Evaluation
 
-Here we briefly show our codeSearchNet (codeXGLUE) results between different layers:
+Here we briefly show our codeSearchNet (codeXGLUE) results between different layers; for full results over text-to-code and code-to-code refer to the article:
 
 | Layer | Avg. MRR |
 |--------------------------|-----------|
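
For context, the README excerpt above describes an encoder intended for code-to-code and text-to-code retrieval. Below is a minimal, untested sketch of how such a checkpoint might be queried with the transformers library. The repo id, the `trust_remote_code=True` loading path (suggested by the custom_code tag), and the mean-pooling step are assumptions for illustration and may differ from the projection-head output the model card actually recommends.

```python
# Minimal sketch (not from the model card): embed a code snippet and a natural-language
# query with the 18-layer checkpoint, then compare them with cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "andreagurioli1995/ModularStarEncoder-finetuned-18"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
model.eval()

snippets = [
    "def add(a, b):\n    return a + b",   # code candidate
    "function that sums two numbers",     # natural-language query
]

with torch.no_grad():
    batch = tokenizer(snippets, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch)
    # Assumption: mean-pool the last hidden state over non-padding tokens;
    # the released projection head may expose a dedicated embedding output instead.
    mask = batch["attention_mask"].unsqueeze(-1)
    embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
    embeddings = torch.nn.functional.normalize(embeddings, dim=-1)

# Cosine similarity between the code candidate and the text query
print((embeddings[0] @ embeddings[1]).item())
```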