Tags: Feature Extraction · Transformers · Safetensors · ModularStarEncoder · custom_code
andreagurioli1995 committed (verified) · Commit 4f0c1c9 · 1 Parent(s): dcb6ee2

Update README.md

Files changed (1): README.md (+2 −3)
README.md CHANGED
@@ -13,7 +13,7 @@ base_model:
 <!-- Provide a quick summary of what the model is/does. -->
 
 ModularStarEncoder-finetuned-27 is an encoder built on top of [ModularStarEncoder-1B Pre-trained](https://huggingface.co/andreagurioli1995/ModularStarEncoder) on [SynthCode2Code2NL](https://huggingface.co/datasets/andreagurioli1995/SynthCode2Code2NL-neardedup).
-ModularStarEncoder fine-tuned-27 is an encoder for code-to-code and nl-to-code retrieval tasks, enabling the end user to select the model size that meets their memory and computational constraints.
+ModularStarEncoder fine-tuned-27 is an encoder for code-to-code and text-to-code retrieval tasks, enabling the end user to select the model size that meets their memory and computational constraints.
 We built ModularStarEncoder on top of [StarCoder-2](https://huggingface.co/bigcode/starcoder2-15b), reducing its size from 15B to 1B parameters in bfloat16.
 
 This version contains only the first 27 layers of ModularStarEncoder-finetuned, with the related projection head.
@@ -85,8 +85,7 @@ The pre-training and fine-tuning were conducted on 512 NVIDIA Ampere (64GB) GPUs
 
 ### Evaluation
 
-Here we briefly show our codeSearchNet (codeXGLUE) results between different layers:
-
+Here we briefly show our codeSearchNet (codeXGLUE) results between different layers; for full results over text-to-code and code-to-code refer to the article:
 | Layer | Avg. MRR |
 |--------------------------|-----------|
 | [Layer 4](https://huggingface.co/modularStarEncoder/ModularStarEncoder-finetuned-4) | 73.2 |
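The updated README reports Avg. MRR (Mean Reciprocal Rank) per layer for retrieval. As a minimal sketch of how such a score is computed, assuming cosine-similarity ranking over embedding vectors (the function names and toy data below are illustrative, not the model's actual evaluation code):

```python
import numpy as np

def rank_candidates(query_emb, candidate_embs):
    """Rank candidate embeddings by cosine similarity to the query, best first."""
    q = query_emb / np.linalg.norm(query_emb)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity of each candidate
    return np.argsort(-scores)          # indices sorted from most to least similar

def mean_reciprocal_rank(rankings, gold_indices):
    """Avg. MRR: mean of 1 / (1-based rank of the gold candidate) over queries."""
    reciprocal_ranks = []
    for ranking, gold in zip(rankings, gold_indices):
        rank = int(np.where(ranking == gold)[0][0]) + 1
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

# Toy example: one query, two candidates; candidate 1 points the same way as the query.
ranking = rank_candidates(np.array([1.0, 0.0]),
                          np.array([[0.0, 1.0], [1.0, 0.1]]))
score = mean_reciprocal_rank([ranking], [1])   # gold candidate ranked first → MRR 1.0
```

In a retrieval evaluation of this kind, each natural-language query (text-to-code) or code snippet (code-to-code) is embedded once, ranked against the candidate pool, and the reciprocal ranks of the true matches are averaged into the single Avg. MRR figure shown per layer.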