Feature Extraction · Transformers · Safetensors · ModularStarEncoder · custom_code
andreagurioli1995 committed (verified) · Commit 1e39cf5 · 1 Parent(s): 695e644

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -13,7 +13,7 @@ base_model:
 <!-- Provide a quick summary of what the model is/does. -->
 
 ModularStarEncoder-finetuned is an encoder built on top of [ModularStarEncoder-1B Pre-trained](https://huggingface.co/andreagurioli1995/ModularStarEncoder) on [SynthCode2Code2NL](https://huggingface.co/datasets/andreagurioli1995/SynthCode2Code2NL-neardedup).
-ModularStarEncoder, fine-tuned, is an encoder for code-to-code and nl-to-code retrieval tasks, enabling the end user to select the model size that meets their memory and computational constraints.
+ModularStarEncoder, fine-tuned, is an encoder for code-to-code and text-to-code retrieval tasks, enabling the end user to select the model size that meets their memory and computational constraints.
 We built ModularStarEncoder on top of [StarCoder-2](https://huggingface.co/bigcode/starcoder2-15b), reducing its size from 15B to 1B parameters in bfloat16.
 
 The model is finetuned with [CLIP objective](https://github.com/mlfoundations/open_clip/blob/main/src/open_clip/loss.py).
@@ -83,7 +83,7 @@ The pre-training and fine-tuning were conducted on 512 NVIDIA Ampere (64GB) GPUs
 
 ### Evaluation
 
-Here we briefly show our codeSearchNet (codeXGLUE) results between different layers:
+Here we briefly show our codeSearchNet (codeXGLUE) results between different layers; for full results over text-to-code and code-to-code refer to the article:
 
 | Layer | Avg. MRR |
 |--------------------------|-----------|
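For context on the card being edited: the model is tagged for feature extraction with Transformers and ships custom code, so loading it goes through `trust_remote_code=True`. Below is a minimal sketch of pulling sentence-level embeddings for retrieval. The repo ID is the pre-trained checkpoint linked in the card; the masked mean pooling and the `last_hidden_state` attribute are illustrative assumptions, not details taken from this README.

```python
# Hedged sketch: embed a code snippet and a natural-language query and compare them.
# Assumptions: AutoModel/AutoTokenizer with trust_remote_code=True load the custom
# architecture, the output exposes last_hidden_state, and masked mean pooling is an
# illustrative pooling choice (the model card may specify a different one).
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "andreagurioli1995/ModularStarEncoder"  # pre-trained checkpoint linked in the card
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True, torch_dtype=torch.bfloat16)
model.eval()

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    hidden = out.last_hidden_state                        # (batch, seq, dim) -- assumed attribute
    mask = batch["attention_mask"].unsqueeze(-1).to(hidden.dtype)
    pooled = (hidden * mask).sum(1) / mask.sum(1)         # masked mean pooling
    return torch.nn.functional.normalize(pooled, dim=-1)

query = embed(["return the maximum value in a list"])
code = embed(["def max_value(xs):\n    return max(xs)"])
print((query @ code.T).item())                            # cosine similarity for retrieval
```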
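The card states the model is fine-tuned with the CLIP objective from open_clip's `loss.py`. As a reminder of what that objective computes, here is a hedged sketch of the standard symmetric contrastive (InfoNCE) loss over paired code/text embeddings; it restates the generic CLIP loss and is not the project's actual training code.

```python
# Hedged sketch of the symmetric CLIP / InfoNCE objective over a batch of
# paired (code, text) embeddings. Generic restatement, not the project's code.
import torch
import torch.nn.functional as F

def clip_loss(code_emb, text_emb, logit_scale):
    # code_emb, text_emb: (batch, dim), assumed L2-normalized; logit_scale: scalar tensor
    logits = logit_scale * code_emb @ text_emb.t()                # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)  # i-th code matches i-th text
    # average the code->text and text->code cross-entropy terms
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```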
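The evaluation table reports average MRR per layer on CodeSearchNet (CodeXGLUE). For readers unfamiliar with the metric, a short sketch of Mean Reciprocal Rank computed from a similarity matrix follows; the setup (correct candidate for query i sitting at index i) is illustrative, not the benchmark's exact protocol.

```python
# Hedged sketch: Mean Reciprocal Rank (MRR) from a query-by-candidate similarity
# matrix where the correct candidate for query i sits at index i. Illustrative only.
import torch

def mean_reciprocal_rank(sim):
    # sim: (num_queries, num_candidates) similarity scores
    target = torch.arange(sim.size(0))
    correct = sim[target, target].unsqueeze(1)       # score of the gold candidate per query
    ranks = 1 + (sim > correct).sum(dim=1)           # rank = 1 + strictly higher-scored candidates
    return (1.0 / ranks.float()).mean().item()
```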