Feature Extraction · Transformers · Safetensors · ModularStarEncoder · custom_code
andreagurioli1995 committed (verified) · Commit 2071bd2 · 1 Parent(s): cc7a7a1

Update README.md

Files changed (1): README.md (+6 -1)
README.md CHANGED
@@ -15,9 +15,14 @@ base_model:
  ModularStarEncoder-finetuned-18 is an encoder built on top of [ModularStarEncoder-1B Pre-trained](https://huggingface.co/andreagurioli1995/ModularStarEncoder) on [SynthCode2Code2NL](https://huggingface.co/datasets/andreagurioli1995/SynthCode2Code2NL-neardedup).
  ModularStarEncoder fine-tuned-18 is an encoder for various retrieval tasks, enabling the end user to select the model size that meets their memory and computational constraints.
  We built ModularStarEncoder on top of [StarCoder-2](https://huggingface.co/bigcode/starcoder2-15b), reducing its size from 15B to 1B parameters in bfloat16.
+
  This version contains only the first 18 layers of ModularStarEncoder-finetuned, with the related projection head.
  We have released this version to enhance the model's usability by allowing users to download only the desired size.
- The model is finetuned with [CLIP objective](https://github.com/mlfoundations/open_clip/blob/main/src/open_clip/loss.py)
+
+ The model is finetuned with [CLIP objective](https://github.com/mlfoundations/open_clip/blob/main/src/open_clip/loss.py).
+
+ ModularStarEncoder fine-tuned works with instruction prompts; to get the most out of the model, embed the task in the input. The How to Use section below provides more details.
+

  - **Paper:** [Link](arxiv.paper)
  - **Languages:** English, Go, Ruby, Python, Java, C++, PHP, C, JavaScript
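
The CLIP objective referenced in the added line is a symmetric contrastive loss over paired inputs. As orientation only, below is a minimal PyTorch sketch of that kind of loss; it is not the open_clip implementation (which also handles a learnable logit scale and cross-device gathering), and the function name `clip_style_loss` is illustrative.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(code_emb: torch.Tensor, text_emb: torch.Tensor,
                    logit_scale: float = 100.0) -> torch.Tensor:
    """Simplified symmetric contrastive (CLIP-style) loss.

    code_emb, text_emb: (batch, dim) embeddings of paired items; the pair
    sharing a row index is the positive, all other rows act as negatives.
    """
    code_emb = F.normalize(code_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise cosine similarities, scaled by a (here fixed) temperature.
    logits_per_code = logit_scale * code_emb @ text_emb.t()
    logits_per_text = logits_per_code.t()

    # The i-th code snippet matches the i-th text.
    targets = torch.arange(code_emb.size(0), device=code_emb.device)

    # Symmetric cross-entropy over both similarity directions.
    return (F.cross_entropy(logits_per_code, targets) +
            F.cross_entropy(logits_per_text, targets)) / 2
```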
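The new text also notes that the fine-tuned model expects instruction prompts. The following is a minimal usage sketch with Transformers, not an excerpt from the README: the repository id, the prompt wording, and the mean-pooling step are assumptions; the model card's How to Use section gives the exact instruction template and the recommended projection-head embedding.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative repo id: the exact id of the 18-layer fine-tuned checkpoint
# is documented on the model page.
repo_id = "andreagurioli1995/ModularStarEncoder"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True,
                                  torch_dtype=torch.bfloat16)
model.eval()

# Embed the task in the input, as the card recommends; this wording is only
# a placeholder for the instruction template shown in "How to Use".
text = "Retrieve semantically similar code.\ndef reverse_list(xs):\n    return xs[::-1]"

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Assumption: the custom model returns a standard last_hidden_state; the
# released projection head is the intended retrieval output, so prefer it
# if the custom code exposes it.
mask = inputs["attention_mask"].unsqueeze(-1).to(outputs.last_hidden_state.dtype)
embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)
```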