Atefeh Sohrabizadeh committed on
Commit b0c0d8f · 1 Parent(s): 4b8390f

updated readme

Files changed (1): README.md +62 -52

README.md CHANGED
@@ -1,77 +1,78 @@
- ---
- language: []
- library_name: sentence-transformers
- pipeline_tag: sentence-similarity
- tags:
- - sentence-transformers
- - sentence-similarity
- - feature-extraction
- widget: []
- ---

- # SentenceTransformer

- This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 4096-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

  ## Model Details

  ### Model Description
- - **Model Type:** Sentence Transformer
- <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- - **Maximum Sequence Length:** 4096 tokens
- - **Output Dimensionality:** 4096 tokens
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->

- ### Model Sources

- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

- ### Full Model Architecture
-
- ```
- SentenceTransformer(
-   (0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: MistralBiDirectionalModel
-   (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- )
- ```

  ## Usage

  ### Direct Usage (Sentence Transformers)

- First install the Sentence Transformers library:

  ```bash
- pip install -U sentence-transformers
  ```

  Then you can load this model and run inference.
  ```python
- from sentence_transformers import SentenceTransformer
-
- # Download from the 🤗 Hub
- model = SentenceTransformer("sentence_transformers_model_id")
- # Run inference
- sentences = [
-     'The weather is lovely today.',
-     "It's so sunny outside!",
-     'He drove to the stadium.',
  ]
- embeddings = model.encode(sentences)
- print(embeddings.shape)
- # [3, 4096]
-
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
- print(similarities.shape)
- # [3, 3]
  ```

  <!--
  ### Direct Usage (Transformers)
@@ -120,8 +121,17 @@ You can finetune this model on your own dataset.
  - Tokenizers: 0.15.2

  ## Citation

- ### BibTeX

  <!--
  ## Glossary

+ ## Introduction

+ The NV-EmbedCode model is a 7B Mistral-based embedding model optimized for code retrieval, supporting text, code, and hybrid queries.

+ Code retrieval is a critical task in many domains, including coding assistance, code explanation, summarization, and documentation search. NV-EmbedCode transforms input code or textual data into dense vector representations, known as embeddings, enabling effective retrieval and search.
+
+ For technical details, refer to our paper: [Nemotron-CORTEXA: Enhancing LLM Agents for Software Engineering Tasks via Improved Localization and Solution Diversity](https://openreview.net/forum?id=k6p8UKRdH7).
+
+ ## Intended Use
+ The NV-EmbedCode model is most suitable for users who want to build a code retrieval system over a large text or code corpus, leveraging the latest dense retrieval technologies.
+
+ ### License/Terms of Use
+ The use of this model is governed by the [NVIDIA AI Foundation Models Community License Agreement](https://developer.nvidia.com/downloads/nv-ai-foundation-models-license) and the [Apache License 2.0](https://choosealicense.com/licenses/apache-2.0/).
 
  ## Model Details

  ### Model Description
+ - **Base model:** [NVIDIA Retrieval QA Mistral 7B Embedding model](https://build.nvidia.com/nvidia/nv-embedqa-mistral-7b-v2/modelcard)
+ - **Embedding dimension:** 4096
+ - **Pooling mode:** mean tokens (see the sketch below)
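
To make the "mean tokens" pooling above concrete, here is a minimal editorial sketch (the `mean_pool` helper is illustrative, not part of the model's API): the embedding is the average of the final-layer token embeddings over non-padding positions.

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over real (non-padding) tokens.

    token_embeddings: (batch, seq_len, 4096) final-layer hidden states
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)         # per-example token counts
    return summed / counts                           # (batch, 4096)
```

Sentence Transformers applies this pooling internally, so the Usage snippet below never calls it directly.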

+ ## Evaluation Results
+ We evaluated the NV-EmbedCode model using the [CoIR benchmark](https://arxiv.org/html/2407.02883v1) and a curated set based on [SWE-bench](https://arxiv.org/abs/2310.06770). CoIR consists of 10 code datasets spanning four retrieval tasks: (1) Text-to-Code Retrieval, (2) Code-to-Code Retrieval, (3) Code-to-Text Retrieval, and (4) Hybrid Code Retrieval. The default evaluation metric for CoIR is NDCG@10 averaged across all datasets. SWE-bench originally consists of real-world software engineering problems drawn from GitHub issues and their corresponding pull requests; we adapted it into a retrieval task where the goal is to identify the files that must be edited to resolve an issue, with the gold files taken from the pull request that resolved it. For SWE-bench Lite, we report Recall@1, which measures whether the top retrieved file is the correct one, since each instance typically involves editing just one file (see the sketch after the table). For more detailed evaluation results, please refer to [our paper](https://openreview.net/forum?id=k6p8UKRdH7).
+
+ | Retrieval Method | CoIR Main Score (NDCG@10) | SWE-bench Lite (Recall@1) |
+ |:-------------------------|:------:|:------:|
+ | NV-EmbedCode             | 72.45% | 70.33% |
+ | NV-EmbedQA-Mistral-7B-v2 | 60.08% | 61.33% |
+ | SFR-Embedding-Code-2B_R  | 67.41% | 47.00% |
+ | SFR-Mistral-2_R          | 61.85% | 60.33% |
+ | BM25                     | -      | 42.33% |
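
As a concrete reading of the Recall@1 column (an editorial sketch, not the paper's evaluation harness): for each SWE-bench Lite instance, count a hit when the single top-ranked file is among the files edited by the resolving pull request.

```python
def recall_at_1(ranked_files: list[list[str]], gold_files: list[set[str]]) -> float:
    """Fraction of instances whose top-ranked file was actually edited.

    ranked_files: per-instance file paths sorted by retrieval score, best first
    gold_files:   per-instance set of files edited by the resolving pull request
    """
    hits = sum(
        1 for ranked, gold in zip(ranked_files, gold_files)
        if ranked and ranked[0] in gold
    )
    return hits / len(ranked_files)

# Toy check: the top-ranked file is correct for one of two instances -> 0.5
print(recall_at_1([["a.py", "b.py"], ["x.py"]], [{"a.py"}, {"y.py"}]))
```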

  ## Usage

  ### Direct Usage (Sentence Transformers)

+ First install the following libraries:

  ```bash
+ pip install transformers==4.37.2 sentence_transformers
  ```

  Then you can load this model and run inference.
  ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ # Instruction prompts: "general" for ordinary retrieval, the other two for bug localization
+ task_name_to_instruct = {
+     "general": "Retrieve code or text based on user query",
+     "originalbug": "Given a bug description, retrieve codes that need to be edited to resolve it.",
+     "llmsummary": "Given a summary of bug description generated by an LLM, retrieve codes that need to be edited to resolve it."
+ }
+ queries = [
+     'Function to calculate the sum of two numbers',
+     'Recursive function to calculate the factorial of a number',
+ ]
+
+ docs = [
+     "def add(a, b):\n    return a + b",
+     "def factorial(n):\n    return 1 if n==0 else n*factorial(n-1)",
  ]
+
+ # Queries are embedded with an instruction prefix; documents are embedded as-is
+ query_prefix = "Instruct: " + task_name_to_instruct["general"] + "\nQuery: "
+ model = SentenceTransformer('nvidia/NV-EmbedCode-7b-v1', trust_remote_code=True)
+
+ query_embeddings = model.encode(queries, prompt=query_prefix, normalize_embeddings=True)
+ doc_embeddings = model.encode(docs, normalize_embeddings=True)
+
+ # Cosine similarity between each query and each document, scaled to percentages
+ scores = util.cos_sim(query_embeddings, doc_embeddings) * 100
+ print(scores.tolist())
+ # [[68.55826568603516, 24.0609130859375], [28.60508918762207, 76.94281005859375]]
  ```
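
The snippet above uses the "general" instruction; the other two entries in `task_name_to_instruct` target bug localization. Below is a hedged continuation (it reuses `model`, `util`, and `task_name_to_instruct` from the snippet, assumes the same "Instruct: ...\nQuery: " prompt format applies to the bug-localization instructions, and the issue text and candidate files are invented for illustration):

```python
# Sketch: rank candidate files against a bug report using the "originalbug" prompt
bug_prefix = "Instruct: " + task_name_to_instruct["originalbug"] + "\nQuery: "
issue = "Calling add() with floats returns a rounded integer result"

candidate_files = [
    "def add(a, b):\n    return int(a + b)",
    "def factorial(n):\n    return 1 if n==0 else n*factorial(n-1)",
]

# Embed the issue with the instruction prefix and the files without it
issue_emb = model.encode([issue], prompt=bug_prefix, normalize_embeddings=True)
file_embs = model.encode(candidate_files, normalize_embeddings=True)

# The highest-scoring file is the localization candidate
scores = util.cos_sim(issue_emb, file_embs)[0]
best = int(scores.argmax())
print(f"Most relevant file: index {best} (score {scores[best].item():.3f})")
```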

+
  <!--
  ### Direct Usage (Transformers)
 
  - Tokenizers: 0.15.2

  ## Citation
+ If you find this model useful in your research, please consider citing:
+
+ ```bibtex
+ @inproceedings{nemotroncortexa,
+     title={Nemotron-{CORTEXA}: Enhancing {LLM} Agents for Software Engineering Tasks via Improved Localization and Solution Diversity},
+     author={Atefeh Sohrabizadeh and Jialin Song and Mingjie Liu and Rajarshi Roy and Chankyu Lee and Jonathan Raiman and Bryan Catanzaro},
+     booktitle={Forty-second International Conference on Machine Learning},
+     year={2025},
+     url={https://openreview.net/forum?id=k6p8UKRdH7}
+ }
+ ```

  <!--
  ## Glossary