Atefeh Sohrabizadeh committed
Commit · e656c91
Parent(s): 6c0272b
updated code
README.md CHANGED
- **Pooling mode:** mean tokens

## Evaluation Results:

We evaluated the NV-EmbedCode model using the [CoIR benchmark](https://arxiv.org/html/2407.02883v1) and a curated set based on [SWE-bench](https://arxiv.org/abs/2310.06770). CoIR consists of 10 code datasets spanning four retrieval tasks: (1) Text-to-Code Retrieval, (2) Code-to-Code Retrieval, (3) Code-to-Text Retrieval, and (4) Hybrid Code Retrieval. The default evaluation metric for CoIR is the average NDCG@10 across all datasets. SWE-bench originally consists of real-world software engineering problems drawn from GitHub issues and their corresponding pull requests. We adapted it into a retrieval task in which the goal is to identify the files that need to be edited to resolve an issue; these files are identified from the pull request that resolved the issue. For SWE-bench Lite, we use Recall@1 to measure whether the top retrieved file is the correct one, as each instance typically involves editing just one file. For more detailed evaluation results on SWE-bench, please refer to [our paper](https://openreview.net/forum?id=k6p8UKRdH7).

<br>

| Retrieval Method | CoIR Main Score (NDCG@10) | SWE-bench Lite (Recall@1) |
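As an illustration of the SWE-bench Lite metric described above, here is a minimal, hypothetical sketch of a Recall@1 computation over retrieval instances that each have a single gold file to edit. The helper name, data layout, and toy file names are assumptions for illustration only; this is not the evaluation harness behind the reported numbers.

```python
# Illustrative only: Recall@1 = fraction of issues whose top-ranked retrieved
# file is the file actually edited by the resolving pull request.
def recall_at_1(ranked_files_per_issue, gold_file_per_issue):
    hits = sum(
        1 for ranked, gold in zip(ranked_files_per_issue, gold_file_per_issue)
        if ranked and ranked[0] == gold
    )
    return hits / len(gold_file_per_issue)

# Toy example: the top-ranked file is correct for 2 of the 3 issues -> ~0.67
print(recall_at_1(
    [["a.py", "b.py"], ["c.py", "a.py"], ["d.py"]],
    ["a.py", "x.py", "d.py"],
))
```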
Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer, util

# Task instructions for different retrieval scenarios
task_instructions = {
    "general": "Retrieve code or text based on user query",
    "originalbug": "Given a bug description, retrieve codes that need to be edited to resolve it.",
    "llmsummary": "Given a summary of bug description generated by an LLM, retrieve codes that need to be edited to resolve it."
}

# Example queries and corpus
queries = [
    "Function to calculate the sum of two numbers",
    "Recursive function to calculate the factorial of a number",
]

docs = [
    # (first corpus entry omitted in this excerpt)
    "def factorial(n):\n return 1 if n==0 else n*factorial(n-1)",
]

# Prepare the instruction prompt prefix for queries
query_prefix = f"Instruct: {task_instructions['general']}\nQuery: "

# Load model
model = SentenceTransformer('nvidia/NV-EmbedCode-7b-v1', trust_remote_code=True)

# Encode queries and documents
query_emb = model.encode(queries, prompt=query_prefix, normalize_embeddings=True)
doc_emb = model.encode(docs, normalize_embeddings=True)

# Compute similarity scores
scores = util.cos_sim(query_emb, doc_emb) * 100
print(scores.tolist())
# [[68.55826568603516, 24.0609130859375], [28.60508918762207, 76.94281005859375]]
```
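The snippet above uses the "general" instruction. For bug-localization style retrieval, the same prompt format can be pointed at the "originalbug" instruction instead. The following is a minimal, hypothetical variation that reuses `model`, `task_instructions`, and `util` from the snippet above; the issue text and candidate file contents are invented for illustration.

```python
# Hypothetical bug-localization query using the "originalbug" instruction.
bug_prefix = f"Instruct: {task_instructions['originalbug']}\nQuery: "

issue = ["factorial helper raises RecursionError for negative inputs"]
candidate_files = [
    "def factorial(n):\n return 1 if n==0 else n*factorial(n-1)",
    "def add(a, b):\n return a + b",
]

issue_emb = model.encode(issue, prompt=bug_prefix, normalize_embeddings=True)
file_emb = model.encode(candidate_files, normalize_embeddings=True)

# Pick the candidate file most similar to the issue description.
similarities = util.cos_sim(issue_emb, file_emb)
print(candidate_files[int(similarities.argmax())])
```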