smajumdar94 committed · Commit 855a988 · verified · 1 Parent(s): cf1950e

Update README.md

Files changed (1):
  1. README.md +163 -53
README.md CHANGED
@@ -13,89 +13,120 @@ tags:
  pipeline_tag: text-generation
  ---

- # OpenCodeReasoning-CPP-Nemotron-32B Overview

- ## Description

- OpenCodeReasoning-CPP-Nemotron-32B is a large language model (LLM) which is a derivative of [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) (AKA the *reference model*).
- It is a reasoning model that is post-trained for reasoning during code generation. The model supports a context length of 32K tokens.

- This model is ready for commercial use.

- ### License/Terms of Use
- GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Internal Scientific Research and Development Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-internal-scientific-research-and-development-model-license/).
 
- ### Deployment Geography:
- Global<br>

- ### Use Case: <br>
- This model is intended for developers and researchers building LLMs. <br>

- ### Release Date: <br>
- Huggingface [04/25/2025] via https://huggingface.co/nvidia/OpenCodeReasoning-CPP-Nemotron-32B/ <br>

- ## References
- - [\[2504.01943\] OpenCodeReasoning: Advancing Data Distillation for Competitive Coding](https://arxiv.org/abs/2504.01943)

- ## Model Architecture
- - Architecture Type: Dense decoder-only Transformer model
- - Network Architecture: Qwen2.5-32B-Instruct

- ## Input
- - **Input Type(s):** Text <br>
- - **Input Format(s):** String <br>
- - **Input Parameters:** One-Dimensional (1D) <br>
- - **Other Properties Related to Input:** Context length up to 32,768 tokens <br>
 
- ## Output
- - **Output Type(s):** Text <br>
- - **Output Format:** String <br>
- - **Output Parameters:** One-Dimensional (1D) <br>
- - **Other Properties Related to Output:** Context length up to 32,768 tokens <br>

- Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

- ## Software Integration
- * Runtime Engine: Transformers, vLLM <br>
- * Recommended Hardware Microarchitecture Compatibility: <br>
- - NVIDIA Ampere
- - NVIDIA Hopper
- * Preferred/Supported Operating System(s): Linux <br>

- ## Model Version(s)
- 1.0 (4/25/2025) <br>

- ## Training Dataset
- The training corpus for OpenCodeReasoning-CPP-Nemotron-32B is the [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) dataset, which is composed of competitive programming questions and DeepSeek-R1-generated responses in C++.
- * Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
- * Data Labeling Method: Hybrid: Automated, Human, Synthetic <br>

- ## Evaluation Dataset
- We used the [IOI benchmark](https://huggingface.co/datasets/open-r1/ioi) to evaluate OpenCodeReasoning-CPP-Nemotron-32B. <br>
- * Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
- * Data Labeling Method: Hybrid: Automated, Human, Synthetic <br>

- ## Inference
- - **Engine:** vLLM <br>
- - **Test Hardware:** NVIDIA H100-80GB <br>

- ## Ethical Considerations:
- NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

- For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](./EXPLAINABILITY.md), [Bias](./BIAS.md), [Safety & Security](./SAFETY_and_SECURITY.md), and [Privacy](./PRIVACY.md) Subcards.

- Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

  ## Citation
@@ -110,4 +141,83 @@ If you find the data useful, please cite:
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2504.01943},
- }

  pipeline_tag: text-generation
  ---

+ # OpenCodeReasoning-Nemotron-32B-IOI Overview
+
+ ## Description: <br>
+ OpenCodeReasoning-Nemotron-32B-IOI is a large language model (LLM) which is a derivative of [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) (AKA the reference model). It is a reasoning model that is post-trained for reasoning during code generation. The model supports a context length of 32K tokens. <br>
+
+ This model is ready for commercial/non-commercial use. <br>
+
+ ![Evaluation Results](./results.png)
+
+ ## Results from [OpenCodeReasoning](https://arxiv.org/abs/2504.01943)
+
+ The results below are the average of **64 evaluations** on each benchmark.
+
+ | Model                     | Dataset Size (Python) | Dataset Size (C++) | LiveCodeBench (pass@1) | CodeContests (pass@1) | IOI (Total Score) |
+ |---------------------------|-----------------------|--------------------|------------------------|-----------------------|-------------------|
+ | OlympicCoder-7B           | 0                     | 100K               | 40.9                   | 10.6                  | 127               |
+ | OlympicCoder-32B          | 0                     | 100K               | 57.4                   | 18.0                  | 153.5             |
+ | QwQ-32B                   | -                     | -                  | 61.3                   | 20.2                  | 175.5             |
+ | **OpenCodeReasoning-IOI** |                       |                    |                        |                       |                   |
+ | **OCR-Qwen-32B-Instruct** | **736K**              | **356K**           | **61.5**               | **25.5**              | **175.5**         |
+
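+ For reference, pass@1 here is the per-problem passing fraction averaged over the 64 samples and then over problems. A minimal sketch of that computation (the `outcomes` layout below is our illustration, not a release artifact):
+
+ ```python
+ # Hypothetical sketch: average pass@1 over 64 generations per problem.
+ # outcomes[pid] holds 64 booleans: did sample i pass all tests for problem pid?
+ outcomes = {
+     "p1": [True] * 26 + [False] * 38,  # made-up results, for illustration only
+     "p2": [True] * 40 + [False] * 24,
+ }
+
+ def pass_at_1(samples):
+     # For k=1, the unbiased pass@k estimator reduces to the passing fraction.
+     return sum(samples) / len(samples)
+
+ score = 100 * sum(pass_at_1(s) for s in outcomes.values()) / len(outcomes)
+ print(f"pass@1 = {score:.1f}")
+ ```
+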
+ ## Reproducing our results
 
+ * [Models](https://huggingface.co/collections/nvidia/opencodereasoning-2-68168f37cd7c6beb1e3f92e7)
+ * [Dataset](https://huggingface.co/datasets/nvidia/OpenCodeReasoning)
+ * [Paper](https://arxiv.org/abs/2504.01943)
+
+ ## How to use the models?
+
+ To run inference on coding problems from the IOI benchmark:
+
+ ````python
+ import transformers
+ import torch
+
+ model_id = "nvidia/OpenCodeReasoning-Nemotron-32B-IOI"
+
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model_id,
+     model_kwargs={"torch_dtype": torch.bfloat16},
+     device_map="auto",
+ )
+
+ prompt = """You are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below.
+
+ Please use c++ programming language only.
+
+ You must use ```cpp for just the final solution code block with the following format:
+ ```cpp
+ // Your code here
+ ```
+
+ {user}
+ """
+
+ messages = [
+     {
+         "role": "user",
+         "content": prompt.format(user="Write a program to calculate the sum of the first $N$ fibonacci numbers"),
+     },
+ ]
+
+ outputs = pipeline(
+     messages,
+     max_new_tokens=32768,
+ )
+ print(outputs[0]["generated_text"][-1]["content"])
+ ````
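+
+ Since the prompt pins the final answer to a single ```cpp block, the solution can be extracted from the generation mechanically. A minimal sketch, assuming the model followed the requested format (the helper below is ours, not part of the release):
+
+ ````python
+ import re
+
+ def extract_final_cpp(generated_text):
+     """Return the contents of the last ```cpp ... ``` block, or None."""
+     blocks = re.findall(r"```cpp\s*\n(.*?)```", generated_text, flags=re.DOTALL)
+     return blocks[-1].strip() if blocks else None
+ ````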
+
+ To run inference on coding problems for Python programs:
+
+ ````python
+ import transformers
+ import torch
+
+ model_id = "nvidia/OpenCodeReasoning-Nemotron-32B"
+
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model_id,
+     model_kwargs={"torch_dtype": torch.bfloat16},
+     device_map="auto",
+ )
+
+ prompt = """You are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below.
+
+ Please use python programming language only.
+
+ You must use ```python for just the final solution code block with the following format:
+ ```python
+ # Your code here
+ ```
+
+ {user}
+ """
+
+ messages = [
+     {
+         "role": "user",
+         "content": prompt.format(user="Write a program to calculate the sum of the first $N$ fibonacci numbers"),
+     },
+ ]
+
+ outputs = pipeline(
+     messages,
+     max_new_tokens=32768,
+ )
+ print(outputs[0]["generated_text"][-1]["content"])
+ ````
  ## Citation
 
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2504.01943},
+ }
+ ```
+
+ ## Additional Information
+
+ ## Model Architecture: <br>
+ Architecture Type: Dense decoder-only Transformer model <br>
+ Network Architecture: Qwen2.5-32B-Instruct <br>
+
+ **This model was developed based on Qwen2.5-32B-Instruct and has 32B model parameters. <br>**
+
+ ## Input: <br>
+ **Input Type(s):** Text <br>
+ **Input Format(s):** String <br>
+ **Input Parameters:** One-Dimensional (1D) <br>
+ **Other Properties Related to Input:** Context length up to 32,768 tokens <br>
+
+ ## Output: <br>
+ **Output Type(s):** Text <br>
+ **Output Format:** String <br>
+ **Output Parameters:** One-Dimensional (1D) <br>
+ **Other Properties Related to Output:** Context length up to 32,768 tokens <br>
+
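+ Note that for decoder-only models the prompt and the generated reasoning/solution typically share the 32,768-token window, so it can help to budget tokens before generating. A brief sketch using the tokenizer (the budget split is illustrative only):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("nvidia/OpenCodeReasoning-Nemotron-32B-IOI")
+
+ problem = "Write a program to calculate the sum of the first $N$ fibonacci numbers"
+ prompt_tokens = len(tokenizer(problem)["input_ids"])
+ # Whatever the prompt does not use remains available for reasoning + solution.
+ remaining_budget = 32768 - prompt_tokens
+ print(prompt_tokens, remaining_budget)
+ ```
+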
+ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
+
+ ## Software Integration: <br>
+ * Runtime Engine: NeMo 2.3.0 <br>
+ * Recommended Hardware Microarchitecture Compatibility: <br>
+   * NVIDIA Ampere <br>
+   * NVIDIA Hopper <br>
+ * Preferred/Supported Operating System(s): Linux <br>
+
+ ## Model Version(s):
+ 1.0 (4/25/2025) <br>
+ OpenCodeReasoning-Nemotron-7B <br>
+ OpenCodeReasoning-Nemotron-14B <br>
+ OpenCodeReasoning-Nemotron-32B <br>
+ OpenCodeReasoning-Nemotron-32B-IOI <br>
+
+ # Training and Evaluation Datasets: <br>
+
+ ## Training Dataset:
+
+ The training corpus for OpenCodeReasoning-Nemotron-32B is the [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) dataset, which is composed of competitive programming questions and DeepSeek-R1-generated responses.
+
+ Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
+ Labeling Method: Hybrid: Automated, Human, Synthetic <br>
+ Properties: 736k samples from [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning)
+
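+ For inspection, the corpus can be loaded with the `datasets` library. A minimal sketch (the `split_0` config name is an assumption taken from the dataset card and should be verified there):
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed config name; check https://huggingface.co/datasets/nvidia/OpenCodeReasoning for the full list.
+ ocr = load_dataset("nvidia/OpenCodeReasoning", "split_0")
+ print(ocr)
+ ```
+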
+ ## Evaluation Dataset:
+ We used the benchmarks listed in the results table above to evaluate OpenCodeReasoning-Nemotron-32B. <br>
+ Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
+ Labeling Method: Hybrid: Automated, Human, Synthetic <br>
+
+ ### License/Terms of Use: <br>
+ GOVERNING TERMS: Use of this model is governed by [Apache 2.0](https://huggingface.co/nvidia/OpenCode-Nemotron-2-7B/blob/main/LICENSE).
+
+ ### Deployment Geography:
+ Global <br>
+
+ ### Use Case: <br>
+ This model is intended for developers and researchers building LLMs. <br>
+
+ ### Release Date: <br>
+ Huggingface [04/25/2025] via https://huggingface.co/nvidia/OpenCodeReasoning-Nemotron-32B/ <br>
+
+ ## Reference(s):
+ [\[2504.01943\] OpenCodeReasoning: Advancing Data Distillation for Competitive Coding](https://arxiv.org/abs/2504.01943) <br>
+
+ ## Inference:
+ **Engine:** vLLM <br>
+ **Test Hardware:** NVIDIA H100-80GB <br>
+
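+ Since vLLM is the tested engine, a minimal offline-generation sketch may be useful (assuming a recent vLLM; the sampling values and tensor-parallel degree are illustrative, not the evaluation settings):
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ llm = LLM(
+     model="nvidia/OpenCodeReasoning-Nemotron-32B-IOI",
+     max_model_len=32768,
+     tensor_parallel_size=2,  # adjust to your GPU count
+ )
+ params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=28000)  # illustrative values
+
+ messages = [{"role": "user", "content": "Write a C++ program that prints the sum of the first $N$ Fibonacci numbers."}]
+ outputs = llm.chat(messages, params)
+ print(outputs[0].outputs[0].text)
+ ```
+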
+ ## Ethical Considerations:
+ NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
+
+ Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).