---
license: other
license_name: deepseek
license_link: >-
  https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct/blob/main/LICENSE
task_categories:
- text-generation
language:
- en
tags:
- code
---

## LeetCode Contest Benchmark

A benchmark for evaluating Code LLMs, proposed by [DeepSeek-Coder](https://arxiv.org/abs/2401.14196), consisting of recent LeetCode contest problems of varying difficulty.
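The problems ship as a JSON Lines file. A minimal loading sketch with the `datasets` library is shown below; the file name `20240121-Jul.jsonl` is an assumption mirroring the output naming in the Usage section, so inspect the columns after loading to see the actual schema.

```python
# Minimal sketch: load the benchmark problems with the `datasets` library.
# The file name is an assumption (it mirrors the "20240121-Jul" tag used in
# the Usage section below); adjust it to the file actually shipped here.
from datasets import load_dataset

problems = load_dataset("json", data_files="20240121-Jul.jsonl", split="train")

print(problems.column_names)  # inspect the real schema
print(problems[0])            # first problem record
```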

## Usage

```bash
git clone https://github.com/deepseek-ai/DeepSeek-Coder.git
cd DeepSeek-Coder/Evaluation/LeetCode

# Set the model or path here
MODEL="deepseek-ai/deepseek-coder-7b-instruct"

python vllm_inference.py --model_name_or_path $MODEL --saved_path output/20240121-Jul.deepseek-coder-7b-instruct.jsonl

python evaluate_leetcode.py --generation_path output/20240121-Jul.deepseek-coder-7b-instruct.jsonl --result_path output/20240121-Jul.deepseek-coder-7b-instruct.result.jsonl
```
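The first command generates one completion per problem with vLLM; the second scores those generations and writes a per-problem result file. As a rough illustration of aggregating such a result file into pass@1, the sketch below assumes `difficulty` and `passed` fields in each record, which is not guaranteed to match the schema `evaluate_leetcode.py` actually emits.

```python
# Illustrative only: aggregate a per-problem result JSONL into pass@1 scores.
# The field names ("difficulty", "passed") are assumptions about the output
# schema, not the documented format of evaluate_leetcode.py.
import json
from collections import defaultdict

totals, passed = defaultdict(int), defaultdict(int)
with open("output/20240121-Jul.deepseek-coder-7b-instruct.result.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        level = rec.get("difficulty", "all")
        totals[level] += 1
        passed[level] += int(bool(rec.get("passed")))

for level, total in totals.items():
    print(f"{level}: pass@1 = {passed[level] / total:.2%}")
```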

### Citation

```bibtex
@article{guo2024deepseekcoder,
  title   = {DeepSeek-Coder: When the Large Language Model Meets Programming - The Rise of Code Intelligence},
  author  = {Daya Guo and Qihao Zhu and Dejian Yang and Zhenda Xie and Kai Dong and Wentao Zhang and Guanting Chen and Xiao Bi and Y. Wu and Y. K. Li and Fuli Luo and Yingfei Xiong and Wenfeng Liang},
  year    = {2024},
  journal = {arXiv preprint arXiv:2401.14196}
}
```