---
dataset_info:
  features:
  - name: problem_id
    dtype: string
  - name: folder_path
    dtype: string
  splits:
  - name: train
    num_bytes: 1389
    num_examples: 28
  download_size: 2063
  dataset_size: 1389
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

> [!WARNING]
> **Removed samples**  
> We have removed the samples `asyn_fifo` and `multi_pipe_4bit` due to incompatibilities with Verilator; take this into account. If you need the full dataset of 50 samples, refer to the original repository.
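The filtering described above can be reproduced over records that follow this dataset's schema (`problem_id`, `folder_path`). A minimal sketch; the example record values are illustrative, not taken from the dataset:

```python
# Samples removed from this dataset due to Verilator incompatibilities.
REMOVED = {"asyn_fifo", "multi_pipe_4bit"}

def filter_samples(records):
    """Drop the records whose problem_id matches a removed sample."""
    return [r for r in records if r["problem_id"] not in REMOVED]

# Illustrative records only (hypothetical values following the schema).
records = [
    {"problem_id": "asyn_fifo", "folder_path": "asyn_fifo"},
    {"problem_id": "adder_8bit", "folder_path": "adder_8bit"},
    {"problem_id": "multi_pipe_4bit", "folder_path": "multi_pipe_4bit"},
]

kept = filter_samples(records)
print([r["problem_id"] for r in kept])  # → ['adder_8bit']
```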

Disclaimer: I am not the original author; this dataset was uploaded here only for convenience. It contains only the basename of each folder and the relative paths, to avoid overhead. Please refer to the original repository for further information: https://github.com/hkust-zhiyao/RTLLM

Note that the `risc_cpu` experiment is missing, as it was not provided by the authors.

```bibtex
@inproceedings{lu2024rtllm,
  author={Lu, Yao and Liu, Shang and Zhang, Qijun and Xie, Zhiyao},
  booktitle={2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC)},
  title={RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model},
  year={2024},
  pages={722-727},
  organization={IEEE}
}

@inproceedings{liu2024openllm,
  title={OpenLLM-RTL: Open Dataset and Benchmark for LLM-Aided Design RTL Generation (Invited)},
  author={Liu, Shang and Lu, Yao and Fang, Wenji and Li, Mengming and Xie, Zhiyao},
  booktitle={Proceedings of 2024 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)},
  year={2024},
  organization={ACM}
}
```

---

This dataset was uploaded to support reproducibility of the benchmarks implemented in TuRTLe, a framework for assessing LLMs across key RTL generation tasks.
- GitHub repository: https://github.com/HPAI-BSC/TuRTLe
- arXiv preprint: https://arxiv.org/abs/2504.01986