---
license: mit
dataset_info:
  features:
  - name: filename
    dtype: string
  - name: cuda_source
    dtype: string
  - name: cuda_host
    dtype: string
  - name: cuda_device
    dtype: string
  - name: hip_source
    dtype: string
  - name: hip_host
    dtype: string
  - name: hip_device
    dtype: string
  splits:
  - name: train
    num_bytes: 18979794237
    num_examples: 70694
  - name: stack
    num_bytes: 6087813411
    num_examples: 24170
  - name: synth
    num_bytes: 11766271412
    num_examples: 40591
  - name: bench
    num_bytes: 3676152
    num_examples: 40
  download_size: 10789629544
  dataset_size: 36837555212
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: stack
    path: data/stack-*
  - split: synth
    path: data/synth-*
  - split: bench
    path: data/bench-*
---
# 💻 CASS: CUDA–AMD Assembly and Source Mapping
[CASS](https://huggingface.co/datasets/MBZUAI/CASS) is the **first large-scale dataset** for cross-architecture GPU transpilation, providing semantically aligned CUDA–HIP source pairs and their corresponding host/device assemblies for **NVIDIA (SASS)** and **AMD (RDNA3)** platforms. It enables research in:
* 🔁 Source-to-source translation (CUDA ↔ HIP)
* ⚙️ Assembly-level translation (SASS ↔ RDNA3)
* 🧠 LLM-guided GPU code transpilation
---
## 📚 Dataset Structure
Each sample contains the following fields:
| Field | Description |
| ------------- | ------------------------------------------ |
| `filename` | Sample ID or file name |
| `cuda_source` | Original CUDA source code |
| `cuda_host` | Compiled x86 host-side assembly from CUDA |
| `cuda_device` | Compiled SASS (Nvidia GPU) device assembly |
| `hip_source` | Transpiled HIP source code (via HIPIFY) |
| `hip_host` | Compiled x86 host-side assembly from HIP |
| `hip_device` | Compiled RDNA3 (AMD GPU) device assembly |
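If you want to eyeball these fields before committing to the full download, here is a minimal sketch (assuming the `datasets` library; the small `bench` split keeps the download fast):

```python
from datasets import load_dataset

# Load only the small `bench` split to inspect the schema quickly.
bench = load_dataset("MBZUAI/cass", split="bench")

sample = bench[0]
for field in ["filename", "cuda_source", "cuda_host", "cuda_device",
              "hip_source", "hip_host", "hip_device"]:
    preview = sample[field][:80].replace("\n", " ")
    print(f"{field:12s} | {preview}")
```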
---
## 🔀 Dataset Splits
| Split | Description | # Examples |
| ------- | ----------------------------------------- | ---------- |
| `train` | Union of `synth`, `stack`, and OpenCL-derived samples (the latter have no separate split) | 70,694 |
| `synth` | LLM-synthesized CUDA programs | 40,591 |
| `stack` | Scraped and filtered CUDA from StackV2 | 24,170 |
| `bench` | 40 curated eval tasks from 16 GPU domains | 40 |
---
## 📦 How to Load
```python
from datasets import load_dataset
# 🧠 Load the full dataset (default config with all splits)
cass = load_dataset("MBZUAI/cass", name="default")
# Access a specific split
train_data = cass["train"]  # train = synth + stack + OpenCL-derived samples
stack_data = cass["stack"]
synth_data = cass["synth"]
bench_data = cass["bench"]
```
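The full download is roughly 10.8 GB (about 36.8 GB once prepared), so if you only need to iterate over examples, streaming mode avoids materializing the data locally. A minimal sketch:

```python
from datasets import load_dataset

# Stream the `stack` split without downloading the full dataset to disk.
stack_stream = load_dataset("MBZUAI/cass", split="stack", streaming=True)

for i, sample in enumerate(stack_stream):
    print(sample["filename"], len(sample["cuda_source"]), "chars of CUDA")
    if i == 4:  # stop after a few samples
        break
```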
---
## 📈 Benchmark and Evaluation
The `bench` split includes 40 samples spanning 16 domains, such as:
* 🧪 Physics Simulation
* 📊 Data Structures
* 📸 Image Processing
* 🧮 Linear Algebra
All samples have been manually verified for semantic equivalence across CUDA and HIP and come with executable device/host binaries.
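As an illustrative (not prescribed) workflow, you can dump one benchmark pair to disk for side-by-side inspection or recompilation with `nvcc`/`hipcc`; the output paths and file names below are placeholders:

```python
from pathlib import Path
from datasets import load_dataset

bench = load_dataset("MBZUAI/cass", split="bench")
sample = bench[0]

# Write the aligned CUDA and HIP sources to local files (paths are illustrative).
out_dir = Path("bench_sample_0")
out_dir.mkdir(exist_ok=True)
(out_dir / "kernel.cu").write_text(sample["cuda_source"])
(out_dir / "kernel.hip.cpp").write_text(sample["hip_source"])
print("Wrote", sorted(p.name for p in out_dir.iterdir()))
```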
---
## 📄 License
Released under the **MIT license**.
---
## 🔗 Useful Links
* 🤗 Hugging Face Collection: [CASS on Hugging Face](https://huggingface.co/collections/MBZUAI/cass-6825b5bf7414503cf16f87b2)
* 📂 Code & Tools: [GitHub Repository](https://github.com/GustavoStahl/CASS)
* 📝 Paper: [CASS on arXiv](https://arxiv.org/abs/2505.16968)