---
license: mit
dataset_info:
  features:
    - name: filename
      dtype: string
    - name: cuda_source
      dtype: string
    - name: cuda_host
      dtype: string
    - name: cuda_device
      dtype: string
    - name: hip_source
      dtype: string
    - name: hip_host
      dtype: string
    - name: hip_device
      dtype: string
  splits:
    - name: train
      num_bytes: 18979794237
      num_examples: 70694
    - name: stack
      num_bytes: 6087813411
      num_examples: 24170
    - name: synth
      num_bytes: 11766271412
      num_examples: 40591
    - name: bench
      num_bytes: 3676152
      num_examples: 40
  download_size: 10789629544
  dataset_size: 36837555212
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: stack
        path: data/stack-*
      - split: synth
        path: data/synth-*
      - split: bench
        path: data/bench-*
---

# 💻 CASS: CUDA–AMD Assembly and Source Mapping

CASS is the first large-scale dataset for cross-architecture GPU transpilation, providing semantically aligned CUDA–HIP source pairs and their corresponding host/device assemblies for NVIDIA (SASS) and AMD (RDNA3) platforms. It enables research in:

- 🔁 Source-to-source translation (CUDA ↔ HIP)
- ⚙️ Assembly-level translation (SASS ↔ RDNA3)
- 🧠 LLM-guided GPU code transpilation

## 📚 Dataset Structure

Each sample contains the following fields:

| Field | Description |
|---|---|
| `filename` | Sample ID or file name |
| `cuda_source` | Original CUDA source code |
| `cuda_host` | Compiled x86 host-side assembly from CUDA |
| `cuda_device` | Compiled SASS (NVIDIA GPU) device assembly |
| `hip_source` | Transpiled HIP source code (via HIPIFY) |
| `hip_host` | Compiled x86 host-side assembly from HIP |
| `hip_device` | Compiled RDNA3 (AMD GPU) device assembly |
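Every field is a string, so a loaded record behaves like a flat dict keyed by these names. A minimal sketch of the schema, using a hypothetical sample with placeholder strings standing in for real source and compiler output:

```python
# Hypothetical record illustrating the CASS schema above;
# the values are placeholders, not real compiled assembly.
sample = {
    "filename": "vector_add.cu",
    "cuda_source": "__global__ void add(float *a, float *b) { /* ... */ }",
    "cuda_host": "; x86 host-side assembly emitted when compiling the CUDA source",
    "cuda_device": "; SASS (NVIDIA GPU) device assembly",
    "hip_source": "__global__ void add(float *a, float *b) { /* ... */ }",
    "hip_host": "; x86 host-side assembly emitted when compiling the HIP source",
    "hip_device": "; RDNA3 (AMD GPU) device assembly",
}

# Aligned pairs can be selected by field name, e.g. for
# source-to-source translation (CUDA -> HIP):
cuda_hip_pair = (sample["cuda_source"], sample["hip_source"])
```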

## 🔀 Dataset Splits

| Split | Description | # Examples |
|---|---|---|
| `train` | Union of `synth`, `stack`, and `opencl` | 70,694 |
| `synth` | LLM-synthesized CUDA programs | 40,591 |
| `stack` | Scraped and filtered CUDA from StackV2 | 24,170 |
| `bench` | 40 curated eval tasks from 16 GPU domains | 40 |
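Since `train` is described as the union of `synth`, `stack`, and `opencl`, and `opencl` has no standalone split, its size can be derived from the counts above. A small sketch, assuming the union has no overlap:

```python
# Split sizes taken from the dataset card.
synth_count = 40_591
stack_count = 24_170
train_total = 70_694

# train = synth + stack + opencl, so the opencl portion is the
# remainder (assuming the three sources do not overlap).
opencl_count = train_total - synth_count - stack_count
print(opencl_count)  # 5933
```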

## 📦 How to Load

```python
from datasets import load_dataset

# 🧠 Load the full dataset (default config with all splits)
cass = load_dataset("MBZUAI/cass", name="default")

# Access a specific split
train_data = cass["train"]     # train = stack + synth + opencl
stack_data = cass["stack"]
synth_data = cass["synth"]
bench_data = cass["bench"]
```

## 📈 Benchmark and Evaluation

The `bench` split includes 40 samples spanning 16 domains, such as:

- 🧪 Physics Simulation
- 📊 Data Structures
- 📸 Image Processing
- 🧮 Linear Algebra

All samples have been manually verified for semantic equivalence across CUDA and HIP and come with executable device/host binaries.


## 📄 License

Released under the MIT license.


## 🔗 Useful Links