---
language:
- en
license: mit
size_categories:
- 1M<n<10M
task_categories:
- text-generation
paperswithcode_id: disl
pretty_name: 'DISL: Fueling Research with A Large Dataset of Solidity Smart Contracts'
configs:
- config_name: decomposed
data_files: data/decomposed/*
- config_name: raw
data_files: data/raw/*
tags:
- code
- solidity
- smart contracts
- webdataset
---

# DISL
The DISL dataset is a collection of 514,506 unique Solidity files that have been deployed to the Ethereum mainnet. It addresses the need for a large and diverse dataset of real-world smart contracts, and serves as a resource for developing machine learning systems and for benchmarking software engineering tools designed for smart contracts.
## Content
- The `raw` subset contains the full contract source code and is not deduplicated; it has 3,298,271 smart contracts.
- The `decomposed` subset contains individual Solidity files. It is derived from `raw` and deduplicated using Jaccard similarity with a threshold of 0.9 (see the sketch after this list); it has 514,506 Solidity files.
- The cutoff date is January 15, 2024, at approximately block 19,010,000.
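
To illustrate the deduplication criterion, here is a minimal sketch of Jaccard-similarity deduplication over token sets. The function names and the naive pairwise loop are illustrative assumptions, not the actual DISL pipeline (which would likely need approximate techniques such as MinHash to scale to millions of files):

```python
# Hypothetical sketch of Jaccard-based deduplication; not the actual DISL pipeline.

def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two Solidity files."""
    tokens_a, tokens_b = set(a.split()), set(b.split())
    if not tokens_a and not tokens_b:
        return 1.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

def deduplicate(files: list[str], threshold: float = 0.9) -> list[str]:
    """Keep a file only if it is less than `threshold` similar to every kept file."""
    kept: list[str] = []
    for source in files:
        if all(jaccard_similarity(source, other) < threshold for other in kept):
            kept.append(source)
    return kept
```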
If you use DISL, please cite the following tech report:
```bibtex
@techreport{disl2403.16861,
  title = {DISL: Fueling Research with A Large Dataset of Solidity Smart Contracts},
  year = {2024},
  author = {Gabriele Morello and Mojtaba Eshghie and Sofia Bobadilla and Martin Monperrus},
  url = {http://arxiv.org/pdf/2403.16861},
  number = {2403.16861},
  institution = {arXiv},
}
```
- Curated by: Gabriele Morello
## Instructions to explore the dataset
```python
import random

from datasets import load_dataset

# Load the raw subset
dataset = load_dataset("ASSERT-KTH/DISL", "raw")

# OR load the decomposed subset
dataset = load_dataset("ASSERT-KTH/DISL", "decomposed")

# Number of rows and columns
num_rows = len(dataset["train"])
num_columns = len(dataset["train"].column_names)

# Pick a random row
random_row = random.choice(dataset["train"])

# Print the source code of a random row
random_sc = random.choice(dataset["train"])["source_code"]
print(random_sc)
```
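
Since the `raw` subset has over three million rows, it can be more practical to stream it rather than download it in full. A minimal sketch using the `datasets` streaming mode (`take` is available on streamed datasets in recent versions of the library):

```python
from datasets import load_dataset

# Stream the raw subset instead of downloading it entirely
streamed = load_dataset("ASSERT-KTH/DISL", "raw", split="train", streaming=True)

# Inspect the first few rows without materializing the dataset
for row in streamed.take(3):
    print(row["source_code"][:200])
```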