---
license: cc-by-4.0
---

# ComPile: A Large IR Dataset from Production Sources

## About

Utilizing the LLVM compiler infrastructure shared by a number of languages, ComPile is a large dataset of
LLVM IR. The dataset is generated from programming languages built on the shared LLVM infrastructure, including Rust,
Swift, Julia, and C/C++, by hooking into LLVM code generation either through the language's package manager or the
compiler directly to extract the dataset of intermediate representations from production-grade programs using our
[dataset collection utility for the LLVM compilation infrastructure](https://doi.org/10.5281/zenodo.10155761).

For an in-depth look at the statistical properties of the dataset, please have a look at our [arXiv preprint](https://arxiv.org/abs/2309.15432).

## Usage

Using ComPile is relatively simple with HuggingFace's `datasets` library. To load the dataset, you can simply
run the following in a Python interpreter or within a Python script:

```python
from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train')
```

While this will just work, the download will take quite a while, as `datasets` by default will download
all 550GB+ of the dataset and cache it locally. Note that the data will be placed in the directory
specified by the environment variable `HF_DATASETS_CACHE`, which defaults to `~/.cache/huggingface`.

You can also load the dataset in streaming mode, where no data is saved locally:

```python
ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
```

This makes experimentation much easier, as no large upfront time investment is required, but it is
significantly slower than loading the dataset from local disk. For experimentation that
requires more performance but might not require the whole dataset, you can also specify a portion
of the dataset to download. For example, the following code will download only the first 10%
of the dataset:

```python
ds = load_dataset('llvm-ml/ComPile', split='train[:10%]')
```

Once the dataset has been loaded, the individual module files can be accessed by iterating through
the dataset or by accessing specific indices:

```python
# We can iterate through the dataset
next(iter(ds))
# We can also access modules at specific indices
ds[0]
```

Filtering and map operations can also be efficiently applied using primitives available within the
HuggingFace `datasets` library. More documentation is available [here](https://huggingface.co/docs/datasets/index).

## Dataset Format

Each row in the dataset consists of an individual LLVM-IR module along with some metadata. There are
six columns associated with each row:

1. `content` - This column contains the raw bitcode that composes the module. This can be written to a `.bc`
file and manipulated using the standard LLVM utilities, or passed in directly through stdin if using something
like Python's `subprocess`.
2. `license_expression` - This column contains the SPDX expression describing the license of the project that the
module came from.
3. `license_source` - This column describes the way the `license_expression` was determined. This might indicate
an individual package ecosystem (e.g. `spack`), license detection (e.g. `go_license_detector`), or might also indicate
manual curation (`manual`).
4. `license_files` - This column contains an array of license files. These file names map to licenses included in
`/licenses/licenses-0.parquet`.
5. `package_source` - This column contains information on the package that the module was sourced from. This is
typically a link to a tar archive or git repository from which the project was built, but might also contain a
mapping to a specific package ecosystem that provides the source, such as Spack.
6. `language` - This column indicates the source language that the module was compiled from.
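Both ways of consuming the `content` column can be sketched as follows (the `row` value here is a hypothetical stand-in for one real row such as `ds[0]`, and the `llvm-dis` invocation assumes the LLVM tools are installed):

```python
# Hypothetical stand-in for one row; in practice: row = ds[0]
row = {'content': b'BC\xc0\xde'}  # a real row holds a complete bitcode module

# Option 1: write the bitcode to a .bc file for the standard LLVM utilities
with open('module.bc', 'wb') as f:
    f.write(row['content'])

# Option 2: pipe the bitcode to an LLVM tool over stdin, e.g. disassemble it
# to textual IR (requires the LLVM tools to be installed and on PATH):
# import subprocess
# ir = subprocess.run(['llvm-dis', '-', '-o', '-'], input=row['content'],
#                     capture_output=True).stdout.decode()
```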

## Licensing

The individual modules within the dataset are subject to the licenses of the projects that they come from. License
information is available in each row, including the SPDX license expression, the license files, and also a link to
the package source where license information can be further validated.

The curation of these modules is licensed under a CC-BY-4.0 license.
|