ludgerpaehler committed "Big update of the README" (#1)

README.md
---
annotations_creators: []
language:
- code
license: cc-by-4.0
multilinguality:
- multilingual
pretty_name: ComPile
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
---

# Dataset Card for ComPile: A Large IR Dataset from Production Sources

## Dataset Description

- **Homepage:** https://llvm-ml.github.io/ComPile/
- **Paper:** https://arxiv.org/abs/2309.15432
- **Leaderboard:** N/A

### Changelog

| Release | Programming Languages | Description |
|---|---|---|
| v1.0 | C/C++, Rust, Swift, Julia | Fine-tuning-scale dataset of 564GB of deduplicated LLVM IR |

### Dataset Summary

ComPile contains over 500GB of permissively-licensed source code compiled to [LLVM](https://llvm.org) intermediate representation (IR), covering C/C++, Rust, Swift, and Julia. The dataset was created by hooking into LLVM code generation, either through the language's package manager or the compiler directly, to extract intermediate representations from production-grade programs using our [dataset collection utility for the LLVM compilation infrastructure](https://doi.org/10.5281/zenodo.10155761).

### Languages

The dataset contains **5 programming languages** as of v1.0:

```
"c++", "c", "rust", "swift", "julia"
```

### Dataset Usage

To use ComPile, we recommend HuggingFace's [datasets library](https://huggingface.co/docs/datasets/index). To load the dataset, for example:

```python
from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train')
```

By default this will download the entire 550GB+ dataset and cache it locally in the directory specified by the environment variable `HF_DATASETS_CACHE`, which defaults to `~/.cache/huggingface`. To load the dataset in a streaming format, where the data is not saved locally:

```python
ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
```

For further arguments of `load_dataset`, please take a look at the `loading a dataset` [documentation](https://huggingface.co/docs/datasets/load_hub) and the `streaming` [documentation](https://huggingface.co/docs/datasets/stream). Bear in mind that streaming is significantly slower than loading the dataset from local storage. For experimentation that requires more performance but might not require the whole dataset, you can also specify a portion of the dataset to download. For example, the following code will only download the first 10% of the dataset:
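
```python
# A minimal sketch using datasets' split-slicing syntax to request
# only the first 10% of the train split
ds = load_dataset('llvm-ml/ComPile', split='train[:10%]')
```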

Once loaded, individual rows can be accessed by iteration or by index:

```python
# First row via iteration (also works for streaming datasets)
next(iter(ds))
# Random access by index (non-streaming only)
ds[0]
```

Filtering and map operations can be performed with the primitives available within the HuggingFace `datasets` library.
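
For instance, a rough sketch (assuming the `language` and `content` columns described below) that keeps only Rust modules and adds a bitcode-size column:

```python
# Keep only modules that were compiled from Rust sources
rust_ds = ds.filter(lambda row: row['language'] == 'rust')

# Record the size of each module's bitcode as a new column
rust_ds = rust_ds.map(lambda row: {'size': len(row['content'])})
```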

## Dataset Structure

### Data Fields

Each row in the dataset consists of an individual LLVM-IR module along with some metadata. There are six columns associated with each row:
- `content` (string): The raw bitcode that composes the module. This can be written to a `.bc` file and manipulated using the standard LLVM utilities, or passed in directly through stdin if using something like Python's `subprocess` (see the first sketch after this list).
- `license_expression` (string): The SPDX expression describing the license of the project that the module came from.
- `license_source` (string): How the `license_expression` was determined. This might indicate an individual package ecosystem (e.g. `spack`), license detection (e.g. `go_license_detector`), or manual curation (`manual`).
- `license_files`: An array of license file names. These names map to licenses included in `/licenses/licenses-0.parquet` (see the second sketch after this list).
- `package_source` (string): Information on the package that the module was sourced from. This is typically a link to a tar archive or git repository from which the project was built, but might also contain a mapping to a specific package ecosystem that provides the source, such as Spack.
- `language` (string): The source language that the module was compiled from.
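
As a brief sketch of the bitcode handling described for `content` above (assumptions: `llvm-dis` from an LLVM installation is on `PATH`, and `content` arrives as raw bytes):

```python
import subprocess

from datasets import load_dataset

ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
module = next(iter(ds))

# Write the module to a .bc file for use with the standard LLVM utilities
with open('module.bc', 'wb') as f:
    f.write(module['content'])

# Or pipe the bitcode through llvm-dis to obtain textual IR on stdout:
# '-' reads from stdin, '-o -' writes to stdout
result = subprocess.run(['llvm-dis', '-', '-o', '-'],
                        input=module['content'], capture_output=True)
print(result.stdout.decode())
```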
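
Similarly, a hedged sketch of fetching the license texts that the `license_files` names map into (assuming the parquet file sits at `licenses/licenses-0.parquet` inside this dataset repository, per the path above):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download the license-text table from the dataset repository
path = hf_hub_download(repo_id='llvm-ml/ComPile',
                       filename='licenses/licenses-0.parquet',
                       repo_type='dataset')
licenses = pd.read_parquet(path)
```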

## Dataset Size