metadata
license: unknown
task_categories:
  - text-classification
language:
  - en
tags:
  - readability
  - code
  - source code
  - code readability
  - Java
pretty_name: Java Code Readability Merged Dataset
size_categories:
  - n<1K
features:
  - name: code_snippet
    dtype: string
  - name: score
    dtype: float

Java Code Readability Merged Dataset

This dataset contains 421 Java code snippets, each paired with a readability score.

You can download the dataset using the Hugging Face datasets library:

from datasets import load_dataset
ds = load_dataset("se2p/code-readability-merged")

The dataset is structured as follows:

{
  "code_snippet": ...,  # Java source code snippet.
  "score": ...          # Readability score
}
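
For illustration, an individual entry can be inspected as follows. This is a minimal sketch that assumes the data is exposed under the default "train" split created by load_dataset:

from datasets import load_dataset

ds = load_dataset("se2p/code-readability-merged")
example = ds["train"][0]                  # first entry of the (assumed) default "train" split
print(example["score"])                   # a float between 1.0 and 5.0
print(example["code_snippet"][:200])      # beginning of the rated Java snippet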

The main goal of this repository is to enable training code readability classifiers for Java source code. The dataset is a combination and normalization of three existing datasets:

  • Buse, Raymond PL, and Westley R. Weimer. "Learning a metric for code readability." IEEE Transactions on software engineering 36.4 (2009): 546-558.
  • Dorn, Jonathan. “A General Software Readability Model.” (2012).
  • Scalabrino, Simone, et al. "Automatically assessing code understandability: How far are we?." 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2017.

The raw datasets can be downloaded here.

Dataset Details

Dataset Description

  • Curated by: Raymond P.L. Buse, Jonathan Dorn, Simone Scalabrino
  • Shared by: Lukas Krodinger
  • Language(s) (NLP): Java
  • License: Unknown

Uses

The dataset can be used for training Java code readability classifiers.
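
As an illustration only (not part of the dataset itself), a minimal baseline could be trained with scikit-learn. The split ratio, feature choice, and regression model below are assumptions, and the readability score is predicted as a continuous value rather than a class:

from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Load the merged dataset (assumed to be exposed as a single "train" split).
ds = load_dataset("se2p/code-readability-merged")["train"]

# Illustrative 80/20 split; the dataset itself ships without predefined splits.
X_train, X_test, y_train, y_test = train_test_split(
    ds["code_snippet"], ds["score"], test_size=0.2, random_state=42
)

# Character n-grams are a simple, language-agnostic feature choice for source code.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4), max_features=5000)
model = Ridge()
model.fit(vectorizer.fit_transform(X_train), y_train)

predictions = model.predict(vectorizer.transform(X_test))
print("Mean absolute error:", mean_absolute_error(y_test, predictions))

A binary classifier in the sense of the text-classification task category could then be obtained by thresholding the predicted (or the original) score, e.g. treating snippets above a chosen cut-off as readable.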

Dataset Structure

Each entry of the dataset consists of a code_snippet and a score. The code_snippet (string) is a code snippet that was rated by multiple participants in a study. Participants rated each snippet on a five-point Likert scale, with 1 being very unreadable and 5 being very readable. The score (float) is the rating averaged over all participants, ranging from 1.0 (very unreadable) to 5.0 (very readable).

The snippets are not split into train, test, or validation sets; an example of creating a custom split is shown below.
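
Since no official split is provided, users can create their own, for example with the datasets library (the 80/20 ratio and seed below are arbitrary choices):

from datasets import load_dataset

ds = load_dataset("se2p/code-readability-merged")["train"]
splits = ds.train_test_split(test_size=0.2, seed=42)   # arbitrary 80/20 split
train_ds, test_ds = splits["train"], splits["test"]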

Dataset Creation

Curation Rationale

To advance code readability classification, creating datasets in this research field is of high importance. As a first step, we provide a merged and normalized version of existing datasets on Hugging Face. This makes the existing data easier to access and use.

Source Data

The source of the data are the papers from Buse, Dorn and Scalabrino.

Buse conducted a survey with 120 computer science students (17 from first-year courses, 63 from second-year courses, 30 from third- or fourth-year courses, and 10 graduates) on 100 code snippets. The code snippets were generated from five open source Java projects.

Dorn conducted a survey with 5,000 participants (1,800 with industry experience) on 360 code snippets, of which 121 are Java code snippets. The snippets were drawn from ten open source projects in the SourceForge repository (as of March 15, 2012).

Scalabrino conducted a survey with 9 computer science students on 200 new code snippets. The snippets were selected from four open source Java projects: jUnit, Hibernate, jFreeChart and ArgoUML.

Data Collection and Processing

The dataset was preprocessed by averaging the readability ratings for each code snippet. The code snippets and averaged scores were then merged from the three sources.
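
The averaging step can be illustrated as follows; note that the column names snippet_id and rating are hypothetical and do not reflect the schema of the original raw files:

import pandas as pd

# Hypothetical raw ratings: one row per (snippet, participant) pair.
raw = pd.DataFrame({
    "snippet_id": [0, 0, 0, 1, 1],
    "rating":     [4, 5, 3, 2, 1],
})

# Average the ratings per snippet to obtain a single readability score.
scores = raw.groupby("snippet_id")["rating"].mean().rename("score")
print(scores)  # snippet 0 -> 4.0, snippet 1 -> 1.5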

Each of the three authors, Buse, Dorn, and Scalabrino, selected their code snippets based on different criteria, and their surveys had different numbers of participants. One could argue that a code snippet rated by more participants has a more reliable readability score and is therefore more valuable than one with fewer ratings. However, for simplicity these differences are ignored.

Other than the selection (and generation) done by the original authors and the averaging described above, no further processing was applied to the data.

Who are the source data producers?

The source data producers are the developers who wrote the open source Java projects the snippets were taken from, as well as the study participants, who were mostly computer science students.

Personal and Sensitive Information

The ratings of the code snippets are anonymized and averaged. Thus, no personal or sensitive information is contained in this dataset.

Bias, Risks, and Limitations

The dataset is very small (421 snippets). The code snippets were rated mostly by computer science students, who are not representative of Java programmers in general.

Recommendations

Due to its small size, the dataset is best suited for training small Java code readability classifiers.

Citation

BibTeX:

@article{buse2009learning,
    title={Learning a metric for code readability},
    author={Buse, Raymond PL and Weimer, Westley R},
    journal={IEEE Transactions on software engineering},
    volume={36},
    number={4},
    pages={546--558},
    year={2009},
    publisher={IEEE}
}

@inproceedings{dorn2012general,
  title={A General Software Readability Model},
  author={Jonathan Dorn},
  year={2012},
  url={https://api.semanticscholar.org/CorpusID:14098740}
}

@inproceedings{scalabrino2016improving,
    title={Improving code readability models with textual features},
    author={Scalabrino, Simone and Linares-Vasquez, Mario and Poshyvanyk, Denys and Oliveto, Rocco},
    booktitle={2016 IEEE 24th International Conference on Program Comprehension (ICPC)},
    pages={1--10},
    year={2016},
    organization={IEEE}
}

APA:

  • Buse, Raymond PL, and Westley R. Weimer. "Learning a metric for code readability." IEEE Transactions on software engineering 36.4 (2009): 546-558.
  • Dorn, Jonathan. “A General Software Readability Model.” (2012).
  • Scalabrino, Simone, et al. "Automatically assessing code understandability: How far are we?." 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2017.

Glossary

Readability: We define readability as the subjective impression of how difficult a piece of code is to understand.

Dataset Card Authors

Lukas Krodinger, Chair of Software Engineering II, University of Passau.

Dataset Card Contact

Feel free to contact me via e-mail if you have any questions or remarks.