---
language:
  - en
license: unknown
size_categories:
  - n<1K
task_categories:
  - text-classification
pretty_name: Java Code Readability Merged Dataset
tags:
  - readability
  - code
  - source code
  - code readability
  - Java
features:
  - name: code_snippet
    dtype: string
  - name: score
    dtype: float
dataset_info:
  features:
    - name: code_snippet
      dtype: string
    - name: score
      dtype: float64
  splits:
    - name: train
      num_bytes: 354539
      num_examples: 421
  download_size: 139793
  dataset_size: 354539
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Java Code Readability Merged Dataset

This dataset contains 421 Java code snippets, each annotated with a readability score, aggregated from several scientific papers [1, 2, 3].

You can load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("se2p/code-readability-merged")
```

The snippets are not split into train, test, and validation sets; the whole dataset is contained in the train split:

```python
ds = ds['train']
ds_as_list = ds.to_list()  # Convert the dataset to whatever format suits you best
```
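
For example, a single entry can be inspected directly:

```python
# Look at the first entry: each record holds a Java snippet and its averaged rating.
example = ds[0]
print(example["score"])          # a float between 1.0 and 5.0
print(example["code_snippet"])   # the rated Java source code
```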

The dataset is structured as follows:

```python
{
  "code_snippet": ...,  # Java source code snippet
  "score": ...          # Readability score
}
```

The main goal of this dataset is to enable training code readability classifiers for Java source code. It is a combination and normalization of three existing datasets:

  1. Buse, R. P., & Weimer, W. R. (2009). Learning a metric for code readability. IEEE Transactions on software engineering, 36(4), 546-558.
  2. Dorn, J. (2012). A General Software Readability Model.
  3. Scalabrino, S., Linares‐Vásquez, M., Oliveto, R., & Poshyvanyk, D. (2018). A comprehensive model for code readability. Journal of Software: Evolution and Process, 30(6), e1958.

The raw datasets can be downloaded here.

## Dataset Details

### Dataset Description

- Curated by: Raymond P. L. Buse, Jonathan Dorn, Simone Scalabrino
- Shared by: Lukas Krodinger
- Language(s) (NLP): Java
- License: Unknown

## Uses

The dataset can be used for training Java code readability classifiers.
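
If you want a classification target rather than a continuous score, one straightforward option is to threshold the averaged score. The cutoff of 3.5 below is an arbitrary choice for illustration and not part of the dataset:

```python
from datasets import load_dataset

ds = load_dataset("se2p/code-readability-merged")["train"]

# Turn the continuous readability score into a binary label.
# The threshold is a hypothetical choice; pick one that suits your task.
THRESHOLD = 3.5
ds = ds.map(lambda example: {"label": int(example["score"] >= THRESHOLD)})

print(ds[0]["score"], ds[0]["label"])
```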

## Dataset Structure

Each entry of the dataset consists of a `code_snippet` and a `score`. The `code_snippet` (string) is a Java snippet that was rated by multiple participants in a study. Participants rated it on a five-point Likert scale, with 1 being very unreadable and 5 being very readable. The `score` (float) is the average of all participants' ratings and ranges from 1.0 (very unreadable) to 5.0 (very readable).

## Dataset Creation

### Curation Rationale

Datasets are of high importance for advancing code readability classification. As a first step, we provide a merged and normalized version of existing datasets on Hugging Face, which makes this data easier to access and use.

### Source Data

The data originates from the studies by Buse, Dorn, and Scalabrino.

Buse conducted a survey with 120 computer science students (17 from first-year courses, 63 from second-year courses, 30 from third- or fourth-year courses, and 10 graduates) on 100 code snippets. The code snippets were generated from five open-source Java projects.

Dorn conducted a survey with 5,000 participants (1,800 with industry experience) on 360 code snippets, of which 121 are Java code snippets. The snippets were drawn from ten open-source projects in the SourceForge repository (as of March 15, 2012).

Scalabrino conducted a survey with 9 computer science students on 200 new code snippets. The snippets were selected from four open source Java projects: jUnit, Hibernate, jFreeChart and ArgoUML.

#### Data Collection and Processing

The dataset was preprocessed by averaging the readability rating for each code snippet. The code snippets and ratings were then merged from the three sources.

Buse, Dorn, and Scalabrino each selected their code snippets based on different criteria, and their surveys had different numbers of participants. One could argue that a code snippet rated by more participants has a more reliable readability score and is therefore more valuable than one with fewer ratings. However, for simplicity these differences are ignored.

Other than the selection (and generation) done by the original data source authors, no further processing is applied to the data.
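
The preprocessing scripts themselves are not part of this card, but the averaging and merging step can be pictured roughly as follows. The file names and column layout are assumptions for illustration; the original sources use different formats and rating scales, which have to be normalized to the 1-5 range first.

```python
import pandas as pd

# Hypothetical per-participant rating tables: one row per (snippet, rater) pair,
# with ratings already normalized to the 1-5 Likert scale.
sources = ["buse_ratings.csv", "dorn_ratings.csv", "scalabrino_ratings.csv"]

frames = []
for path in sources:
    ratings = pd.read_csv(path)  # assumed columns: code_snippet, rating
    # Average the ratings of all participants for each snippet.
    averaged = ratings.groupby("code_snippet", as_index=False)["rating"].mean()
    frames.append(averaged.rename(columns={"rating": "score"}))

# Concatenate the three sources into one table with code_snippet and score columns.
merged = pd.concat(frames, ignore_index=True)
merged.to_csv("code-readability-merged.csv", index=False)
```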

#### Who are the source data producers?

The source data producers are the people who wrote the open-source Java projects the snippets were taken from, as well as the study participants, who were mostly computer science students.

### Personal and Sensitive Information

The ratings of the code snippets are anonymized and averaged. Thus, no personal or sensitive information is contained in this dataset.

## Bias, Risks, and Limitations

The dataset is very small. The code snippets were rated mostly by computer science students, who are not representative of Java programmers in general.

### Recommendations

The dataset should be used to train small Java code readability classifiers.
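
Given the small size, it may be advisable to evaluate with cross-validation instead of a single fixed split. The sketch below only shows how such splits could be produced with scikit-learn; the fold count and seed are arbitrary choices, not prescribed by the dataset.

```python
from datasets import load_dataset
from sklearn.model_selection import KFold

ds = load_dataset("se2p/code-readability-merged")["train"]

# 5-fold cross-validation over the 421 snippets (example setup only).
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kfold.split(list(range(len(ds))))):
    train_ds = ds.select(train_idx)
    test_ds = ds.select(test_idx)
    print(f"fold {fold}: {len(train_ds)} train / {len(test_ds)} test snippets")
```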

## Citation

  1. Buse, R. P., & Weimer, W. R. (2009). Learning a metric for code readability. IEEE Transactions on software engineering, 36(4), 546-558.
  2. Dorn, J. (2012). A General Software Readability Model.
  3. Scalabrino, S., Linares‐Vásquez, M., Oliveto, R., & Poshyvanyk, D. (2018). A comprehensive model for code readability. Journal of Software: Evolution and Process, 30(6), e1958.

```bibtex
@article{buse2009learning,
  title={Learning a metric for code readability},
  author={Buse, Raymond PL and Weimer, Westley R},
  journal={IEEE Transactions on software engineering},
  volume={36},
  number={4},
  pages={546--558},
  year={2009},
  publisher={IEEE}
}

@inproceedings{dorn2012general,
  title={A General Software Readability Model},
  author={Jonathan Dorn},
  year={2012},
  url={https://api.semanticscholar.org/CorpusID:14098740}
}

@article{scalabrino2018comprehensive,
  title={A comprehensive model for code readability},
  author={Scalabrino, Simone and Linares-V{\'a}squez, Mario and Oliveto, Rocco and Poshyvanyk, Denys},
  journal={Journal of Software: Evolution and Process},
  volume={30},
  number={6},
  pages={e1958},
  year={2018},
  publisher={Wiley Online Library}
}
```

## Dataset Card Authors

Lukas Krodinger, Chair of Software Engineering II, University of Passau.

## Dataset Card Contact

Feel free to contact me via email if you have any questions or remarks.