---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: type
      dtype: string
    - name: language
      dtype: string
    - name: fragments
      list:
        - name: category
          dtype: string
        - name: position
          sequence: int64
        - name: value
          dtype: string
    - name: id
      dtype: int64
  splits:
    - name: test
      num_bytes: 22496122
      num_examples: 12099
  download_size: 9152605
  dataset_size: 22496122
language:
  - code
task_categories:
  - token-classification
extra_gated_prompt: >-
  ## Terms of Use for the dataset


  This is an annotated dataset for Personally Identifiable Information (PII) in
  code. We ask that you read and agree to the following Terms of Use before
  using the dataset and fill out this
  [form](https://docs.google.com/forms/d/e/1FAIpQLSfiWKyBB8-PxOCLo-KMsLlYNyQNJEzxJw0gcUAUHT3UY848qA/viewform):

  **Incomplete answers to the form will result in the request for access being
  ignored, with no follow-up actions by BigCode.**

  1. You agree that you will not use the PII dataset for any purpose other than
  training or evaluating models for PII removal from datasets.

  2. You agree that you will not share the PII dataset or any modified versions
  for whatever purpose.

  3. Unless required by applicable law or agreed to in writing, the dataset is
  provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
  either express or implied, including, without limitation, any warranties or
  conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
  PARTICULAR PURPOSE. You are solely responsible for determining the
  appropriateness of using the dataset, and assume any risks associated with
  your exercise of permissions under these Terms of Use.

  4. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
  DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
  OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE
  OR OTHER DEALINGS IN THE DATASET.
extra_gated_fields:
  Email: text
  I have read the License and agree with its terms: checkbox
---

# PII dataset

## Dataset description

This is an annotated dataset for Personally Identifiable Information (PII) in code. The target entities are: Names, Usernames, Emails, IP addresses, Keys, Passwords, and IDs. The annotation was carried out by 1,399 crowd-workers from 35 countries on Toloka. The dataset consists of 12,099 samples of ~50 lines of code each, across 31 programming languages. You can also find a PII detection model that we trained on this dataset at bigcode-pii-model.

## Dataset Structure

You can load the dataset with:

```python
from datasets import load_dataset

ds = load_dataset("bigcode/bigcode-pii-dataset", use_auth_token=True)
ds
```
```
DatasetDict({
    test: Dataset({
        features: ['text', 'type', 'language', 'fragments', 'id'],
        num_rows: 12099
    })
})
```

It has the following data fields:

- `text`: the code snippet
- `type`: indicates whether the file was pre-filtered with regexes (before annotation we selected 7,100 files that were pre-filtered as positive for PII with regexes, and 5,199 files at random)
- `language`: the programming language
- `fragments`: the annotated PII fragments, each with:
  - `category`: the PII category
  - `position`: start and end character offsets in `text`
  - `value`: the PII string
## Statistics

The figure below shows the distribution of programming languages in the dataset:

The following table shows the count of PII entities per class, as well as annotation quality (precision and recall) after manual inspection of 300 diverse files from the dataset:

| Entity           | Count | Precision | Recall |
|------------------|-------|-----------|--------|
| IP_ADDRESS       | 2526  | 85%       | 97%    |
| KEY              | 308   | 91%       | 78%    |
| PASSWORD         | 598   | 91%       | 86%    |
| ID               | 1702  | 53%       | 51%    |
| EMAIL            | 5470  | 99%       | 97%    |
| EMAIL_EXAMPLE    | 1407  |           |        |
| EMAIL_LICENSE    | 3141  |           |        |
| NAME             | 2477  | 89%       | 94%    |
| NAME_EXAMPLE     | 318   |           |        |
| NAME_LICENSE     | 3105  |           |        |
| USERNAME         | 780   | 74%       | 86%    |
| USERNAME_EXAMPLE | 328   |           |        |
| USERNAME_LICENSE | 503   |           |        |
| AMBIGUOUS        | 287   |           |        |

`AMBIGUOUS` and `ID` were not used in training our NER model for PII detection.
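Since those two categories were excluded from NER training, a preprocessing pass along the following lines could drop them before building training labels. This is a sketch over a toy record, not the authors' actual training code:

```python
# Categories excluded from NER training per the dataset card.
EXCLUDED = {"AMBIGUOUS", "ID"}

def drop_excluded(example):
    """Remove fragments whose category was not used for NER training."""
    example["fragments"] = [
        f for f in example["fragments"] if f["category"] not in EXCLUDED
    ]
    return example

# Toy record for illustration (not a real dataset row).
row = {
    "text": "user_id = 4821",
    "fragments": [{"category": "ID", "position": [10, 14], "value": "4821"}],
}
print(drop_excluded(row)["fragments"])  # -> []
```

With the Hugging Face `datasets` library, the same function could be applied across the split with `ds["test"].map(drop_excluded)`.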

## Dataset Creation

We selected the annotation samples from the deduplicated version of The Stack, a collection of code from permissively licensed open repositories on GitHub. To increase the representation of rare PII types such as keys and IP addresses, we pre-filtered 7,100 files from a larger sample. This pre-filtering was carried out using the detect-secrets tool with all default plugins activated, in addition to regular expressions for detecting emails and IPv4/IPv6 addresses. To avoid introducing bias, the remaining 5,100 files were randomly sampled from the dataset without pre-filtering.
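The exact regexes are not reproduced in this card; as a rough illustration, a pre-filter of this shape could flag snippets that likely contain emails or IPv4 addresses. The patterns below are simplified assumptions, far looser than what detect-secrets plugins and production regexes would use:

```python
import re

# Simplified, illustrative patterns (assumed, not the project's actual regexes).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def likely_contains_pii(code: str) -> bool:
    """Rough positive/negative pre-filter for a code snippet."""
    return bool(EMAIL_RE.search(code) or IPV4_RE.search(code))

print(likely_contains_pii("server = '192.168.0.1'"))  # True
print(likely_contains_pii("x = compute(y)"))          # False
```

Such a filter trades precision for recall: it over-triggers on version strings and example addresses, which is acceptable for selecting candidate files to send to human annotators.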

We then annotated the dataset on the Toloka platform with 1,399 crowd-workers from 35 countries. To ensure that crowd-workers received fair compensation, we established an hourly pay rate of $7.30, taking into consideration minimum-wage differences across countries and their corresponding purchasing power. We limited annotation eligibility to countries where an hourly rate of $7.30 was equivalent to the highest minimum wage in the US ($16.50) in terms of purchasing power parity.

## Considerations for Using the Data

When using this dataset, please be mindful of the data governance risks that come with handling personally identifiable information (PII). Despite sourcing the data from open, permissive GitHub repositories and having it annotated by fairly paid crowd-workers, it does contain sensitive details such as names, usernames, keys, emails, passwords, and IP addresses. To ensure responsible use for research within the open-source community, access to the dataset will be provided through a gated mechanism.

We expect researchers and developers working with the dataset to adhere to the highest ethical standards and employ robust data protection measures. To assist users in effectively detecting and masking PII, we've also released a PII model trained on this dataset. Our goal in providing access to both the dataset and the PII model is to foster the development of privacy-preserving AI technologies while minimizing potential risks related to handling PII.
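As one illustration of the intended downstream use (PII removal), annotated fragments can be masked in place by replacing each span with a category placeholder. This is a minimal sketch using the dataset's fragment schema, not the released PII model's pipeline:

```python
def mask_pii(text, fragments):
    """Replace each annotated span with a category placeholder.

    Fragments are applied right-to-left so that earlier character
    offsets remain valid while the string is being edited.
    """
    for frag in sorted(fragments, key=lambda f: f["position"][0], reverse=True):
        start, end = frag["position"]
        text = text[:start] + f"<{frag['category']}>" + text[end:]
    return text

# Toy example (not a real dataset row).
masked = mask_pii(
    "login('alice', '192.168.0.1')",
    [
        {"category": "USERNAME", "position": [7, 12], "value": "alice"},
        {"category": "IP_ADDRESS", "position": [16, 27], "value": "192.168.0.1"},
    ],
)
print(masked)  # -> login('<USERNAME>', '<IP_ADDRESS>')
```

Replacing spans right-to-left avoids recomputing offsets after each substitution; in practice, masking pipelines may instead substitute synthetic values of the same category to preserve code validity.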