---
language: en
license: mit
task_categories:
  - text-generation
  - text-classification
tags:
  - llm
  - conversations
  - llama
  - finetuning
  - privacy-policies
  - dataset
datasets:
  - CodeHima/APP_350_LLM_Formatted
metrics:
  - accuracy
  - f1
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: conversations
      dtype: string
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: train
      num_bytes: 8446123
      num_examples: 12405
    - name: validation
      num_bytes: 1059045
      num_examples: 1551
    - name: test
      num_bytes: 1075320
      num_examples: 1551
  download_size: 3915927
  dataset_size: 10580488
---

# APP-350 Formatted Dataset for LLM Fine-tuning

## Dataset Summary

The APP-350 dataset consists of structured conversation pairs formatted for fine-tuning Large Language Models (LLMs) such as LLaMA. Each example is an exchange of a question and a response between a user and an AI assistant. The dataset is designed for privacy policy analysis and fairness evaluation, allowing models to learn from annotated interactions about privacy practices.

The conversations are organized into the following structure:

- **User Prompt:** The user initiates the conversation with a question or request.
- **Assistant Response:** The AI assistant provides a detailed response, including an assessment of the privacy policy clause.
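
A quick way to get started is to load the dataset with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the expected split sizes come from the metadata above.

```python
# Minimal sketch: load the dataset from the Hugging Face Hub and inspect
# its train / validation / test splits.
from datasets import load_dataset

dataset = load_dataset("CodeHima/APP_350_LLM_Formatted")
print(dataset)
# Expected: a DatasetDict with roughly 12,405 train, 1,551 validation,
# and 1,551 test examples.
```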

## Intended Use

This dataset is ideal for training and fine-tuning conversational models, particularly those aimed at:

- Privacy policy analysis
- Legal document interpretation
- Fairness evaluation in legal and compliance documents

The dataset can also be used to build models that specialize in understanding privacy-related practices and to enhance LLM performance in this domain.

## Dataset Structure

Each entry in the dataset is structured as a conversation between a user and an assistant:

```json
[
  {
    "content": "Analyze the following clause from a privacy policy and determine if it's fair or unfair...",
    "role": "user"
  },
  {
    "content": "This clause is fair. The privacy practices mentioned are: nan.",
    "role": "assistant"
  }
]
```

Each record contains:

- **content:** The text of the prompt or response.
- **role:** Specifies whether the content is from the `user` or the `assistant`.
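
For fine-tuning, each record's conversation typically needs to be decoded into a list of `{"role", "content"}` messages. The sketch below assumes the `conversations` column stores the exchange as a JSON-encoded string (as shown above); the helper name `to_messages` is illustrative, not part of the dataset.

```python
import json

from datasets import load_dataset


def to_messages(example):
    """Illustrative helper: decode one record's conversation into a list of
    {"role", "content"} dicts (assumes a JSON-encoded string column)."""
    raw = example["conversations"]
    example["messages"] = json.loads(raw) if isinstance(raw, str) else raw
    return example


dataset = load_dataset("CodeHima/APP_350_LLM_Formatted", split="train")
dataset = dataset.map(to_messages)
print(dataset[0]["messages"][0]["role"])  # expected: "user"
```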

## Example Entry

```json
[
  {
    "content": "How do astronomers determine the original wavelength of light emitted by a celestial body at rest...",
    "role": "user"
  },
  {
    "content": "Astronomers make use of the unique spectral fingerprints of elements found in stars...",
    "role": "assistant"
  }
]
```
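
Because the dataset targets LLaMA-style fine-tuning, a common preprocessing step is to render each message list with the model's chat template. The sketch below uses `transformers` with an example LLaMA-family checkpoint (any chat model with a template should work similarly); the messages are abbreviated placeholders.

```python
# Sketch only: the checkpoint name is an example, not a requirement of
# this dataset; substitute whichever LLaMA-family model you fine-tune.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

messages = [
    {"role": "user", "content": "Analyze the following clause from a privacy policy..."},
    {"role": "assistant", "content": "This clause is fair. ..."},
]

# Render the conversation into a single training string using the
# tokenizer's built-in chat template.
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
```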

## Collection Process

This dataset was collected from various privacy policy clauses and conversations annotated with fairness labels. The dataset has been structured to reflect user-assistant interactions, making it suitable for training conversational AI systems.

## Licensing

The dataset is made available under the MIT License, which allows flexible use, modification, and distribution.

## Citation

If you use this dataset, please cite it as follows:

```bibtex
@dataset{app350_llm_formatted,
  title   = {APP-350 Formatted Dataset for LLM Fine-tuning},
  author  = {Himanshu Mohanty},
  year    = {2024},
  url     = {https://huggingface.co/datasets/CodeHima/APP_350_LLM_Formatted},
  license = {MIT}
}
```