---
language:
- en
tags:
- climate
- policy
- legal
size_categories:
- 1M<n<10M
license: cc-by-4.0
dataset_info:
  features:
  - name: family_slug
    dtype: string
  - name: types
    sequence: string
  - name: role
    dtype: string
  - name: block_index
    dtype: int64
  - name: date
    dtype: date32
  - name: geography_iso
    dtype: string
  - name: document_name
    dtype: string
  - name: variant
    dtype: string
  - name: type_confidence
    dtype: float64
  - name: document_languages
    sequence: string
  - name: text_block_id
    dtype: string
  - name: document_source_url
    dtype: string
  - name: author_is_party
    dtype: bool
  - name: type
    dtype: string
  - name: coords
    sequence:
      sequence: float64
  - name: author
    sequence: string
  - name: family_name
    dtype: string
  - name: status
    dtype: string
  - name: collection_id
    dtype: string
  - name: family_id
    dtype: string
  - name: language
    dtype: string
  - name: page_number
    dtype: int64
  - name: text
    dtype: string
  - name: has_valid_text
    dtype: bool
  - name: document_id
    dtype: string
  - name: translated
    dtype: bool
  - name: document_content_type
    dtype: string
  - name: document_md5_sum
    dtype: string
  splits:
  - name: train
    num_bytes: 1278730693
    num_examples: 1578645
  download_size: 228690459
  dataset_size: 1278730693
---
# Global Stocktake Open Data
This repo contains the data for the first UNFCCC Global Stocktake. The data consists of document metadata from sources relevant to the Global Stocktake process, as well as full text parsed from the majority of the documents.
The files in this dataset are as follows:
- `metadata.csv`: a CSV containing document metadata for each document we have collected. This metadata may not be the same as what's stored in the source databases – we have cleaned and added metadata where it's corrupted or missing.
- `full_text.parquet`: a parquet file containing the full text of each document we have parsed. Each row is a text block (paragraph) with all the associated text block and document metadata.
A research tool for viewing this data, along with the results of some classifiers run on it, is available at gst1.org.
This data is licensed under CC BY 4.0, which reflects the terms at the source repositories.
## Sources and data completeness
This dataset contains documents from the following sources:
- Global Stocktake Information Portal
- NDC Registry
- Adaptation Communications Registry
- Fast-Start Finance Country Reports
- IPCC Reports
Some Global Stocktake-relevant data sources are not yet included in this dataset.
### Data completeness
The last refresh of the data was on 2023-10-18.
We currently only parse text out of PDFs. Any non-PDF file is referenced only in `metadata.csv` and does not appear in `full_text.parquet`.
We have yet to process approximately 150 of the 1,700 documents due to formatting issues, and are working to resolve this as soon as possible. See the document list here.
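As an illustration, here is a minimal sketch of how you might find the metadata-only documents with pandas. It assumes `metadata.csv` has been downloaded locally, carries the same `document_content_type` column as the parquet schema above, and records PDFs with the standard `application/pdf` content type (assumptions worth checking against your copy):

```python
import pandas as pd

metadata = pd.read_csv("metadata.csv")

# Count documents by content type. Only PDFs have parsed text in
# full_text.parquet; everything else is metadata-only.
print(metadata["document_content_type"].value_counts(dropna=False))

# Documents with no rows in full_text.parquet (assuming PDFs are
# recorded with the standard "application/pdf" MIME type).
non_pdf = metadata[metadata["document_content_type"] != "application/pdf"]
print(f"{len(non_pdf)} non-PDF documents are metadata-only")
```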
## Data model
This dataset contains individual documents that are grouped into 'document families'.
The way to think of this is as follows:
- Each row in the dataset is a physical document. A physical document is a single document, in any format.
- All physical documents belong to document families. A document family is one or more physical documents, centred around a main document, which jointly contain all relevant information about the main document. For example, where a document has a translation, amendments or annexes, those files are stored together as a family.
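As a concrete example of this grouping, the sketch below (assuming `full_text.parquet` has been downloaded locally, as in the usage section further down) lists the physical documents that make up one family:

```python
import pandas as pd

text_blocks = pd.read_parquet("full_text.parquet")

# Pick an arbitrary family and list the physical documents it groups
# together (e.g. a main document plus translations or annexes).
family_id = text_blocks["family_id"].iloc[0]
docs_in_family = text_blocks.loc[
    text_blocks["family_id"] == family_id,
    ["document_id", "document_name", "variant"],
].drop_duplicates()
print(docs_in_family)
```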
## License & Usage
Please read our Terms of Use, including any specific terms relevant to commercial use. Contact [email protected] with any questions.
## Field descriptions
- `author`: document author (str)
- `author_is_party`: whether the author is a Party (national government) or not (bool)
- `block_index`: the index of a text block in a document. Starts from 0 (int)
- `coords`: coordinates of the text block on the page
- `date`: publication date of the document
- `document_content_type`: file type. We have only parsed text from PDFs.
- `document_id`: unique identifier for a document
- `document_family_id`: see data model section above
- `document_family_slug`: see data model section above
- `document_md5_sum`: md5sum of the document's content
- `document_name`: document title
- `document_source_url`: URL for document
- `document_variant`: used to identify translations. In `[nan, 'Translation', 'Original Language']`
- `has_valid_text`: our heuristic about whether the text in the document is valid, based on the parser
- `language`: language of the text block. Either `en` or `nan` – see known issues
- `page_number`: page number of the text block (0-indexed)
- `text`: text in the text block
- `text_block_id`: identifier for a text block, unique per document
- `translated`: whether we have machine-translated the document to English. Where we have translated a document, both the original and the translation exist.
- `type`: type of text block. In `["Text", "Title", "List", "Table", "Figure", "Ambiguous"]`
- `type_confidence`: confidence that the text block is of the labelled type
- `types`: list of document types, e.g. Nationally Determined Contribution, National Adaptation Plan (list[str])
- `version`: in `['MAIN', 'ANNEX', 'SUMMARY', 'AMENDMENT', 'SUPPORTING DOCUMENTATION', 'PREVIOUS VERSION']`
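To show how these fields combine in practice, here is a sketch that filters the text blocks down to valid, confidently-labelled body text from Party-authored documents (the 0.5 threshold is purely illustrative, not a recommended value):

```python
import pandas as pd

text_blocks = pd.read_parquet("full_text.parquet")

# Keep blocks the parser considers valid, labelled as body text with
# reasonable confidence. 0.5 is an arbitrary example threshold.
body_text = text_blocks[
    text_blocks["has_valid_text"]
    & (text_blocks["type"] == "Text")
    & (text_blocks["type_confidence"] >= 0.5)
]

# Restrict to documents authored by Parties (national governments).
party_text = body_text[body_text["author_is_party"]]
print(f"{len(party_text)} Party-authored body-text blocks")
```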
## Known issues

- Author names are sometimes corrupted
- Text block languages are sometimes missing or marked as `nan`
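Until the language labels are fixed, one way to account for this in pandas (a sketch, assuming a local copy of `full_text.parquet`):

```python
import pandas as pd

text_blocks = pd.read_parquet("full_text.parquet")

# "language" is either "en" or missing (see above). Depending on how
# the file was written, missing values may be nulls or the literal
# string "nan", so check for both.
missing = text_blocks["language"].isna() | (text_blocks["language"] == "nan")
print(f"{missing.mean():.1%} of text blocks have no language label")

# Decide explicitly whether to keep or drop unlabelled blocks.
english_only = text_blocks[text_blocks["language"] == "en"]
```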
## Usage in Python

The easiest way to access this data via the terminal is to run `git clone <this-url>`.
### Loading metadata CSV

```python
import pandas as pd

metadata = pd.read_csv("metadata.csv")
```
### Loading text block data

Once loaded into a Hugging Face Dataset or pandas DataFrame object, the parquet file can be converted to other formats, e.g. Excel, CSV or JSON.

```python
# Using Hugging Face datasets (easiest)
from datasets import load_dataset

dataset = load_dataset("ClimatePolicyRadar/global-stocktake-documents")

# Using pandas
import pandas as pd

text_blocks = pd.read_parquet("full_text.parquet")
```
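Continuing from the pandas snippet above, a sketch of the conversions mentioned. Note that Excel sheets hold at most 1,048,576 rows (fewer than this dataset's ~1.6M) and reject list-valued cells, so the nested columns need stringifying first:

```python
# Convert the text blocks to other formats.
text_blocks.to_csv("full_text.csv", index=False)
text_blocks.to_json("full_text.jsonl", orient="records", lines=True)

# Excel export needs an engine such as openpyxl installed; export a
# slice with the nested columns converted to strings.
sample = text_blocks.head(1000).copy()
for col in ["coords", "author", "types", "document_languages"]:
    sample[col] = sample[col].astype(str)
sample.to_excel("full_text_sample.xlsx", index=False)
```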