---
language:
  - en
tags:
  - climate
  - policy
  - legal
size_categories:
  - 1M<n<10M
---

Global Stocktake Open Data

This repo contains the data for the first UNFCCC Global Stocktake. The data consists of document metadata from sources relevant to the Global Stocktake process, as well as full text parsed from the majority of the documents.

The files in this dataset are as follows:

  • metadata.csv: a CSV containing document metadata for each document we have collected. This metadata may not be the same as what's stored in the source databases – we have cleaned and added metadata where it's corrupted or missing.
  • full_text.parquet: a parquet file containing the full text of each document we have parsed. Each row is a text block (paragraph) with all the associated text block and document metadata.
  • full_text.jsonl: a JSON Lines file containing the same data as the parquet file. It's recommended to use the parquet file, as it stores the data roughly 10x more efficiently. The sketch after this list shows one way the files fit together.
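
Not every document in metadata.csv has parsed text, so it can be useful to line the two files up. A minimal sketch, assuming metadata.csv shares the document_id column described under Field descriptions below:

import pandas as pd

metadata = pd.read_csv("metadata.csv")
text_blocks = pd.read_parquet("full_text.parquet")

# Count parsed text blocks per document; documents we collected but did not parse
# will have a missing (NaN) count after the left merge.
blocks_per_document = (
    text_blocks.groupby("document_id").size().rename("n_text_blocks").reset_index()
)
metadata_with_counts = metadata.merge(blocks_per_document, on="document_id", how="left")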

A research tool for viewing this data, along with the results of some classifiers run on it, is available at gst1.org.

View our methodology.

Citing this data

@misc{ClimatePolicyRadar,
  title={Dataset for the first Global Stocktake},
  url={http://gst1.org/},
  author={Climate Policy Radar}
}

Field descriptions

  • author: document author (str)
  • author_is_party: whether the author is a Party (national government) or not (bool)
  • block_index: the index of a text block in a document. Starts from 0 (int) and can be used to order blocks within a document, as in the sketch after this list
  • coords: coordinates of the text block on the page
  • date: publication date of the document
  • document_content_type: file type. We have only parsed text from PDFs.
  • document_id: unique identifier for a document
  • document_md5_sum: md5sum of the document's content
  • document_name: document title
  • document_source_url: URL for document
  • document_variant: used to identify translations. In [nan, 'Translation', 'Original Language']
  • has_valid_text: our heuristic, based on the parser output, for whether the document's text is valid
  • language: language of the text block. Either en or nan - see known issues
  • page_number: page number of text block (0-indexed)
  • text: text in text block
  • text_block_id: identifier for a text block which is unique per document
  • translated: whether we have machine-translated the document to English. Where we have translated a document, both the original and the translated version exist.
  • type: type of text block. In ["Text", "Title", "List", "Table", "Figure", "Ambiguous"]
  • type_confidence: confidence that the text block is of the labelled type
  • types: list of document types e.g. Nationally Determined Contribution, National Adaptation Plan (list[str])
  • version: in ['MAIN', 'ANNEX', 'SUMMARY', 'AMENDMENT', 'SUPPORTING DOCUMENTATION', 'PREVIOUS VERSION']
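
To illustrate how these fields fit together, the sketch below reconstructs the running text of one document from its text blocks. This is a minimal example: it loads full_text.parquet as in the Usage in Python section, takes the first document_id purely for illustration, and relies only on the column names documented above.

import pandas as pd

text_blocks = pd.read_parquet("full_text.parquet")

# Take one document and put its blocks back into reading order via block_index,
# keeping only blocks classified as titles or body text.
doc_id = text_blocks["document_id"].iloc[0]
doc_blocks = text_blocks[text_blocks["document_id"] == doc_id]
doc_blocks = doc_blocks[doc_blocks["type"].isin(["Title", "Text"])]
doc_blocks = doc_blocks.sort_values("block_index")
document_text = "\n\n".join(doc_blocks["text"].astype(str))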

Known issues

  • Author names are sometimes corrupted
  • Text blocks in non-English languages are currently missing or marked as nan

Usage in Python

Loading metadata CSV

import pandas as pd
metadata = pd.read_csv("metadata.csv")
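
From here you can filter on the metadata fields described above. For example, a minimal sketch that parses publication dates and keeps only Party-authored documents; it assumes metadata.csv carries the date and author_is_party fields listed under Field descriptions:

# Parse publication dates; invalid or missing dates become NaT.
metadata["date"] = pd.to_datetime(metadata["date"], errors="coerce")

# Keep documents authored by Parties (national governments);
# missing values are treated as False by the equality check.
party_documents = metadata[metadata["author_is_party"] == True]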

Loading text block data

As mentioned at the top of this README, the parquet file is recommended over the JSONL file, as it stores the data roughly 10x more efficiently and therefore needs less of your system's memory when loading.

# Reading from parquet
text_blocks = pd.read_parquet("full_text.parquet")

# Reading from jsonl
text_blocks = pd.read_json("full_text.jsonl", lines=True)
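
The field descriptions above can then be used to narrow the text blocks down. A minimal sketch that keeps English body-text blocks the parser marked as valid; only the column names documented above are assumed:

# Keep English paragraph text from documents with valid parsed text;
# missing values are treated as False by the equality check.
english_text = text_blocks[
    (text_blocks["language"] == "en")
    & (text_blocks["type"] == "Text")
    & (text_blocks["has_valid_text"] == True)
]

# To reduce memory further, you can load only the columns you need:
subset = pd.read_parquet(
    "full_text.parquet",
    columns=["document_id", "text", "language", "type", "has_valid_text"],
)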