|
---
language:
- en
tags:
- climate
- policy
- legal
size_categories:
- 1M<n<10M
license: cc-by-4.0
dataset_info:
  features:
  - name: family_slug
    dtype: string
  - name: types
    sequence: string
  - name: role
    dtype: string
  - name: block_index
    dtype: int64
  - name: date
    dtype: date32
  - name: geography_iso
    dtype: string
  - name: document_name
    dtype: string
  - name: variant
    dtype: string
  - name: type_confidence
    dtype: float64
  - name: document_languages
    sequence: string
  - name: text_block_id
    dtype: string
  - name: document_source_url
    dtype: string
  - name: author_is_party
    dtype: bool
  - name: type
    dtype: string
  - name: coords
    sequence:
      sequence: float64
  - name: author
    sequence: string
  - name: family_name
    dtype: string
  - name: status
    dtype: string
  - name: collection_id
    dtype: string
  - name: family_id
    dtype: string
  - name: language
    dtype: string
  - name: page_number
    dtype: int64
  - name: text
    dtype: string
  - name: has_valid_text
    dtype: bool
  - name: document_id
    dtype: string
  - name: translated
    dtype: bool
  - name: document_content_type
    dtype: string
  - name: document_md5_sum
    dtype: string
  splits:
  - name: train
    num_bytes: 1278730693
    num_examples: 1578645
  download_size: 228690459
  dataset_size: 1278730693
---
|
|
|
# Global Stocktake Open Data |
|
|
|
This repo contains the data for the first [UNFCCC Global Stocktake](https://unfccc.int/topics/global-stocktake). The data consists of document metadata from sources relevant to the Global Stocktake process, as well as full text parsed from the majority of the documents. |
|
|
|
The files in this dataset are as follows: |
|
|
|
- `metadata.csv`: a CSV containing document metadata for each document we have collected. **This metadata may not be the same as what's stored in the source databases** – we have cleaned and added metadata where it's corrupted or missing. |
|
- `full_text.parquet`: a parquet file containing the full text of each document we have parsed. Each row is a text block (paragraph) with all the associated text block and document metadata. |
|
|
|
You can explore this data, along with the results of some classifiers run on it, using the research tool at [gst1.org](https://gst1.org).
|
|
|
This data is licensed under CC BY 4.0, which reflects the terms of use at the source repositories.
|
|
|
**Contents** |
|
|
|
- [Sources and data completeness](#sources-and-data-completeness) |
|
- [Field descriptions](#field-descriptions) |
|
- [Known issues](#known-issues) |
|
- [Usage in Python](#usage-in-python) |
|
- [Loading metadata CSV](#loading-metadata-csv) |
|
- [Loading text block data](#loading-text-block-data) |
|
|
|
--- |
|
|
|
## Sources and data completeness |
|
|
|
This dataset contains documents from the following sources: |
|
|
|
* [Global Stocktake Information Portal](https://unfccc.int/topics/global-stocktake/information-portal) |
|
* [NDC Registry](https://unfccc.int/NDCREG) |
|
* [Adaptation Communications Registry](https://unfccc.int/ACR) |
|
* [Fast-Start Finance Country Reports](https://unfccc.int/climatefinance?submissions) |
|
* [IPCC Reports](https://www.ipcc.ch/reports/) |
|
|
|
The following Global Stocktake relevant data sources are not yet in this dataset: |
|
|
|
* [National Adaptation Plan Central Portal](https://napcentral.org/submitted-naps) |
|
* [TNA Country Reports](https://unfccc.int/ttclear/tna/reports.html) |
|
|
|
|
|
### Data completeness |
|
|
|
The last refresh of the data was on **2023-10-18**. |
|
|
|
We currently only parse text out of PDFs. Any non-PDF file is referenced only in `metadata.csv` and does not appear in `full_text.parquet`.
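Because only PDFs are parsed, you can use the content type to see which metadata rows will have corresponding text blocks. A minimal sketch, using an illustrative toy DataFrame in place of the real `metadata.csv` (column names follow the field descriptions; the values are made up):

``` py
import pandas as pd

# Toy stand-in for metadata.csv rows; values are illustrative only.
metadata = pd.DataFrame({
    "document_id": ["doc-1", "doc-2", "doc-3"],
    "document_content_type": ["application/pdf", "text/html", "application/pdf"],
})

# Only PDF documents are parsed into full_text.parquet.
pdf_docs = metadata[metadata["document_content_type"] == "application/pdf"]
```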
|
|
|
Approximately 150 of the 1,700 documents have not yet been processed due to formatting issues; we are working to resolve this as soon as possible. [See the document list here](https://labs.climatepolicyradar.org/global-stocktake/UNPROCESSED_DOCUMENTS.html).
|
|
|
## Data model |
|
|
|
This dataset contains individual documents that are grouped into 'document families'. |
|
|
|
Think of it as follows:
|
|
|
* Each row in the dataset is a physical document. A physical document is a single document, in any format. |
|
* All physical documents belong to document families. A document family is one or more physical documents, centred around a main document, which jointly contain all relevant information about the main document. For example, where a document has a translation, amendments or annexes, those files are stored together as a family. |
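In the parquet data, a family is identified by its `family_id`, so grouping on that column recovers the family structure. A minimal sketch, using a small illustrative DataFrame in place of the real data:

``` py
import pandas as pd

# Illustrative rows only; the real data comes from full_text.parquet.
docs = pd.DataFrame({
    "document_id": ["d1", "d2", "d3"],
    "family_id": ["fam-a", "fam-a", "fam-b"],
    "variant": ["Original Language", "Translation", None],
})

# Each family groups a main document with its translations, annexes etc.
family_sizes = docs.groupby("family_id")["document_id"].nunique()
```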
|
|
|
## License & Usage |
|
|
|
Please read our [Terms of Use](https://app.climatepolicyradar.org/terms-of-use), including any specific terms relevant to commercial use. Contact [email protected] with any questions. |
|
|
|
## Field descriptions |
|
|
|
- `author`: document author(s) (list[str])
|
- `author_is_party`: whether the author is a Party (national government) or not (bool) |
|
- `block_index`: the index of a text block in a document. Starts from 0 (int) |
|
- `coords`: coordinates of the text block on the page |
|
- `date`: publication date of the document |
|
- `document_content_type`: file type. We have only parsed text from PDFs. |
|
- `document_id`: unique identifier for a document |
|
- `family_id`: see *data model* section above

- `family_slug`: see *data model* section above
|
- `document_md5_sum`: md5sum of the document's content |
|
- `document_name`: document title |
|
- `document_source_url`: URL for document |
|
- `variant`: used to identify translations. In `[nan, 'Translation', 'Original Language']`
|
- `has_valid_text`: our parser-based heuristic for whether the document's extracted text is valid (bool)
|
- `language`: language of the text block. Either `en` or `nan` - see known issues |
|
- `page_number`: page number of text block (0-indexed) |
|
- `text`: text in text block |
|
- `text_block_id`: identifier for a text block which is unique per document |
|
- `translated`: whether we have machine-translated the document to English. Where we have translated documents, both the original and translated exist. |
|
- `type`: type of text block. In `["Text", "Title", "List", "Table", "Figure", "Ambiguous"]`

- `type_confidence`: model confidence that the text block is of the labelled type
|
- `types`: list of document types e.g. Nationally Determined Contribution, National Adaptation Plan (list[str]) |
|
- `version`: in `['MAIN', 'ANNEX', 'SUMMARY', 'AMENDMENT', 'SUPPORTING DOCUMENTATION', 'PREVIOUS VERSION']` |
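These fields are often combined when filtering text blocks, for example keeping only English body text classified with high confidence. A sketch using a small illustrative sample in place of `full_text.parquet` (the rows and threshold are made up):

``` py
import pandas as pd

# Small illustrative sample; the real data is full_text.parquet.
blocks = pd.DataFrame({
    "text": ["Some paragraph.", "Tabelle 1", "A title"],
    "language": ["en", None, "en"],
    "has_valid_text": [True, True, True],
    "type": ["Text", "Table", "Title"],
    "type_confidence": [0.95, 0.60, 0.99],
})

# Keep English text blocks labelled as body text with high confidence.
body_text = blocks[
    (blocks["language"] == "en")
    & blocks["has_valid_text"]
    & (blocks["type"] == "Text")
    & (blocks["type_confidence"] >= 0.9)
]
```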
|
|
|
## Known issues |
|
|
|
* Author names are sometimes corrupted |
|
* Text block languages are sometimes missing or marked as `nan` |
|
|
|
## Usage in Python |
|
|
|
The easiest way to access this data via the terminal is to run `git clone <this-url>`. |
|
|
|
### Loading metadata CSV |
|
|
|
``` py
import pandas as pd

metadata = pd.read_csv("metadata.csv")
```
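Note that `read_csv` loads the `date` column as plain strings. A sketch of parsing it and aggregating by country, using illustrative rows in place of the real CSV:

``` py
import pandas as pd

# Illustrative rows; in practice use pd.read_csv("metadata.csv").
metadata = pd.DataFrame({
    "document_id": ["d1", "d2", "d3"],
    "date": ["2021-04-30", "2016-11-04", None],
    "geography_iso": ["KEN", "FRA", "KEN"],
})

# Parse dates (missing/invalid values become NaT), then count per country.
metadata["date"] = pd.to_datetime(metadata["date"], errors="coerce")
docs_per_country = metadata["geography_iso"].value_counts()
```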
|
|
|
### Loading text block data |
|
|
|
Once loaded into a Hugging Face `Dataset` or pandas `DataFrame`, the parquet file can be converted to other formats, e.g. Excel, CSV or JSON.
|
|
|
``` py
# Using Hugging Face datasets (easiest)
from datasets import load_dataset

dataset = load_dataset("ClimatePolicyRadar/global-stocktake-documents")

# Using pandas
import pandas as pd

text_blocks = pd.read_parquet("full_text.parquet")
```
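For example, a pandas `DataFrame` can be written back out to CSV or JSON lines with its built-in exporters (Excel export additionally requires an engine such as openpyxl). A sketch with a tiny illustrative frame standing in for the real data:

``` py
import pandas as pd

# Illustrative frame; in practice use pd.read_parquet("full_text.parquet").
text_blocks = pd.DataFrame({
    "document_id": ["d1", "d1"],
    "text": ["First block.", "Second block."],
})

# Export to CSV and JSON lines.
text_blocks.to_csv("full_text.csv", index=False)
text_blocks.to_json("full_text.jsonl", orient="records", lines=True)
```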