Model Card for EnvRoBERTa-base

Model Description

Based on this paper, this is the EnvRoBERTa-base language model: a language model trained to better understand environmental texts in the ESG domain.

Note: We generally recommend choosing the EnvironmentalBERT-base model, since it is faster, less resource-intensive, and only marginally worse in performance.

Using the RoBERTa model as a starting point, the EnvRoBERTa-base language model is additionally pre-trained on a text corpus comprising environment-related annual reports, sustainability reports, and corporate and general news.
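
As a quick sanity check, the checkpoint can be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch that assumes the model is published on the Hub as ESGBERT/EnvRoBERTa-base and still exposes a masked-language-modeling head; for downstream ESG classification tasks you would typically fine-tune it instead.

```python
# Minimal sketch: load EnvRoBERTa-base and run a fill-mask query.
# Assumes the Hub ID "ESGBERT/EnvRoBERTa-base" and an MLM head on the checkpoint.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "ESGBERT/EnvRoBERTa-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# RoBERTa-style tokenizers use "<mask>" as the mask token.
for prediction in fill_mask("The company reduced its carbon <mask> by 20% last year."):
    print(prediction["token_str"], prediction["score"])
```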

More details can be found in the paper:

```bibtex
@article{Schimanski23ESGBERT,
    title={{Bridging the Gap in ESG Measurement: Using NLP to Quantify Environmental, Social, and Governance Communication}},
    author={Tobias Schimanski and Andrin Reding and Nico Reding and Julia Bingler and Mathias Kraus and Markus Leippold},
    year={2023},
    journal={Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514},
}
```
