---
license: mit
language:
- multilingual
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
extra_gated_prompt: 'Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. If you use our models for your work or research, please cite this paper: Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434'
extra_gated_fields:
  Name: text
  Country: country
  Institution: text
  E-mail: text
  Use case: text
---

# xlm-roberta-large-italian-social-cap-v3

## Model description

An `xlm-roberta-large` model fine-tuned on multilingual training data containing texts of the `social` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).

## How to use the model

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-italian-social-cap-v3",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token=""
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```

### Gated access

Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
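As a minimal sketch of both variants (the token string below is a placeholder; substitute your own Hugging Face access token):

```python
from transformers import pipeline

# Recent Transformers releases accept the `token` parameter:
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-italian-social-cap-v3",
    task="text-classification",
    token="hf_...",  # placeholder; use your own access token
)

# Earlier releases expect `use_auth_token` instead:
# pipe = pipeline(
#     model="poltextlab/xlm-roberta-large-italian-social-cap-v3",
#     task="text-classification",
#     use_auth_token="hf_...",
# )
```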
## Model performance

The model was evaluated on a test set of 938 examples (10% of the available data). Model accuracy is **0.63**.

| label        | precision | recall | f1-score | support |
|:-------------|----------:|-------:|---------:|--------:|
| 0            | 0.61      | 0.69   | 0.65     | 62      |
| 1            | 0.62      | 0.62   | 0.62     | 91      |
| 2            | 0.75      | 0.73   | 0.74     | 37      |
| 3            | 0.82      | 0.64   | 0.72     | 14      |
| 4            | 0.63      | 0.66   | 0.64     | 29      |
| 5            | 0.74      | 0.74   | 0.74     | 23      |
| 6            | 0.59      | 0.77   | 0.67     | 26      |
| 7            | 1.00      | 0.80   | 0.89     | 10      |
| 8            | 0.66      | 0.62   | 0.64     | 40      |
| 9            | 0.57      | 0.77   | 0.65     | 22      |
| 10           | 0.51      | 0.68   | 0.58     | 44      |
| 11           | 0.50      | 0.11   | 0.18     | 18      |
| 12           | 0.45      | 0.39   | 0.42     | 36      |
| 13           | 0.00      | 0.00   | 0.00     | 6       |
| 14           | 1.00      | 0.31   | 0.47     | 13      |
| 15           | 0.00      | 0.00   | 0.00     | 4       |
| 16           | 0.00      | 0.00   | 0.00     | 3       |
| 17           | 0.46      | 0.51   | 0.49     | 49      |
| 18           | 0.58      | 0.62   | 0.60     | 158     |
| 19           | 0.00      | 0.00   | 0.00     | 5       |
| 20           | 0.71      | 0.61   | 0.66     | 36      |
| 21           | 0.70      | 0.71   | 0.71     | 212     |
| macro avg    | 0.54      | 0.50   | 0.50     | 938     |
| weighted avg | 0.62      | 0.63   | 0.62     | 938     |

## Inference platform

This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool designed to simplify and speed up projects for comparative research.

## Cooperation

Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or via the [CAP Babel Machine](https://babel.poltextlab.com).

## Debugging and issues

This architecture uses the `sentencepiece` tokenizer. To run the model with a `transformers` version earlier than 4.27, you need to install `sentencepiece` manually. If you encounter a `RuntimeError` when loading the model with `from_pretrained()`, passing `ignore_mismatched_sizes=True` should resolve the issue.
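A minimal sketch of both workarounds (assuming you hit the `RuntimeError` described above; on `transformers` < 4.27, first run `pip install sentencepiece`):

```python
from transformers import AutoModelForSequenceClassification

# If from_pretrained() raises a RuntimeError about mismatched tensor sizes,
# retry with ignore_mismatched_sizes=True:
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-italian-social-cap-v3",
    ignore_mismatched_sizes=True,
    token="",  # gated model: supply your Hugging Face access token
)
```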