|
---
language:
- es
- pt
- pl
- it
- hu
- de
- fr
- en
- nl
- da
extra_gated_prompt: 'Our models are intended for academic use only. If you are not
  affiliated with an academic institution, please provide a rationale for using our
  models. Please allow us a few business days to manually review subscriptions.


  If you use our models for your work or research, please cite this paper: Sebők,
  M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large
  Language Models for Multilingual Policy Topic Classification: The Babel Machine
  Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434'
extra_gated_fields:
  Name: text
  Country: country
  Institution: text
  Institution Email: text
  Please specify your academic use case: text
license: cc-by-4.0
---
|
|
|
# xlm-roberta-large-pooled-cap-v3 |
|
## Model description |
|
An `xlm-roberta-large` benchmark model fine-tuned on texts labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
|
|
|
## How to use the model |
|
|
|
```python
from transformers import AutoTokenizer, pipeline

# Load the slow tokenizer of the base model
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

# Build a text-classification pipeline for the fine-tuned model;
# a read-only Hugging Face access token is required (see "Gated access" below)
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-cap-v3",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="<your_hf_read_only_token>"
)

# Classify an example sentence into a CAP major topic
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
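
The pipeline returns the predicted label for the input text. To see the full label inventory of the classification head, you can inspect the model configuration as sketched below; whether `id2label` holds human-readable CAP major topic names or generic `LABEL_<n>` identifiers depends on how the checkpoint was exported, so check the output on your side.

```python
from transformers import AutoConfig

# Sketch: list the labels the classification head was trained with.
# The repository is gated, so the same read-only token is required here as well.
config = AutoConfig.from_pretrained(
    "poltextlab/xlm-roberta-large-pooled-cap-v3",
    token="<your_hf_read_only_token>",
)
print(config.id2label)
```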
|
|
|
### Gated access |
|
Because access to the model is gated, you must pass your Hugging Face access token via the `token` parameter when loading the model. In earlier versions of the Transformers library, you may need to use the `use_auth_token` parameter instead, as sketched below.
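
As a minimal sketch, the same pipeline built with the legacy parameter name would look like this (assuming a Transformers release that still accepts `use_auth_token`):

```python
from transformers import AutoTokenizer, pipeline

# Sketch for older Transformers releases that expect `use_auth_token`
# instead of `token` when loading gated repositories.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-cap-v3",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    use_auth_token="<your_hf_read_only_token>",
)
```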
|
|