Catalan BERTa (RoBERTa-base) fine-tuned for Named Entity Recognition.


Model description

multiner_ceil is a Named Entity Recognition (NER) model for the Catalan language, fine-tuned from the BERTa model, a RoBERTa base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the BERTa model card for more details).

It was trained on a dataset of almost 59K short documents of all kinds, annotated with 9 main entity types and 52 subtypes.
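Token-classification models of this kind typically encode the type/subtype scheme as BIO tags, with one B- and one I- tag per subtype plus an O tag for non-entity tokens. The sketch below illustrates that expansion; the BIO framing is an assumption about the tagging scheme, and only the subtype names are taken from the evaluation tables in this card.

```python
# Sketch of how a type/subtype NER label scheme expands into BIO tags.
# The subtype names come from this card's evaluation tables; the BIO
# framing itself is an assumption, not the model's documented label set.
subtypes = ["person-politician", "organization-politicalparty", "GPE"]

def bio_labels(subtypes):
    """Build the full BIO label set: one O tag plus B-/I- tags per subtype."""
    labels = ["O"]
    for s in subtypes:
        labels += [f"B-{s}", f"I-{s}"]
    return labels

print(bio_labels(subtypes))
# ['O', 'B-person-politician', 'I-person-politician', ...]
```

Under this scheme, 52 subtypes would yield 105 labels (52 × 2 + 1).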

Intended uses and limitations

How to use

from transformers import pipeline

# Load the NER pipeline with this model
pipe = pipeline("ner", model="projecte-aina/multiner_ceil")
example = "George Smith Patton fué un general del Ejército de los Estados Unidos en Europa durante la Segunda Guerra Mundial. "

# "simple" aggregation merges sub-word tokens into whole entity spans
ner_entity_results = pipe(example, aggregation_strategy="simple")
print(ner_entity_results)
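With aggregation enabled, the pipeline returns one dict per entity, with keys such as entity_group, score, word, start, and end. A minimal sketch of post-processing that keeps only high-confidence entities; the sample output below is illustrative of the format, not an actual prediction from this model.

```python
# Illustrative post-processing of transformers NER pipeline output.
# sample_output mimics the aggregated output format; the values are
# made up for the example, not real predictions from multiner_ceil.
sample_output = [
    {"entity_group": "person-other", "score": 0.98, "word": "George Smith Patton", "start": 0, "end": 19},
    {"entity_group": "GPE", "score": 0.95, "word": "Estados Unidos", "start": 47, "end": 61},
    {"entity_group": "event-other", "score": 0.42, "word": "Segunda Guerra Mundial", "start": 83, "end": 105},
]

def filter_entities(entities, min_score=0.5):
    """Keep only (word, label) pairs whose confidence meets min_score."""
    return [(e["word"], e["entity_group"]) for e in entities if e["score"] >= min_score]

print(filter_entities(sample_output))
# [('George Smith Patton', 'person-other'), ('Estados Unidos', 'GPE')]
```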

Limitations and bias

At the time of submission, no measures had been taken to estimate the bias embedded in the model. However, we are aware that our models may be biased, since the corpora were collected with crawling techniques from multiple web sources. We intend to conduct research in these areas in the future; if completed, this model card will be updated.

Training

We used the Catalan NERC dataset Catalan Entity Identification and Linking (CEIL) for training and evaluation.

Evaluation

Accuracy was calculated on the development set and reflects the imbalanced nature of the dataset.
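The card does not specify exactly how per-type accuracy is computed. A minimal sketch under the assumption that it is the fraction of gold entities of each type that the model labels correctly (the gold/pred lists below are made-up toy data):

```python
from collections import defaultdict

# Per-type accuracy over aligned (gold, predicted) entity-label pairs.
# Assumes accuracy = correct predictions / gold instances per type,
# which is one plausible reading of the evaluation tables.
def per_type_accuracy(gold, pred):
    correct = defaultdict(int)
    total = defaultdict(int)
    for g, p in zip(gold, pred):
        total[g] += 1
        if g == p:
            correct[g] += 1
    return {t: correct[t] / total[t] for t in total}

gold = ["GPE", "GPE", "person", "person", "person", "product"]
pred = ["GPE", "organization", "person", "person", "location", "product"]
print(per_type_accuracy(gold, pred))
```

This view also explains why rare types with a single dev instance (e.g. building-religious) can score 0.0.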

Major types

| Type | Accuracy | Instances in dev set |
|------|----------|----------------------|
| CW* | 0.842 | 4551 |
| GPE | 0.914 | 19751 |
| Other | 0.69 | 2824 |
| building | 0.736 | 2188 |
| event | 0.739 | 3000 |
| location | 0.819 | 3408 |
| organization | 0.895 | 17285 |
| person | 0.903 | 21689 |
| product | 0.64 | 1038 |

*: Cultural Work

Subtypes

| Type | Accuracy | Instances in dev set |
|------|----------|----------------------|
| CW-broadcastprogram | 0.854 | 765 |
| CW-film | 0.809 | 549 |
| CW-music | 0.862 | 1027 |
| CW-other | 0.495 | 555 |
| CW-painting | 0.654 | 205 |
| CW-writtenart | 0.814 | 1450 |
| GPE | 0.914 | 19751 |
| Other | 0.69 | 2824 |
| building-airport | 0.733 | 176 |
| building-governmentfacility | 0.514 | 72 |
| building-hospital | 0.805 | 113 |
| building-hotel | 0.688 | 32 |
| building-other | 0.726 | 1585 |
| building-religious | 0.0 | 1 |
| building-restaurant | 0.458 | 48 |
| building-shops | 0.206 | 34 |
| building-sportsfacility | 0.74 | 127 |
| event-attack/terrorism/militaryconflict | 0.866 | 411 |
| event-disaster | 0.261 | 23 |
| event-other | 0.695 | 1069 |
| event-political | 0.527 | 444 |
| event-protest | 0.207 | 29 |
| event-sportsevent | 0.822 | 1024 |
| location-bodiesofwater | 0.865 | 673 |
| location-island | 0.457 | 140 |
| location-mountain | 0.781 | 515 |
| location-other | 0.757 | 1602 |
| location-park | 0.581 | 93 |
| location-road/railway/highway/transit | 0.805 | 385 |
| organization-education | 0.868 | 2097 |
| organization-government | 0.905 | 2939 |
| organization-media | 0.888 | 1963 |
| organization-onlinebusiness | 0.538 | 197 |
| organization-other | 0.788 | 4733 |
| organization-politicalparty | 0.956 | 2272 |
| organization-privatecompany | 0.849 | 1809 |
| organization-religious | 0.638 | 210 |
| organization-sportsteam | 0.946 | 1065 |
| person-actor/director | 0.797 | 1480 |
| person-artist/author | 0.853 | 5812 |
| person-athlete | 0.871 | 1306 |
| person-group | 0.485 | 699 |
| person-influencer | 0.0 | 17 |
| person-other | 0.811 | 8444 |
| person-politician | 0.863 | 3259 |
| person-scholar/scientist | 0.728 | 672 |
| product-E-device | 0.51 | 102 |
| product-clothing | 0.222 | 27 |
| product-consumer_good | 0.0 | 20 |
| product-food | 0.673 | 324 |
| product-other | 0.0 | 69 |
| product-software | 0.67 | 382 |
| product-vehicle | 0.825 | 114 |

Additional information

Author

Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center ([email protected])

Contact information

For further information, send an email to [email protected]

Copyright

Copyright (c) 2023 Language Technologies Unit (LangTech) at Barcelona Supercomputing Center

Licensing Information

Apache License, Version 2.0

Funding

This work has been promoted and financed by the Government of Catalonia through the Aina project.

Citation information

Disclaimer


The models published in this repository are intended for a generalist purpose and are made available to third parties. These models may contain bias and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
