🛡️ Guardians of the Machine Translation Meta-Evaluation:
Sentinel Metrics Fall In!


This repository contains the SENTINELREF metric model, pre-trained on Direct Assessments (DA) annotations and further fine-tuned on Multidimensional Quality Metrics (MQM) data. For details on how to use our sentinel metric models, check out our GitHub repository.

Usage

After installing our repository package, you can use this model in Python as follows:

from sentinel_metric import download_model, load_from_checkpoint

# Download the model checkpoint from the Hugging Face Hub and load it
model_path = download_model("sapienzanlp/sentinel-ref-mqm")
model = load_from_checkpoint(model_path)

# Each sample contains only the reference translation ("ref")
data = [
    {"ref": "There's no place like home."},
    {"ref": "Toto, I've a feeling we're not in Kansas anymore."}
]

output = model.predict(data, batch_size=8, gpus=1)

Output:

# Segment scores
>>> output.scores
[0.5577929019927979, 0.3894208073616028]

# System score
>>> output.system_score
0.4736068546772003
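
As a sanity check on the values above, the system-level score appears to be the arithmetic mean of the segment-level scores. This is an inference from the numbers shown here, not a documented guarantee of the library; a minimal sketch:

```python
# Segment scores taken from the example output above.
segment_scores = [0.5577929019927979, 0.3894208073616028]

# The reported system score matches the arithmetic mean of the segment scores.
system_score = sum(segment_scores) / len(segment_scores)
print(system_score)  # ~0.4736, matching output.system_score
```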

Cite this work

This work was published at ACL 2024 (Main Conference). If you use any part of it, please cite our paper as follows:

@inproceedings{perrella-etal-2024-guardians,
    title = "Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In!",
    author = "Perrella, Stefano  and Proietti, Lorenzo  and Scir{\`e}, Alessandro  and Barba, Edoardo  and Navigli, Roberto",
    editor = "Ku, Lun-Wei  and Martins, Andre  and Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.856",
    pages = "16216--16244",
}

License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

