Model Card for TowerInstruct-WMT24-Chat-7B

Model Details

Model Description

TowerInstruct-WMT24-Chat-7B is a language model that results from fine-tuning TowerBase on TowerBlocks and the WMT24 Chat MT Shared task training set.

TowerInstruct-WMT24-Chat-7B was the best submission to the shared task, winning on all 10 language pairs according to human evaluation (see the task's findings paper for details).

It is specifically tailored for context-aware translation of customer support chats.

Check out our paper for more details and information on training and data.

Information on model usage, out-of-scope uses, risks, etc. is the same as in the model cards of the TowerInstruct models.
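As a minimal usage sketch, the snippet below builds a context-aware translation prompt for a customer support chat turn. The ChatML-style tags (`<|im_start|>`/`<|im_end|>`) follow the chat template used by the TowerInstruct family; the exact instruction wording and the helper function are illustrative assumptions, not the official prompt format — consult the TowerInstruct model cards for the canonical template.

```python
def build_chat_translation_prompt(context_turns, message, src_lang, tgt_lang):
    """Format a prompt asking for a translation of `message` given prior chat turns.

    NOTE: illustrative sketch; the instruction wording is an assumption,
    not the official TowerInstruct-WMT24-Chat-7B prompt format.
    """
    context = "\n".join(f"{speaker}: {text}" for speaker, text in context_turns)
    instruction = (
        f"Translate the following customer support message from {src_lang} "
        f"to {tgt_lang}, taking the conversation context into account.\n"
        f"Context:\n{context}\n"
        f"Message: {message}"
    )
    # ChatML-style wrapping, as used by TowerInstruct chat templates
    return f"<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"

prompt = build_chat_translation_prompt(
    context_turns=[
        ("agent", "Hello! How can I help you today?"),
        ("customer", "Meine Bestellung ist noch nicht angekommen."),
    ],
    message="I'm sorry to hear that. Could you share your order number?",
    src_lang="English",
    tgt_lang="German",
)
```

The resulting string can then be passed to a `transformers` text-generation pipeline loaded with `Unbabel/TowerInstruct-WMT24-Chat-7B` to obtain the translation.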

Citation

@inproceedings{pombal2024improving,
  title={Improving Context Usage for Translating Bilingual Customer Support Chat with Large Language Models},
  author={Pombal, Jos{\'e} and Agrawal, Sweta and Martins, Andr{\'e} FT},
  booktitle={Proceedings of the Ninth Conference on Machine Translation},
  pages={993--1003},
  year={2024}
}

Built with Axolotl

Model size: 6.74B parameters (BF16, safetensors)