# mlx-community/TowerInstruct-v0.1-bfloat16-mlx
This model was converted to MLX format from `Unbabel/TowerInstruct-v0.1`. Refer to the original model card for more details on the model.
## Intended uses & limitations (from the original model card)
The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset (TowerBlocks), which contains a diverse range of data sources:
- Translation (sentence and paragraph-level)
- Automatic Post Edition
- Machine Translation Evaluation
- Context-aware Translation
- Terminology-aware Translation
- Multi-reference Translation
- Named-entity Recognition
- Paraphrase Generation
- Synthetic Chat data
- Code instructions
The TowerBlocks dataset and all of its data sources are linked from the original model card.
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/TowerInstruct-v0.1-bfloat16-mlx")

prompt = (
    "Translate the following text from Portuguese into French.\n"
    "Portuguese: Um grupo de investigadores lançou um novo modelo "
    "para tarefas relacionadas com tradução.\nFrench:"
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
# Un groupe d'investigateurs a lancé un nouveau modèle pour les tâches liées à la traduction.
```
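The snippet above passes the raw prompt straight to the model. TowerInstruct was fine-tuned with a ChatML-style chat template, so wrapping the message in that template (or calling the tokenizer's `apply_chat_template`, if available) usually matches the training format more closely. A minimal sketch, assuming the ChatML markers described on the original model card; the helper name is ours:

```python
# Sketch: wrap a user message in ChatML-style markers, which is the
# template format TowerInstruct was fine-tuned with (per the original
# model card). The resulting string is passed to generate() as before.

def chatml_prompt(user_message: str) -> str:
    """Return the message framed as a ChatML user turn, ending with an
    opened assistant turn so the model continues as the assistant."""
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "Translate the following text from Portuguese into French.\n"
    "Portuguese: Um grupo de investigadores lançou um novo modelo "
    "para tarefas relacionadas com tradução.\nFrench:"
)
# Pass `prompt` to generate(model, tokenizer, prompt=prompt, ...) as above.
```

If the tokenizer loaded by `mlx_lm.load` carries a chat template, `tokenizer.apply_chat_template([{"role": "user", "content": ...}], add_generation_prompt=True, tokenize=False)` produces the same framing without hard-coding the markers.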