The aim is to compress the mT5-large model so that it retains only the Ukrainian language and some basic English.

This reproduces a similar result (but for a different language) from this Medium article.

Results:

  • 1.2B params -> 779M params (37%)
  • 250K vocabulary tokens -> 8,900 tokens
  • 4.6GB model size -> 2.9GB model size
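The reductions above come from vocabulary trimming: most of mT5's parameters sit in the 250K-token embedding matrix, so keeping only the token ids used by the target corpora shrinks the model without retraining. Below is a minimal sketch of that slicing step on a toy embedding matrix (the sizes, the `kept_ids` set, and the `id_map` helper are illustrative, not the actual script used for this model):

```python
import numpy as np

# Toy sizes for illustration; mT5-large actually has a 250K-token
# vocabulary and d_model = 1024.
full_vocab_size, d_model = 1000, 16
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((full_vocab_size, d_model)).astype(np.float32)

# Hypothetical ids of tokens seen when tokenizing the target corpora
# (here: a stand-in for Ukrainian + basic English text).
kept_ids = np.array([0, 1, 2, 5, 17, 100, 999])

# Slice the embedding matrix down to the kept rows, and build an
# old-id -> new-id map so inputs can be re-tokenized consistently.
trimmed = embeddings[kept_ids]
id_map = {int(old): new for new, old in enumerate(kept_ids)}

print(trimmed.shape)        # (7, 16): far fewer rows, same hidden size
print(id_map[100])          # old id 100 becomes new id 5
```

In the real procedure the same trimmed matrix is assigned back to the model's shared embedding (and tied output head), and the SentencePiece tokenizer is rebuilt to match the new id order.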