
The aim is to compress the mT5-small model so that it retains only Ukrainian and some basic English.

This reproduces a similar result (but for a different language) from this Medium article.

Results:

  • 300M params -> 75M params (a 75% reduction)
  • 250K tokens -> 8900 tokens
  • 1.1GB size model -> 0.3GB size model
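The bulk of the size reduction comes from shrinking the vocabulary (250K -> 8900 tokens), since mT5's embedding matrix dominates its parameter count. A minimal sketch of the embedding-trimming step is shown below; it is illustrative only (the `trim_embeddings` helper and the toy sizes are assumptions, not code from this repository), using a small random embedding as a stand-in for the real mT5 embedding table.

```python
import torch

def trim_embeddings(old_emb: torch.nn.Embedding,
                    keep_ids: list) -> torch.nn.Embedding:
    """Copy only the rows for kept token ids into a smaller embedding.

    Hypothetical helper: after deciding which token ids to keep
    (e.g. the most frequent tokens in a Ukrainian + English corpus),
    the new embedding matrix contains only those rows, in the order
    of the new, remapped vocabulary.
    """
    new_emb = torch.nn.Embedding(len(keep_ids), old_emb.embedding_dim)
    new_emb.weight.data = old_emb.weight.data[keep_ids].clone()
    return new_emb

# Toy demonstration (250 rows stand in for the 250K-token vocab,
# the kept ids stand in for the ~8.9K surviving tokens):
old = torch.nn.Embedding(250, 8)
keep = list(range(0, 250, 28))
new = trim_embeddings(old, keep)
assert new.weight.shape == (len(keep), 8)
```

In the real procedure the tokenizer's SentencePiece model is rebuilt to match the kept tokens, and the same trimming is applied to the tied output (LM head) weights.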