Model Details

This is Qwen/Qwen2.5-1.5B with a vocabulary reduced to 32k entries using the Minivoc approach. The model has been created, tested, and evaluated by The Kaitchup.

All the details about the Minivoc approach and its evaluation are in this article: Introducing Minivoc: Faster and Memory-Efficient LLMs Through Vocabulary Reduction
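The article linked above describes the full Minivoc recipe. As a rough illustration only (not the actual Minivoc implementation), the core idea of vocabulary reduction can be sketched as slicing the rows of the input embedding and output (lm_head) matrices down to a kept subset of token ids; the toy sizes below are placeholders for Qwen2.5's ~152k-entry vocabulary and the 32k target:

```python
import numpy as np

# Toy stand-in sizes: a "full" vocabulary of 10 tokens reduced to 4.
# In the real model, Qwen2.5's ~152k entries are reduced to 32k.
full_vocab_size, hidden_dim = 10, 8
rng = np.random.default_rng(0)

# Stand-ins for the input embedding and output (lm_head) weight matrices.
embed = rng.standard_normal((full_vocab_size, hidden_dim))
lm_head = rng.standard_normal((full_vocab_size, hidden_dim))

# Token ids selected to survive (e.g., the most frequent on a reference corpus).
kept_ids = np.array([0, 2, 3, 7])

# Slicing the rows of both matrices yields the smaller vocabulary; an old id
# maps to its position in kept_ids.
small_embed = embed[kept_ids]
small_lm_head = lm_head[kept_ids]
old_to_new = {old: new for new, old in enumerate(kept_ids.tolist())}

print(small_embed.shape)  # (4, 8)
```

After this pruning, a short fine-tuning phase (the training steps mentioned below) lets the model adapt to the reduced vocabulary.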

  • Developed by: The Kaitchup
  • Language(s) (NLP): English
  • License: apache-2.0

Training Data

The Minivoc approach has two training steps. For this model, I used 0.2B tokens randomly sampled from HuggingFaceFW/fineweb-edu.
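A token-budget sample like this can be sketched as follows. The snippet uses a toy in-memory corpus and whitespace splitting as stand-ins; in practice the documents would come from HuggingFaceFW/fineweb-edu (e.g. streamed with `datasets.load_dataset(..., streaming=True)`) and the real tokenizer would count tokens:

```python
import random

# Toy stand-in corpus; in practice: HuggingFaceFW/fineweb-edu documents.
corpus = [f"document number {i} with a few words" for i in range(1000)]

def sample_token_budget(docs, budget, seed=0):
    """Shuffle documents and collect them until roughly `budget` tokens are gathered."""
    rng = random.Random(seed)
    docs = list(docs)
    rng.shuffle(docs)  # random sampling via a seeded shuffle
    sampled, n_tokens = [], 0
    for doc in docs:
        tokens = doc.split()  # whitespace split stands in for the real tokenizer
        sampled.append(doc)
        n_tokens += len(tokens)
        if n_tokens >= budget:
            break
    return sampled, n_tokens

# For the real model the budget would be 0.2B tokens; 500 keeps the demo fast.
docs, n = sample_token_budget(corpus, budget=500)
print(n >= 500)  # True
```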

  • Model size: 1.36B parameters
  • Tensor type: BF16
  • Format: Safetensors

Dataset used to train kaitchup/Qwen2.5-1.5B-Minivoc-32k-v0.1a: HuggingFaceFW/fineweb-edu