Minivoc
This is Qwen/Qwen2.5-1.5B with its vocabulary reduced to 32k entries using the Minivoc approach. The model was created, tested, and evaluated by The Kaitchup.
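To get a rough sense of what this reduction saves, here is a back-of-envelope estimate. The figures are assumptions for illustration only: Qwen2.5-1.5B is assumed to use a hidden size of 1536 and an embedding vocabulary of roughly 152k entries, and "32k" is taken as 32,000.

```python
# Back-of-envelope estimate of parameters saved by shrinking the vocabulary.
# All figures are assumptions for illustration, not exact model specs.
hidden_size = 1536          # assumed hidden size of Qwen2.5-1.5B
original_vocab = 151_936    # assumed original embedding rows
reduced_vocab = 32_000      # "32k entries" as stated above

saved_params = (original_vocab - reduced_vocab) * hidden_size
saved_bytes_bf16 = saved_params * 2  # 2 bytes per bf16 parameter

print(f"{saved_params:,} parameters saved per embedding matrix")
print(f"~{saved_bytes_bf16 / 1e9:.2f} GB saved in bf16")
```

If the input embeddings and the output projection are not tied, the saving applies to both matrices, roughly doubling the total.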
All the details about the Minivoc approach and its evaluation are in this article: Introducing Minivoc: Faster and Memory-Efficient LLMs Through Vocabulary Reduction
The Minivoc approach has two training steps. For this model, I used 0.2B tokens randomly sampled from HuggingFaceFW/fineweb-edu.
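The actual Minivoc training recipe is described in the article linked above; as a generic illustration only, the core idea of vocabulary reduction can be sketched as keeping just the embedding rows of the retained tokens and remapping token ids (the token selection and the fine-tuning step are Minivoc specifics not shown here):

```python
import numpy as np

# Toy illustration (not the actual Minivoc code): shrinking a vocabulary
# amounts to keeping only the embedding rows of the retained tokens,
# then fine-tuning the model to recover quality.
rng = np.random.default_rng(0)
full_vocab, hidden = 8, 4              # tiny stand-ins for ~152k x 1536
embeddings = rng.normal(size=(full_vocab, hidden))

kept_token_ids = [0, 2, 3, 7]          # hypothetical tokens that survive the cut
reduced = embeddings[kept_token_ids]   # new, smaller embedding matrix

# Map old token ids to new ids, needed to remap the tokenizer as well
remap = {old: new for new, old in enumerate(kept_token_ids)}

print(reduced.shape)   # (4, 4)
print(remap[7])        # 3
```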