Many good models use Orca in their merges, but vanilla Orca has a vocabulary size of 32003: the last 3 tokens are ChatML tokens and a PAD token. This causes errors when merging with models that have the standard 32000 vocabulary size.

I've removed those tokens from the vocabulary and resized the model embeddings to match the standard size of 32000, so this model is ready to be used as a merge component in mergekit. It may no longer work on its own with the ChatML template.
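For example, with the vocabularies aligned, this model can be combined with any standard 32000-vocabulary 13B Llama-2-based model. A minimal mergekit config sketch (the second model name and the equal weights are placeholders, not a recommendation):

```yaml
merge_method: linear
models:
  - model: TeeZee/Orca-2-13b_flat
    parameters:
      weight: 0.5
  - model: some-org/some-13b-model   # hypothetical merge partner
    parameters:
      weight: 0.5
dtype: float16
```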

```python
model.resize_token_embeddings(32000)
```
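Conceptually, shrinking the vocabulary just drops the trailing rows of the token embedding matrix (and the tied LM head). A toy sketch of that operation with plain Python lists, no transformers dependency; the 32003/32000 sizes mirror the real model, the embedding dimension here is made up:

```python
# Toy embedding matrix: one row per token (the real model is 32003 x 5120).
vocab_extra, vocab_std, dim = 32003, 32000, 4
embeddings = [[float(i)] * dim for i in range(vocab_extra)]

# Trimming to the standard size keeps the first 32000 rows and discards
# the 3 trailing ChatML/PAD rows, which is what
# model.resize_token_embeddings(32000) does to the embedding weights.
trimmed = embeddings[:vocab_std]

assert len(trimmed) == vocab_std  # 32000 rows remain
```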