Tokenizer and model vocab size different

#8
by abpani1994 - opened

```
(deberta): DebertaV2Model(
  (embeddings): DebertaV2Embeddings(
    (word_embeddings): Embedding(128100, 1024, padding_idx=0)
```

```
tokenizer.vocab_size = 128000
```

Why does the model's word embedding matrix have 128100 rows while the tokenizer reports a vocab size of 128000?
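A quick way to compare the numbers involved is sketched below. It assumes the `transformers` library and, since the thread does not name the checkpoint, uses `microsoft/deberta-v3-large`, whose sizes match the printout above (an assumption):

```python
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint: the thread does not name the exact model id,
# but these sizes match microsoft/deberta-v3-large.
name = "microsoft/deberta-v3-large"

tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

print(tokenizer.vocab_size)   # base vocabulary only, without added tokens
print(len(tokenizer))         # base vocabulary + added special tokens
print(model.get_input_embeddings().num_embeddings)  # rows in the embedding matrix
```

The mismatch is harmless as long as the embedding matrix has at least `len(tokenizer)` rows: checkpoints sometimes pad the embedding to a round number (here 128100), and the extra rows are simply never indexed. If you add tokens to the tokenizer, call `model.resize_token_embeddings(len(tokenizer))` to keep the two in sync.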

