Locally deployed model performs poorly. Model: CodeLlama-34b-Instruct-hf

#283
by like123 - opened

When I deployed CodeLlama-34b-Instruct-hf with transformers and ran inference with the default parameters, the outputs were very different from those of the Hugging Face hosted inference. Does the online inference apply any optimizations? How are the generation parameters and prompt set?
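For reference, here is roughly how I am loading and prompting the model locally. This is a minimal sketch: the sampling values (temperature=0.2, top_p=0.95) and the use of the tokenizer's chat template are my assumptions about a reasonable setup, not the hosted endpoint's confirmed settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-34b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# CodeLlama-Instruct was trained on the Llama-2 chat format ([INST] ... [/INST]);
# apply_chat_template wraps the message accordingly (assuming the tokenizer
# ships a chat template; otherwise the tags can be added by hand).
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.2,  # a low temperature is a common choice for code generation
    top_p=0.95,
)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```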
