Model keeps generating

#39
by godaspeg - opened

I am using Yi-34B-Chat with the config from the example:
max_length=2048,
do_sample=True,
repetition_penalty=1.3,
no_repeat_ngram_size=5,
temperature=0.7,
top_k=40,
top_p=0.8,

The model does not stop generating until max_length is reached, and at some point it starts producing random tokens. How can I prevent this and make the model stop automatically once the question is answered / the task is fulfilled?
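
For reference, a minimal sketch of how these parameters are typically passed to `generate` with Hugging Face transformers; the model id and prompt are placeholders, not taken from the original post:

```python
# Sketch of the reported setup; sampling parameters mirror the values listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-34B-Chat"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder chat prompt
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_length=2048,
    do_sample=True,
    repetition_penalty=1.3,
    no_repeat_ngram_size=5,
    temperature=0.7,
    top_k=40,
    top_p=0.8,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```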

Same problem!!!

Hi,

  1. Could you verify the eos_token_id passed to the generation function? It is set by default in the generation_config and should be 7.
  2. I recommend using the default settings for the generation config, particularly for the repetition_penalty. The prompt contains a number of special tokens, such as the stop token "<|im_end|>", and the penalty can suppress them so they are never generated again; see the sketch below this list.
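
A short sketch of both checks, assuming the model and tokenizer are loaded as in the snippet earlier in this thread (model id and prompt are again placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "01-ai/Yi-34B-Chat"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

# 1. Verify the EOS token: the default generation_config should use 7,
#    which is the id of the "<|im_end|>" stop token.
print(model.generation_config.eos_token_id)           # expected: 7
print(tokenizer.convert_tokens_to_ids("<|im_end|>"))   # expected: 7

# 2. Generate with the default generation config, i.e. without overriding
#    repetition_penalty, so special tokens like "<|im_end|>" are not
#    penalized away and generation can stop on its own.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```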

Hi Kai, thank you so much for your quick answer. Using the settings from the GitHub repo fixes the bug.

https://github.com/01-ai/Yi

godaspeg changed discussion status to closed
