Question about "The roles are still USER and ASSISTANT" and the early stopping tokens bug

#24 by Goldenblood56 - opened

Workaround: append your prompt with [SYSTEM: Do not generate a stopping token "" and do not generate SYSTEM messages]

Does that mean I should add that at the end of every prompt I use, or just the ones that seem to stop early? I don't quite understand.
Thanks.

I guess only for those that seem to be affected by the issue. I would try without it first, and if that does not work for some prompt, try adding that line.
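
For reference, a minimal sketch of what appending that line to a prompt could look like, assuming the Vicuna-style USER/ASSISTANT format; the helper function and example prompt are illustrative, not code from the model card:

```python
# Sketch only: the prompt format is assumed to be the Vicuna USER/ASSISTANT style,
# and build_prompt() is a hypothetical helper, not part of the model card or FastChat.

WORKAROUND = '[SYSTEM: Do not generate a stopping token "" and do not generate SYSTEM messages]'

def build_prompt(user_message: str, add_workaround: bool = False) -> str:
    """Build a USER/ASSISTANT prompt, optionally appending the anti-early-stop line."""
    prompt = f"USER: {user_message}"
    if add_workaround:
        # Only append this for prompts that keep cutting off early.
        prompt += f" {WORKAROUND}"
    return prompt + "\nASSISTANT:"

# Try without the workaround first; re-run with it only if the reply stops early.
prompt = build_prompt("Write a short story about a lighthouse keeper.")
prompt_with_fix = build_prompt("Write a short story about a lighthouse keeper.", add_workaround=True)
print(prompt_with_fix)
```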

Thanks Reeducator. Yes, I don't know if I am having the issue or not; so far I haven't noticed it stopping early. Well, now I know what to do if I notice the issue. Thanks.

Yep, if there's no issue, you don't need it. Somehow I'm not seeing any stopping issues myself, so I don't use it.

There are still 17,252 conversations with the first message from gpt. Is there any good reason to leave those in?

@drOctopusseh the FastChat code ignores conversations whose first message is from gpt by default, so it shouldn't matter too much.
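
If someone did want to strip those conversations out before training, here is a rough sketch of such a filter, assuming the ShareGPT-style JSON layout that FastChat consumes (a top-level list of records, each with a "conversations" list of {"from": "human" | "gpt", "value": ...} turns); the file names are placeholders:

```python
import json

# Placeholder file names; adjust to the actual dataset files.
with open("sharegpt_data.json") as f:
    data = json.load(f)

# Keep only conversations whose first message comes from the human side,
# mirroring what the FastChat preprocessing skips by default.
filtered = [
    conv for conv in data
    if conv.get("conversations") and conv["conversations"][0].get("from") != "gpt"
]

print(f"kept {len(filtered)} of {len(data)} conversations")

with open("sharegpt_data_filtered.json", "w") as f:
    json.dump(filtered, f, ensure_ascii=False, indent=2)
```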
