Great model, but…
#1 opened by Jeximo
Hi, thank you as always. This model is exceptional, but when running it in llama.cpp I've noticed that the assistant eventually reaches a point where it simply stops generating, well before the maximum context.
It has randomly stopped around 700, 1100, and 1200 tokens a few times, even with `--context 2048`.
It's not only your repo: I've downloaded the identical model from two other repos and hit the same abrupt stop.
I don't know whether the problem is in the dataset or in llama.cpp.
Any experience with this? Have you heard of similar issues?
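For reference, one way to rule out llama.cpp's own generation-length defaults is an invocation like the sketch below. Flag names are from llama.cpp's `main` example, and the model path and prompt are placeholders:

```shell
# -c / --ctx-size sets the context window; -n -1 removes the cap on the
# number of tokens to generate, so a stop can only come from the model
# emitting EOS or from the context filling up.
./main -m ./model.gguf -c 2048 -n -1 -p "Your prompt here"

# If it still stops early, the model is most likely emitting an EOS
# token; adding --ignore-eos forces generation past it (output quality
# may degrade, but it distinguishes a model-side stop from a CLI limit).
```

If the early stop disappears with `-n -1`, the cause was the token-generation cap rather than the model or dataset.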
Thank you.
Jeximo changed discussion status to closed