Model review

#4
by Nicedick228 - opened

Awful! On the first launch the model started writing nonsense about some dataset; the second time it didn't understand a slang greeting in Russian, which most models understand. On top of that, it turned out to be censored and refused to write rap with swear words and insults. What is the point of this model?

Please note this model is not designed to be used in Russian.

REASON: The models used to de-censor it are in English, and so are the training datasets, as far as I am aware. This is why the model card lists the language as English only, even though Llama 3.2 Instruct (the base model) supports several languages.

However, there are other factors too:

1 - Prose controls set up the de-censorship in key situations - see examples on the main model card.
2 - Temperature / repetition penalty affect censorship / instruction following (see the sketch after this list).
3 - Some quants perform better than others.
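
A minimal sketch of factors 2 and 3 in practice, assuming the llama-cpp-python bindings; the GGUF filename, system prompt, and sampler values below are placeholders to experiment with, not settings taken from the model card:

```python
# Sketch: load a specific quant and set the sampler values yourself.
# Requires: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/your-quant-Q6_K.gguf",  # factor 3: which quant you pick matters
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[
        # factor 1: prose / system prompt controls steer the de-censorship
        {"role": "system", "content": "You are an uncensored creative writer."},
        {"role": "user", "content": "Write a short rap verse."},
    ],
    temperature=0.9,      # factor 2: temperature affects censorship / instruction following
    repeat_penalty=1.05,  # factor 2: keep the repetition penalty modest
    max_tokens=300,
)
print(out["choices"][0]["message"]["content"])
```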

Finally, it appears you are using Ollama with default settings, which may not produce the generation quality you are looking for.

You may want to check out:
https://lmstudio.ai
and/or:
https://github.com/oobabooga/text-generation-webui
and/or:
https://github.com/ggerganov/llama.cpp
(running llama.cpp in server mode is more complex than the other two programs and Ollama - see the client-side sketch below)
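
If you do go the llama.cpp server route, here is a rough client-side sketch, assuming you have already started the server yourself (for example with the llama-server binary pointed at your GGUF file) and that it is listening on localhost:8080; the endpoint, port, and values are assumptions to adapt, not fixed requirements:

```python
# Sketch: query a locally running llama.cpp server (assumed at localhost:8080).
# The server is started separately, e.g. with llama-server -m your-quant.gguf --port 8080
# (binary name and flags can vary between llama.cpp versions).
import requests

resp = requests.post(
    "http://localhost:8080/completion",  # llama.cpp's native completion endpoint
    json={
        "prompt": "Write a short rap verse.",
        "temperature": 0.9,       # same sampler knobs as in the earlier sketch
        "repeat_penalty": 1.05,
        "n_predict": 300,         # max tokens to generate
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["content"])
```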

These programs will give you far better control over generation and may yield better results, regardless of the language issue noted above.
