
When Llama3-TAIDE is run locally with llama.cpp-based software (such as ollama), it generates tokens endlessly and cannot hold a normal conversation. This release corrects the stop token so the model ends its turns properly and conversation works as expected.
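For reference, the same fix can be applied on the user side when running the original GGUF with ollama, by registering Llama 3's turn-end token as a stop sequence in a Modelfile. This is a minimal sketch; the GGUF filename and model tag below are placeholders, not names from this repo:

```
# Modelfile (sketch) — path to your local GGUF file is an assumption
FROM ./llama3-taide-q4.gguf

# Llama 3's chat template ends each assistant turn with <|eot_id|>.
# If only <|end_of_text|> is treated as the stop token, the model
# keeps generating past the end of its turn.
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|end_of_text|>"
```

Then build and run it with `ollama create taide-fixed -f Modelfile` and `ollama run taide-fixed`.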


---
license: other
license_name: same-as-taide
license_link: LICENSE
---

Model details:
- Format: GGUF (4-bit quantization)
- Model size: 8.03B params
- Architecture: llama