SciMaker committed · Commit a85b9cc · verified · 1 Parent(s): 849f42a

The original model could not hold a conversation; setting the correct stop token lets it converse normally


When Llama3-TAIDE is run locally with llama.cpp-based software (such as ollama), it generates tokens endlessly and cannot converse properly. Correcting the stop token resolves this issue and restores normal conversation.
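For users who cannot use the fixed GGUF directly, a stop token can also be overridden at the Modelfile level in ollama. A minimal sketch, assuming the model uses Llama 3's `<|eot_id|>` end-of-turn token (the usual culprit when a Llama 3 GGUF generates endlessly):

```
# Hypothetical Modelfile for the fixed quant in the current directory
FROM ./taide-8b-a.3-q4_k_m_fix.gguf
# Tell ollama to stop generation at Llama 3's end-of-turn token
PARAMETER stop "<|eot_id|>"
```

Registering it with `ollama create taide-fix -f Modelfile` would then apply the stop token at inference time without editing the GGUF metadata itself.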

Files changed (2):
  1. .gitattributes +1 -0
  2. taide-8b-a.3-q4_k_m_fix.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+taide-8b-a.3-q4_k_m_fix.gguf filter=lfs diff=lfs merge=lfs -text
taide-8b-a.3-q4_k_m_fix.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a78cf6a86722f996f0c372bacb7b3176b62f16ebb6fe47975fe0eda2b473c3f
+size 4921247328