TinyPascal

Vetted GGUF models known to work with TinyPascal GenAI.

Hermes-2-Pro-Llama-3-8B GGUF

How to define this model in TinyPascal GenAI:

GenAI_DefineModel(
  'Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf',     // model filename
  'Hermes-2-Pro-Llama-3-8B-Q4_K_M',          // model reference name
  8000,                                      // model context length
  '<|im_start|>{role}{content}<|im_end|>',   // model template
  '<|im_start|>assistant'                    // model template end
);
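
The {role} and {content} placeholders are filled in for each chat message, and the "model template end" string is appended to cue the model to produce the assistant's reply. The snippet below is a minimal sketch of that substitution, purely for illustration; it is not part of the TinyPascal GenAI API, which presumably performs an equivalent expansion internally.

program HermesTemplateDemo;

uses
  SysUtils;

var
  Prompt: string;
begin
  // Fill in one user turn using the Hermes-2-Pro (ChatML-style) template above.
  Prompt := StringReplace('<|im_start|>{role}{content}<|im_end|>', '{role}', 'user', []);
  Prompt := StringReplace(Prompt, '{content}', 'Why is the sky blue?', []);

  // Append the "model template end" string so the next tokens come from the assistant.
  Prompt := Prompt + '<|im_start|>assistant';

  WriteLn(Prompt);
  // Prints: <|im_start|>userWhy is the sky blue?<|im_end|><|im_start|>assistant
end.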

Phi-3.1-mini-4k-instruct GGUF

How to define this model in TinyPascal GenAI:

GenAI_DefineModel(
  'Phi-3.1-mini-4k-instruct-Q4_K_M.gguf',   // model filename
  'Phi-3.1-mini-4k-instruct-Q4_K_M',        // model reference name
  4000,                                     // model context length
  '<|{role}|> {content}<|end|>',            // model template
  '<|assistant|>'                           // model template end
);
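
The same substitution pattern applies to the Phi-3.1 template, which wraps roles as <|role|>. The sketch below assembles a short system-plus-user prompt and appends the "model template end" string; the ExpandTurn helper is hypothetical and only illustrates the expansion, it is not the library's actual code.

program PhiTemplateDemo;

uses
  SysUtils;

// Hypothetical helper (not part of the TinyPascal GenAI API): expand one
// chat turn using the Phi-3.1 template shown above.
function ExpandTurn(const ARole, AContent: string): string;
var
  S: string;
begin
  S := StringReplace('<|{role}|> {content}<|end|>', '{role}', ARole, []);
  S := StringReplace(S, '{content}', AContent, []);
  ExpandTurn := S;
end;

var
  Prompt: string;
begin
  // Chain a system turn and a user turn, then append the "model template end"
  // string so the model generates the assistant's reply next.
  Prompt := ExpandTurn('system', 'You are a helpful assistant.') +
            ExpandTurn('user', 'Summarize what a GGUF file is.') +
            '<|assistant|>';
  WriteLn(Prompt);
end.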