---
base_model: eryk-mazus/polka-1.1b-chat
inference: false
language:
  - pl
license: apache-2.0
model_name: Polka-1.1B-Chat
model_type: tinyllama
model_creator: Eryk Mazuś
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
---

I've copy-pasted some information from TheBloke's model cards; I hope that's OK.

## Prompt template: ChatML

```
<|im_start|>system
Jesteś pomocnym asystentem.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
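For example, with a hypothetical user message substituted for `{prompt}`, the full prompt sent to the model looks like this:

```
<|im_start|>system
Jesteś pomocnym asystentem.<|im_end|>
<|im_start|>user
Napisz krótki wiersz o wiośnie.<|im_end|>
<|im_start|>assistant
```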

## Example llama.cpp command

```shell
./main -m ./polka-1.1b-chat-gguf/polka-1.1b-chat-Q8_0.gguf --color -c 2048 --temp 0.2 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\nJesteś pomocnym asystentem.<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```

To offload layers to the GPU, add `-ngl 32`, changing `32` to the number of layers to offload; see the example after this note. Omit the flag if you don't have GPU acceleration.
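For example, the command above with 32 layers offloaded (a sketch; pick a value that fits your GPU):

```shell
./main -m ./polka-1.1b-chat-gguf/polka-1.1b-chat-Q8_0.gguf --color -c 2048 -ngl 32 --temp 0.2 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\nJesteś pomocnym asystentem.<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```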

Change `-c 2048` to the desired sequence length. For extended-sequence models (e.g. 8K, 16K, 32K), the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
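For example (a sketch based on the command above; adjust the model path to your setup):

```shell
./main -m ./polka-1.1b-chat-gguf/polka-1.1b-chat-Q8_0.gguf --color -c 2048 --temp 0.2 --repeat_penalty 1.1 -i -ins
```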