
PARM V2

🧀 Which quant is right for you?

  • Q4: Best for very low-end devices such as older phones or laptops thanks to its very compact size; quality is okay but fully usable.
  • Q6: Suited to most modern devices; good quality and very quick responses.
  • Q8: Also suited to most modern devices; responses are very high quality, but it is a little slower than Q6.
  • BF16: This lossless quant should only be used when maximum quality is needed; it is slow, but its text output is the highest quality.
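
Once you've picked a quant, here is a minimal sketch of downloading it with the huggingface_hub Python client. The repo id and filename below are placeholders, not the real ones; check this repository's file list for the exact GGUF names.

from huggingface_hub import hf_hub_download

# Placeholder repo id and filename -- substitute the actual values
# from this repository's file list.
path = hf_hub_download(
    repo_id="Pinkstack/PARM-V2-gguf",   # assumed repo id, may differ
    filename="parm-v2-q6_k.gguf",       # assumed filename for the Q6 quant
)
print(path)  # local path to the downloaded .gguf file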

Things you should be aware of when using PARM models (Pinkstack Accuracy Reasoning Models) 🧀

This PARM is based on Qwen 2.5 0.5B, given extra reasoning training so that its outputs resemble Qwen QwQ (only much smaller). We trained it with this dataset. It is designed to run on any device, from your phone to a high-end PC, which is why we've included a BF16 quant.

To use this model, you must use a service that supports the GGUF file format. Additionally, this is the prompt template; it uses the qwen2 template.

{{- if .Suffix }}<|fim_prefix|>{{ .Prompt }}<|fim_suffix|>{{ .Suffix }}<|fim_middle|>
{{- else if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
# Tools
You may call one or more functions to assist with the user query.
You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{- else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{- end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
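
If your runtime does not apply templates for you, the non-tool path of the template above reduces to the standard qwen2 (ChatML-style) layout. A minimal sketch of rendering a system + user turn by hand, assuming plain Python string formatting:

def render_qwen2(system: str, user: str) -> str:
    # Mirrors the simple (no tools, no history) branch of the
    # template above: system block, user block, then an open
    # assistant block for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = render_qwen2("You are a helpful assistant.", "Why is the sky blue?")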

Or, if your runtime uses anti-prompts (stop strings), stop on <|im_end|> and start generation with <|im_start|>assistant, matching the template above.
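
When the tool-calling path is used, the template has the model emit each call as a JSON object inside <tool_call></tool_call> tags. Here is a minimal sketch of extracting those calls from raw output; the example tool name is made up:

import json
import re

def parse_tool_calls(text: str) -> list:
    # Collect every {"name": ..., "arguments": ...} object found
    # inside <tool_call>...</tool_call> blocks.
    calls = []
    for block in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL):
        for line in block.splitlines():
            line = line.strip()
            if line:
                calls.append(json.loads(line))
    return calls

out = '<tool_call>\n{"name": "get_weather", "arguments": {"city": "Paris"}}\n</tool_call>'
print(parse_tool_calls(out))  # [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]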

We highly recommend using this model with a system prompt.
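
As one example of a GGUF-capable runtime, here is a sketch using llama-cpp-python with a system prompt. The model path is a placeholder, and the sampling settings are generic defaults rather than tuned recommendations.

from llama_cpp import Llama

# Placeholder path -- point this at whichever quant you downloaded.
llm = Llama(model_path="parm-v2-q6_k.gguf", n_ctx=4096)

# llama-cpp-python applies the chat template embedded in the GGUF,
# which for this model is the qwen2 template shown above.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful reasoning assistant."},
        {"role": "user", "content": "What is 12 * 7? Think step by step."},
    ],
    max_tokens=256,
    stop=["<|im_end|>"],  # qwen2 end-of-turn token
)
print(result["choices"][0]["message"]["content"])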

Extra information

  • Developed by: Pinkstack
  • License: apache-2.0
  • Fine-tuned from: unsloth/qwen2.5-0.5b-instruct-bnb-4bit

This model was trained using Unsloth and Hugging Face's TRL library.

Used this model? Don't forget to leave a like :)

GGUF details

  • Model size: 494M params
  • Architecture: qwen2
  • Quants provided: 4-bit (Q4), 6-bit (Q6), 8-bit (Q8), 16-bit (BF16)