Llama 3.2 UK Legislation 3B Instruct GGUF
This model is a GGUF-quantized version of llama-3.2-uk-legislation-instruct-3b, designed for efficient inference with a low memory footprint on resource-constrained devices.
Model Details
- Base Model: EryriLabs/llama-3.2-uk-legislation-instruct-3b
- Quantization: GGUF
- Language: English (en)
- License: cc-by-4.0
- Developer: GPT-LABS.AI
Intended Use
This model is optimized for offline use on low-spec hardware, serving as a lightweight assistant for general questions about UK legislation in English. It was trained as part of an ongoing blog series (https://www.eryrilabs.co.uk/post/building-sara-a-lightweight-cybersecurity-assistant-for-everyday-laptops).
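For offline use without Ollama, the GGUF file can also be loaded with a library such as llama-cpp-python. The snippet below is a minimal sketch: the file name follows the Modelfile below, while the context size, thread count, and example prompt are illustrative assumptions rather than values from this model card.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name matches the Modelfile below; n_ctx, n_threads and the
# example prompt are assumptions chosen for a low-spec laptop.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-uk-legislation-instruct-3b-GGUF.gguf",
    n_ctx=4096,      # context window; reduce if memory is tight
    n_threads=4,     # CPU threads available on the machine
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the purpose of the Data Protection Act 2018."},
    ],
    temperature=0.4,   # matches the temperature set in the Modelfile
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```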
Ollama Modelfile
```
FROM llama-3.2-uk-legislation-instruct-3b-GGUF.gguf

PARAMETER temperature 0.4
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

SYSTEM """You are a helpful assistant."""
```
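Assuming the Modelfile above is saved as `Modelfile` in the same directory as the GGUF file, the model can be registered with `ollama create llama-3.2-uk-legislation-3b -f Modelfile` and then started with `ollama run llama-3.2-uk-legislation-3b`; the model name used here is illustrative.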
Limitations
- Intended for general guidance only; it is not a substitute for professional legal advice or consultation.