Gemma-3-27B Instruct Uncensored 8-bit MLX

Uncensored version of Gemma 3 27B.

This model was converted to MLX format from nidum/Nidum-gemma-3-27B-it-Uncensored using mlx-vlm version 0.1.19.

Refer to the original Gemma 3 model card and the uncensored model card for more details.

Technical Details

Supports a context length of 128K tokens, with a maximum output of 8192 tokens.

Multimodal: accepts image input, with images normalized to 896 × 896 resolution.

Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate --model TheCluster/gemma-3-27b-it-uncensored-mlx-8bit --max-tokens 128 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
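For scripted use, the CLI invocation above can be driven from Python via the standard library. This is a minimal sketch assuming only the command shown above; the `build_generate_cmd` and `describe_image` helpers are ours, not part of mlx-vlm:

```python
import subprocess

# Repo id as listed in the model tree; adjust if you use a local path.
MODEL = "TheCluster/gemma-3-27b-it-uncensored-mlx-8bit"

def build_generate_cmd(image_path, prompt="Describe this image.", max_tokens=128):
    """Assemble the mlx_vlm.generate CLI invocation shown above."""
    return [
        "python", "-m", "mlx_vlm.generate",
        "--model", MODEL,
        "--max-tokens", str(max_tokens),
        "--temperature", "0.0",
        "--prompt", prompt,
        "--image", image_path,
    ]

def describe_image(image_path):
    """Run the command and return the generated text (stdout)."""
    result = subprocess.run(
        build_generate_cmd(image_path),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Example: print(describe_image("photo.jpg"))
```

Temperature 0.0 keeps the description deterministic; raise it and `--max-tokens` for longer, more varied output.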
Safetensors · Model size: 8.42B params · Tensor types: BF16, U32