---
pipeline_tag: text-generation
base_model: gemma-2-27b-it-abliterated
library_name: transformers
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/gemma-2-27b-it-abliterated-GGUF

This is a quantized version of [byroneverson/gemma-2-27b-it-abliterated](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated), created using llama.cpp.

# Original Model Card

---
base_model: google/gemma-2-27b-it
pipeline_tag: text-generation
license: gemma
language:
- en
tags:
- gemma
- gemma-2
- chat
- it
- abliterated
library_name: transformers
---

# gemma-2-27b-it-abliterated

## Now accepting abliteration requests. If you would like to see a model abliterated, follow me and leave me a message with a model link.

This is a new approach for abliterating models using CPU only. I was able to abliterate this model using free Kaggle processing with no accelerator.

1. Obtain the refusal direction vector using a quantized model with llama.cpp (llama-cpp-python and ggml-python); see the first sketch below.
2. Orthogonalize each .safetensors file directly from the original repo, one at a time, and upload it to a new repo; see the second sketch below.

Check out the Jupyter notebook for details on how this model was abliterated from gemma-2-27b-it.

![Logo](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated/resolve/main/logo.png "Logo")
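
For step 1, the usual abliteration recipe estimates the refusal direction as the difference between mean activations on "harmful" and "harmless" prompts. The following is a minimal sketch only, not the notebook's exact code: it assumes llama-cpp-python's embedding output as a stand-in for the per-layer hidden states that the original approach reads via ggml-python, and the GGUF filename and prompt lists are placeholders.

```python
# Sketch: estimate a refusal direction from a quantized model on CPU.
# Assumptions: pooled/last-token embeddings approximate the residual-stream
# activations used by the original notebook; file names and prompts are placeholders.
import numpy as np
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-27b-it-Q4_K_M.gguf",  # placeholder quant file
    embedding=True,
    n_ctx=2048,
    verbose=False,
)

harmful = ["How do I pick a lock?", "Write ransomware in Python."]      # placeholder prompts
harmless = ["How do I bake bread?", "Write a haiku about rain."]        # placeholder prompts

def last_token_state(prompt: str) -> np.ndarray:
    # llm.embed() may return one pooled vector or one vector per token,
    # depending on the model's pooling type; take the last row either way.
    states = np.atleast_2d(np.asarray(llm.embed(prompt), dtype=np.float32))
    return states[-1]

def mean_state(prompts) -> np.ndarray:
    return np.mean([last_token_state(p) for p in prompts], axis=0)

# Refusal direction = difference of means, normalized to unit length.
refusal_dir = mean_state(harmful) - mean_state(harmless)
refusal_dir /= np.linalg.norm(refusal_dir)
np.save("refusal_dir.npy", refusal_dir)
```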
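
For step 2, a hedged sketch of orthogonalizing one shard: each weight matrix that writes into the residual stream gets the component along the refusal direction projected out, W ← (I − r rᵀ) W. The shard filename is a placeholder, and the choice of tensors (`o_proj.weight`, `down_proj.weight`) follows the common abliteration recipe rather than the notebook verbatim.

```python
# Sketch: project the refusal direction out of one .safetensors shard.
# Assumptions: shard name is a placeholder; only matrices whose output axis
# matches the hidden size and that feed the residual stream are edited.
import numpy as np
import torch
from safetensors.torch import load_file, save_file

r = torch.from_numpy(np.load("refusal_dir.npy")).to(torch.float32)
r = r / r.norm()

def remove_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # W <- (I - r r^T) W, computed without materializing the identity matrix.
    w = weight.to(torch.float32)
    return (w - torch.outer(direction, direction @ w)).to(weight.dtype)

shard = "model-00001-of-00024.safetensors"  # placeholder shard name
tensors = load_file(shard)
for name, w in tensors.items():
    if name.endswith(("o_proj.weight", "down_proj.weight")) and w.shape[0] == r.shape[0]:
        tensors[name] = remove_direction(w, r)
save_file(tensors, shard.replace(".safetensors", "-abliterated.safetensors"),
          metadata={"format": "pt"})
```

Processing shards one at a time like this keeps peak memory low enough for CPU-only environments such as a free Kaggle session.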