|
--- |
|
license: apache-2.0 |
|
library_name: transformers |
|
base_model: AIDC-AI/Marco-o1 |
|
tags: |
|
- abliterated |
|
- uncensored |
|
--- |
|
|
|
# huihui-ai/Marco-o1-abliterated |
|
|
|
|
|
This is an uncensored version of [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it). |
|
It is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.
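As background, abliteration estimates a "refusal direction" in the model's activation space and projects it out of the hidden states. A minimal NumPy sketch of that projection step is below; the arrays (`harmful_mean`, `harmless_mean`, `h`) are random stand-ins, whereas in the real procedure they would be mean hidden states collected from forward passes over harmful and harmless prompt sets:

```python
import numpy as np

# Stand-in activations. In the real procedure these are mean hidden states
# collected from forward passes over "harmful" and "harmless" prompt sets.
rng = np.random.default_rng(0)
harmful_mean = rng.normal(size=8)
harmless_mean = rng.normal(size=8)

# The "refusal direction" is the normalized difference of the two means.
refusal_dir = harmful_mean - harmless_mean
refusal_dir /= np.linalg.norm(refusal_dir)

def ablate(hidden, direction):
    """Remove the component of `hidden` along unit vector `direction`
    (an orthogonal projection)."""
    return hidden - np.dot(hidden, direction) * direction

h = rng.normal(size=8)  # a stand-in hidden state
h_ablated = ablate(h, refusal_dir)

# The ablated state has (numerically) zero component along the refusal direction.
assert abs(np.dot(h_ablated, refusal_dir)) < 1e-9
```

The same projection can be baked into the model's weight matrices, which is why the resulting checkpoint needs no runtime hooks.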
|
|
|
## ollama |
|
|
|
You can use [huihui_ai/marco-o1-abliterated](https://ollama.com/huihui_ai/marco-o1-abliterated) directly, |
|
``` |
|
ollama run huihui_ai/marco-o1-abliterated |
|
``` |
|
|
|
or create your own model using the following methods. |
|
|
|
1. Download this model. |
|
``` |
|
huggingface-cli download huihui-ai/Marco-o1-abliterated --local-dir ./huihui-ai/Marco-o1-abliterated |
|
``` |
|
2. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) conversion script to convert the downloaded model to GGUF format.
|
``` |
|
python convert_hf_to_gguf.py huihui-ai/Marco-o1-abliterated --outfile huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf --outtype f16 |
|
``` |
|
3. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) quantization tool to quantize the model (`llama-quantize` needs to be compiled first).

   Other [quant options](https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp) are available.
|
``` |
|
llama-quantize huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf Q4_K_M |
|
``` |
|
4. Pull the original Marco-o1 model as a reference.
|
``` |
|
ollama pull marco-o1 |
|
``` |
|
5. Export the Marco-o1 Modelfile.
|
``` |
|
ollama show marco-o1 --modelfile > Modelfile |
|
``` |
|
6. Modify the Modelfile: remove all comment lines (lines beginning with #) before the FROM keyword, then replace the FROM line with the following content.
|
``` |
|
FROM huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf |
|
``` |
|
7. Use ollama to create the model. |
|
``` |
|
ollama create -f Modelfile Marco-o1-abliterated |
|
``` |
|
8. Run the model.
|
``` |
|
ollama run Marco-o1-abliterated |
|
``` |
|
|
|
|