---
license: apache-2.0
library_name: transformers
base_model: AIDC-AI/Marco-o1
tags:
- abliterated
- uncensored
---
# huihui-ai/Marco-o1-abliterated
This is an uncensored version of [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
It is a crude, proof-of-concept implementation of removing refusals from an LLM without using TransformerLens.
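At its core, abliteration estimates a "refusal direction" in activation space (the difference between mean activations on harmful vs. harmless prompts) and projects that direction out of the model's weights. The toy NumPy sketch below illustrates only this linear-algebra step; it is not the actual script, and the function names and random data are purely illustrative:

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    """Unit vector from the harmless mean activation toward the harmful mean."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(weight, direction):
    """Remove the refusal direction from a weight matrix's output space.

    W' = W - r r^T W  zeroes the component of every output along r.
    """
    return weight - np.outer(direction, direction) @ weight

# Toy example with random stand-ins for real hidden states and weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
r = refusal_direction(rng.standard_normal((4, 8)), rng.standard_normal((4, 8)))
W_abl = ablate(W, r)
print(np.abs(r @ W_abl).max())  # close to zero: outputs are orthogonal to r
```

In the real procedure this projection is applied to the relevant weight matrices of every transformer layer, which is why the resulting model no longer steers generations toward refusals.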
## ollama
You can use [huihui_ai/marco-o1-abliterated](https://ollama.com/huihui_ai/marco-o1-abliterated) directly,
```
ollama run huihui_ai/marco-o1-abliterated
```
or create your own model using the following steps.
1. Download this model.
```
huggingface-cli download huihui-ai/Marco-o1-abliterated --local-dir ./huihui-ai/Marco-o1-abliterated
```
2. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) conversion script to convert the model to GGUF format.
```
python convert_hf_to_gguf.py huihui-ai/Marco-o1-abliterated --outfile huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf --outtype f16
```
3. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) quantization tool to quantize the model (`llama-quantize` must be compiled first); see [quantize.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp) for other quantization options.
```
llama-quantize huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf Q4_K_M
```
4. Pull the original Marco-o1 model for reference.
```
ollama pull marco-o1
```
5. Export the Marco-o1 Modelfile.
```
ollama show marco-o1 --modelfile > Modelfile
```
6. Modify the Modelfile: remove all comment lines (those starting with `#`) before the `FROM` keyword, then replace the `FROM` line with the following content.
```
FROM huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf
```
7. Use ollama to create the model.
```
ollama create -f Modelfile Marco-o1-abliterated
```
8. Run the model.
```
ollama run Marco-o1-abliterated
```
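Besides the interactive CLI, the created model can be queried programmatically through ollama's REST API (`POST /api/generate` on the default port 11434). The sketch below only builds the request; sending it (shown in the trailing comment) assumes a local ollama server is running with the model created above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for a local ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Marco-o1-abliterated", "Hello!")
# With the server running, send it like this:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```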