  This is an uncensored version of [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
  This is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens.
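The core idea behind abliteration can be sketched in a few lines: estimate a "refusal direction" as the difference between mean activations on harmful vs. harmless prompts, then project that direction out of the model's weight matrices so they can no longer write along it. The following is a toy numpy illustration of that projection step only (illustrative names and random data; the actual script linked above operates on real transformer hidden states):

```python
import numpy as np

def refusal_direction(harmful_acts, harmless_acts):
    # Estimated "refusal direction": mean difference of hidden states,
    # normalized to a unit vector.
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(weight, direction):
    # Project the refusal direction out of the weight's output space:
    # W' = W - d d^T W, so W' can no longer write along d.
    d = direction[:, None]  # (hidden, 1)
    return weight - d @ (d.T @ weight)

rng = np.random.default_rng(0)
hidden = 8
harmful = rng.normal(size=(16, hidden)) + 1.0   # toy "harmful" activations
harmless = rng.normal(size=(16, hidden))        # toy "harmless" activations
d = refusal_direction(harmful, harmless)

W = rng.normal(size=(hidden, hidden))
W_abl = ablate(W, d)
print(np.abs(d @ W_abl).max())  # ≈ 0: no output component left along d
```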
## ollama
1. Download this model.
```
huggingface-cli download huihui-ai/Marco-o1-abliterated --local-dir ./huihui-ai/Marco-o1-abliterated
```
2. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) conversion script to convert the model to GGUF format.
```
python convert_hf_to_gguf.py huihui-ai/Marco-o1-abliterated --outfile huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf --outtype f16
```
3. Quantize the model (`llama-quantize` must be compiled first).
```
llama-quantize huihui-ai/Marco-o1-abliterated/ggml-model-f16.gguf huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf Q4_K_M
```
4. Pull the original Marco-o1 model for reference.
```
ollama pull marco-o1
```
5. Export the Marco-o1 Modelfile.
```
ollama show marco-o1 --modelfile > Modelfile
```
6. Edit the Modelfile: remove all comment lines (those starting with `#`) before the `FROM` keyword, then replace the `FROM` line with the following.
```
FROM huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf
```
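After editing, the top of the Modelfile might look like this (illustrative; keep whatever `TEMPLATE` and `PARAMETER` lines the exported file already contained):

```
FROM huihui-ai/Marco-o1-abliterated/ggml-model-Q4_K_M.gguf
# ...exported TEMPLATE and PARAMETER lines remain below, unchanged...
```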
7. Create the quantized model with `ollama create`.
```
ollama create -f Modelfile Marco-o1-abliterated
```
8. Run the model.
```
ollama run Marco-o1-abliterated
```