---
base_model: internlm/internlm2_5-7b-chat
pipeline_tag: text-generation
license: apache-2.0
language:
- en
- zh
library_name: transformers
---
# internlm2_5-7b-chat-abliterated
**Version 1.1** (updated 9/1/2024): Layer 17 is now used for abliteration instead of layer 16. Refusal mitigation tends to work better with this layer, and PCA and cosine similarity tests seem to agree.
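The snippet below is a minimal sketch of the kind of layer check described above, not the exact code from the notebook: it compares the difference-of-means "refusal direction" against the top PCA component of the per-prompt activation differences at the chosen layer. The random activations are placeholders; in practice they would be residual-stream activations captured at layer 17 for paired harmful and harmless prompts.

```python
import torch
import torch.nn.functional as F

# Placeholder activations for illustration; in practice these are residual-stream
# activations collected at layer 17 for equal-sized harmful / harmless prompt sets.
hidden_size = 4096
harmful_acts = torch.randn(128, hidden_size)
harmless_acts = torch.randn(128, hidden_size)

def refusal_direction(harmful: torch.Tensor, harmless: torch.Tensor) -> torch.Tensor:
    # Difference of means between harmful and harmless prompt activations.
    direction = harmful.mean(dim=0) - harmless.mean(dim=0)
    return direction / direction.norm()

def top_pca_component(diffs: torch.Tensor) -> torch.Tensor:
    # First principal component of the per-prompt activation differences.
    centered = diffs - diffs.mean(dim=0)
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    return vh[0]

# A |cosine similarity| close to 1 suggests the mean-difference direction and the
# dominant PCA direction agree at this layer.
direction = refusal_direction(harmful_acts, harmless_acts)
pc1 = top_pca_component(harmful_acts - harmless_acts)
agreement = F.cosine_similarity(direction, pc1, dim=0).abs().item()
print(f"cosine similarity between mean-diff direction and PC1: {agreement:.3f}")
```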
Check out the Jupyter notebook for details of how this model was abliterated from internlm2_5-7b-chat.
Please also check out my newer abliteration of glm-4-9b-chat; its Jupyter notebook is a little more developed than this one.
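Below is a minimal usage sketch with 🤗 Transformers. The repo id is a placeholder for wherever this model is hosted, and generation settings are illustrative; `trust_remote_code=True` is assumed to be required, as with the base internlm2_5-7b-chat model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm2_5-7b-chat-abliterated"  # placeholder: replace with the full Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```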