internlm2_5-7b-chat-abliterated

Version 1.1 (updated 9/1/2024): Layer 17 is now used for abliteration instead of layer 16. Refusal mitigation tends to work better with this layer, and PCA and cosine-similarity tests seem to agree.
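For readers unfamiliar with the technique: abliteration typically estimates a "refusal direction" as the difference of mean hidden states (at a chosen layer, here layer 17) between refusal-inducing and harmless prompts, then projects that direction out of the model's weights. The sketch below illustrates the core linear algebra on synthetic activations; the array shapes, prompt counts, and function names are illustrative assumptions, not the notebook's actual code.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Unit-norm mean-difference direction between two activation sets."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Remove the direction d from the output space of weight matrix W:
    W' = (I - d d^T) W, so d is orthogonal to every output W' x."""
    return W - np.outer(d, d) @ W

rng = np.random.default_rng(0)
# Hypothetical layer-17 hidden states: 32 prompts x 64-dim residual stream
harmful = rng.normal(size=(32, 64)) + 1.5   # shifted along some direction
harmless = rng.normal(size=(32, 64))

d = refusal_direction(harmful, harmless)
W = rng.normal(size=(64, 64))               # stand-in for a layer's output projection
W_abl = ablate(W, d)

# After ablation, the refusal direction contributes nothing to the output.
print(np.allclose(d @ W_abl, 0.0, atol=1e-8))
```

In practice this projection is applied to the attention output and MLP down-projection matrices of the real model, and the layer choice (16 vs. 17 here) is validated by checking how consistently the candidate direction separates the two prompt sets, e.g. via PCA or cosine similarity of per-layer directions.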

Check out the Jupyter notebook for details on how this model was abliterated from internlm2_5-7b-chat.

Please also check out my newer abliteration of glm-4-9b-chat; its Jupyter notebook is a little more developed than this one.


Model size: 7.74B params (Safetensors, BF16)
Inference Providers: this model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API because that API does not support models that require custom code execution.

Model tree for byroneverson/internlm2_5-7b-chat-abliterated: finetuned from internlm2_5-7b-chat; 2 quantized versions of this model are available.