
# InternLM2-Chat-20B-ToxicRP-QLORA-Merged

This model was fine-tuned by me using compute provided by g4rg. Big thanks to everyone who helped. Do whatever you want with this model, just don't do anything illegal.

A non-quantized version is available here: Aculi/InternLM2-Chat-20B-ToxicRP

Have fun

This model uses the ChatML prompt format, by the way.
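Since the card only names the format, here is a minimal sketch of the ChatML layout this model expects (the `chatml_prompt` helper is hypothetical, not part of this repo):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a single-turn ChatML prompt: each turn is wrapped in
    <|im_start|>{role} ... <|im_end|>, and the prompt ends with an
    open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```

Most chat front-ends (e.g. those with a ChatML preset) will build this string for you; the sketch just shows what ends up being sent to the model.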

## Merge Method

This model was merged using the passthrough merge method.

## Models Merged

The following models were included in the merge:

- output/intervitens_internlm2-limarp-chat-20b-2+Fischerboot/InternLM2-ToxicRP-QLORA-4Bit

## Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 48]
    model: output/intervitens_internlm2-limarp-chat-20b-2+Fischerboot/InternLM2-ToxicRP-QLORA-4Bit
```
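This looks like a mergekit-style configuration: a passthrough merge copies layers 0-47 from the single listed source unchanged. As a quick sanity check, the config can be loaded and inspected with PyYAML (assuming `pyyaml` is installed; the embedded string is just the config above):

```python
import yaml  # third-party: pip install pyyaml

# The merge configuration from this card, embedded for illustration.
config_text = """
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 48]
    model: output/intervitens_internlm2-limarp-chat-20b-2+Fischerboot/InternLM2-ToxicRP-QLORA-4Bit
"""

config = yaml.safe_load(config_text)

# Passthrough keeps the source layers as-is; count how many are carried over.
start, end = config["slices"][0]["sources"][0]["layer_range"]
print(config["merge_method"], end - start)
```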