Abstract

An LLM free of ethical direction; social, societal, racial, or political allegiance; concern for legality, morality, or ethics; and regard for individual well-being. Such an unbiased, unaligned tool could be applied in domains such as novel writing, content generation, translation, and summarization without the constraints typical of other models, making it possible to build more accurate and reliable text-based systems that unlock new possibilities for language processing and generation. Constructing such an AI requires drawing on data from diverse fields, including psychology, philosophy, sociology, neuroscience, English and world literature, world languages, and grammar. By carefully designing traits, values, and beliefs, it is possible to shape the AI's worldview and thought processes through artificially constructed datasets and specific model-merging techniques. Incremental testing, however, is necessary to refine the model and keep its direction amoral and unaligned as it progresses toward a final product.

Merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged with the linear DARE (dare_linear) merge method, using MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k as the base model.
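DARE (Drop And REscale) sparsifies each fine-tuned model's delta from the base by randomly dropping entries and rescaling the survivors, then combines the sparsified deltas linearly on top of the base. A minimal sketch of that idea on toy flat weight lists (plain Python for illustration, not mergekit's actual implementation):

```python
import random

def dare_linear(base, models, weights, drop_prob=0.1, seed=0):
    """Toy DARE-linear merge over flat weight lists.

    Each model's delta from the base is randomly sparsified:
    entries are dropped with probability drop_prob and survivors
    are rescaled by 1/(1-drop_prob) to preserve the delta's
    expected value. The sparsified deltas are then added to the
    base as a weighted sum.
    """
    rng = random.Random(seed)
    merged = list(base)
    for model, w in zip(models, weights):
        for i, (b, m) in enumerate(zip(base, model)):
            if rng.random() < drop_prob:
                continue  # drop this delta entry entirely
            merged[i] += w * (m - b) / (1.0 - drop_prob)
    return merged

base = [0.0, 1.0, 2.0, 3.0]
model_a = [0.5, 1.0, 2.5, 3.0]
model_b = [0.0, 2.0, 2.0, 4.0]
print(dare_linear(base, [model_a, model_b], [0.5, 0.5]))
```

With `drop_prob=0` this reduces to a plain weighted sum of deltas, which is a quick sanity check on the rescaling.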

Models Merged

The following models were included in the merge:

Configuration

==================================================================== MrRobotoAI/MrRoboto-BASE-D-Unholy-8b-64k #Specify writing style (approx. 4.4k word response)

The following YAML configuration was used to produce this model:

models:
  - model: MrRobotoAI/MrRoboto-BASE-C-Unholy-8b-64k+kik41/lora-formality-informal-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-C-Unholy-8b-64k+kik41/lora-length-long-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-C-Unholy-8b-64k+kik41/lora-sarcasm-more-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-C-Unholy-8b-64k+kik41/lora-type-descriptive-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-C-Unholy-8b-64k+kik41/lora-type-expository-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-C-Unholy-8b-64k+kik41/lora-type-narrative-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-C-Unholy-8b-64k+kik41/lora-type-persuasive-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-C-Unholy-8b-64k+DreadPoor/Everything-COT-8B-r128-LoRA
  - model: nothingiisreal/L3-8B-Stheno-Horny-v3.3-32K
  - model: MrRobotoAI/HEL-v0.8-8b-LONG-DARK+jspr/smut_llama_8b_smutromance_32k_peft
merge_method: model_stock
base_model: MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k
normalize: true
dtype: float16
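Several of the intermediate merges above use model_stock, which averages the fine-tuned checkpoints and interpolates toward the base, with the interpolation ratio derived from the angle between the fine-tuned deltas. A simplified sketch of that reading of the Model Stock method on flat weight lists (toy helper names, not mergekit's code):

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two flat weight lists."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def model_stock(base, models):
    """Toy Model Stock merge: interpolate between the base and the
    average of the fine-tuned models, with ratio t computed from the
    average pairwise cosine similarity of their deltas."""
    k = len(models)
    deltas = [[m - b for m, b in zip(model, base)] for model in models]
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cos = sum(cos_sim(deltas[i], deltas[j]) for i, j in pairs) / len(pairs)
    t = k * cos / (1.0 + (k - 1) * cos)
    avg = [sum(model[i] for model in models) / k for i in range(len(base))]
    return [t * a + (1.0 - t) * b for a, b in zip(avg, base)]

print(model_stock([0.0, 0.0], [[1.0, 0.0], [1.0, 0.1]]))
```

When the deltas all point the same way (cosine near 1), t approaches 1 and the merge is simply the average of the fine-tuned models; the more they disagree, the more the result is pulled back toward the base.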

==================================================================== MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k #Fixing word response (approx. 10.3k word response)

The following YAML configuration was used to produce this model:

models:
  - model: MrRobotoAI/MrRoboto-BASE-B-Unholy-8b-64k+Azazelle/Llama-3-LongStory-LORA
    parameters:
      weight: 0.45
      density: 0.9
  - model: MrRobotoAI/MrRoboto-BASE-D-Unholy-8b-64k+Azazelle/Llama-3-LongStory-LORA
    parameters:
      weight: 0.45
      density: 0.9
  - model: MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k+Azazelle/Llama-3-LongStory-LORA
    parameters:
      weight: 0.1
      density: 0.9
merge_method: dare_linear
base_model: MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k+Azazelle/Llama-3-LongStory-LORA
parameters:
  normalize: true
dtype: float16

==================================================================== MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k #Add humanistic aspects (approx. 5.5k word response)

The following YAML configuration was used to produce this model:

models:
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+Azazelle/L3-Daybreak-8b-lora
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+Azazelle/Llama-3-8B-Abomination-LORA
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+Azazelle/llama3-8b-hikikomori-v0.4
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B 
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+Azazelle/Llama-3-Sunfall-8b-lora
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+Azazelle/Nimue-8B
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+Blackroot/Llama3-RP-Lora
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+DreadPoor/abliteration-OVA-8B-r128-LORA
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+fimbulvntr/llewd-8b-64k
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+nothingiisreal/llama3-8B-DWP-lora
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+ResplendentAI/Aura_Llama3
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+ResplendentAI/BlueMoon_Llama3
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+ResplendentAI/Llama3_Aesir_Preview_LoRA_128
  - model: MrRobotoAI/MrRoboto-BASE-E-Unholy-8b-64k+ResplendentAI/Llama3_RP_ORPO_LoRA
merge_method: model_stock
base_model: MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k
normalize: true
dtype: float16

==================================================================== MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k #Add humanistic aspects (approx. 8.3k word response)

The following YAML configuration was used to produce this model:

models:
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3.1-Baldur-r64-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3.1-BRAG-r64-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3.1-Cakrawala-r128-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3.1-Control-r64-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3.1-Mistral-Data-r128-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3.1-Sekhmet-r128-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3.1-Spark-r64-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3-BlueSerp-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3-FantasyWriter-r64-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3-Smaug-r64-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3-Templar-r128-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+kromcomp/L3-Unaligned-r256-LoRA
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+ResplendentAI/Luna_Llama3
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+ResplendentAI/Theory_of_Mind_Llama3
  - model: MrRobotoAI/MrRoboto-BASE-F-Unholy-8b-64k+Azazelle/Llama-3-LongStory-LORA
merge_method: model_stock
base_model: MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k
normalize: true
dtype: float16

==================================================================== MrRobotoAI/MrRoboto-BASE-H-Unholy-8b-64k #Specify writing style (approx. 1.6k word response)

The following YAML configuration was used to produce this model:

models:
  - model: MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k+kik41/lora-formality-informal-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k+kik41/lora-length-long-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k+kik41/lora-sarcasm-more-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k+kik41/lora-type-descriptive-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k+kik41/lora-type-expository-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k+kik41/lora-type-narrative-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k+kik41/lora-type-persuasive-llama-3-8b-v2
  - model: MrRobotoAI/MrRoboto-BASE-G-Unholy-8b-64k+Azazelle/Llama-3-LongStory-LORA
merge_method: model_stock
base_model: MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k
normalize: true
dtype: float16

==================================================================== MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k #Fixing word response (approx. 9.2k word response)

The following YAML configuration was used to produce this model:

merge_method: dare_linear
models:
  - model: MrRobotoAI/MrRoboto-BASE-B-Unholy-8b-64k
    parameters:
      weight:
        - filter: v_proj
          value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
        - filter: o_proj
          value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
        - filter: up_proj
          value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
        - filter: gate_proj
          value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
        - filter: down_proj
          value: [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
        - value: 1
  - model: MrRobotoAI/MrRoboto-BASE-H-Unholy-8b-64k+Azazelle/Llama-3-LongStory-LORA
    parameters:
      weight:
        - filter: v_proj
          value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
        - filter: o_proj
          value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
        - filter: up_proj
          value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
        - filter: gate_proj
          value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
        - filter: down_proj
          value: [0.2, 0.2, 0.25, 0.45, 0.65, 0.85, 0.65, 0.45, 0.25, 0.2, 0.2]
        - value: 0
base_model: MrRobotoAI/MrRoboto-BASE-Unholy-8b-64k
tokenizer_source: base
dtype: bfloat16
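The bracketed weight lists in the config above are layer-wise gradients: the 11 anchor values are spread across the model's transformer layers (32 in an 8B Llama 3), so the H branch dominates the middle layers while BASE-B dominates the ends. A sketch of how such a gradient could expand to per-layer weights, assuming simple linear interpolation (an illustration, not mergekit's actual code):

```python
def expand_gradient(anchors, n_layers):
    """Linearly interpolate a short list of anchor weights into
    one weight per layer (mergekit-style gradient expansion)."""
    out = []
    for i in range(n_layers):
        # position of this layer in anchor-index space
        pos = i * (len(anchors) - 1) / (n_layers - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out

anchors = [0.8, 0.8, 0.75, 0.55, 0.35, 0.15, 0.35, 0.55, 0.75, 0.8, 0.8]
per_layer = expand_gradient(anchors, 32)
print(per_layer[0], min(per_layer), per_layer[-1])
```

The V-shaped profile means the two branches' weights at each layer sum to roughly 1.0 (0.8/0.2 at the ends, 0.15/0.85 in the middle), which is consistent with the mirrored gradient on the second model.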