Tags: PyTorch, English, llama, Not-For-All-Audiences, nsfw

Roleplay-calibrated quantization in EXL2 format for Lumimaid 0.2 70B

Quantized with the cleaned PIPPA roleplay dataset as the calibration data.

All tests were performed on a headless Linux instance with no active desktop environment, to maximize available VRAM. The software used was TabbyAPI with Q4 cache enabled.
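For reference, the quant can also be loaded directly through the exllamav2 Python API with the same Q4 cache. This is a minimal sketch, not the exact test setup: the model path is a placeholder and exact API details may vary slightly between exllamav2 versions.

# Minimal sketch: load this EXL2 quant with exllamav2 and a quantized (Q4) KV cache.
# Assumptions: exllamav2 is installed and the quant has been downloaded to a local
# directory (placeholder path below), with enough free VRAM across the available GPUs.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config()
config.model_dir = "/models/Lumimaid-v0.2-70B_exl2-rpcal"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)   # Q4 cache, as used in the tests above
model.load_autosplit(cache)                   # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, who are you?", max_new_tokens=128))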

Other quants available on request, feel free to ask!

See original model for further details.

Original Model card

Lumimaid 0.2

8b - 12b - [70b] - 123b

This model is based on: Meta-Llama-3.1-70B-Instruct

Wandb: https://wandb.ai/undis95/Lumi-Llama-3-1-70B?nw=nwuserundis95

Lumimaid 0.1 -> 0.2 is a HUGE step up dataset-wise.

As some people have told us our models are sloppy, Ikari decided to say fuck it and literally nuke out all the chats with the most slop.

Our dataset has stayed the same since day one: we added data over time, cleaned it, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!

Prompt template: Llama-3-Instruct

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
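
As a minimal sketch (plain Python string formatting, not the authors' own tooling), the template above can be filled in like this:

def build_llama3_prompt(system_prompt: str, user_input: str) -> str:
    # Assembles the Llama-3-Instruct template shown above; the assistant
    # header is left open so the model generates the {output} part.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )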

Credits:

  • Undi
  • IkariDev

Training data we used to make our dataset:

We sadly couldn't find the sources of the following; DM us if you recognize your set!

  • Opus_Instruct-v2-6.5K-Filtered-v2-sharegpt
  • claude_sharegpt_trimmed
  • CapybaraPure_Decontaminated-ShareGPT_reduced

Datasets credits:

  • Epiculous
  • ChaoticNeutrals
  • Gryphe
  • meseca
  • PJMixers
  • NobodyExistsOnTheInternet
  • cgato
  • kalomaze
  • Doctor-Shotgun
  • Norquinal
  • nothingiisreal

Others

Undi: If you want to support us, you can do so here.

IkariDev: Visit my retro/neocities style website please kek
