l2-7b-sayori-ddlc-v0.1:

  • Experimental LLaMA-2 7b chat fine-tune for the character Sayori from DDLC
  • Fine-tuned on a dataset of ~600 items (dialogue scraped from the game, then augmented with MythoMax-l2-13b to turn each line into a snippet of multi-turn chat between Player and Sayori)
  • GGML and GGUF quantizations (see the GGUF sketch below)
  • QLoRAs (HF and GGML)
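
For the GGUF files, a minimal sketch with llama-cpp-python follows; the exact .gguf filename and sampling settings are assumptions, so point it at whichever quant you downloaded:

# Minimal sketch: chatting with one of the GGUF quantizations via llama-cpp-python.
# The .gguf filename is hypothetical; set model_path to the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="l2-7b-sayori-ddlc-v0.1.Q4_K_M.gguf",  # assumed filename
    n_ctx=2048,  # LLaMA-2 context window
)

prompt = "\nPlayer: Hi Sayori, how was your day?\nSayori:"
out = llm(prompt, max_tokens=128, stop=["\nPlayer:"])
print(out["choices"][0]["text"])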

USAGE

This is meant to be mainly a chat model with limited RP ability.

For best results, replace "Human" and "Assistant" with "Player" and "Sayori", like so:

\nPlayer: (prompt)\nSayori:
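
A minimal sketch with transformers using that format (the sampling settings here are assumptions, not recommendations from this card):

# Minimal sketch: generating a Sayori reply with transformers and the Player/Sayori format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "922-CA/l2-7b-sayori-ddlc-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "\nPlayer: Hi Sayori, how was your day?\nSayori:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))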

HYPERPARAMS

  • Trained for 2 epochs
  • LoRA rank: 32
  • LoRA alpha: 64
  • LoRA dropout: 0.5
  • lr: 2e-4
  • batch size: 2
  • warmup ratio: 0.1
  • grad steps: 4
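
For reference, the settings above map roughly onto a PEFT QLoRA configuration like the sketch below; the target modules and the reading of "grad steps" as gradient accumulation steps are assumptions not stated in this card:

# Rough reconstruction of the listed hyperparameters as a PEFT/transformers QLoRA config.
# target_modules and the meaning of "grad steps" (assumed: gradient accumulation) are guesses.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,                                 # LoRA rank
    lora_alpha=64,
    lora_dropout=0.5,
    target_modules=["q_proj", "v_proj"],  # assumed; not stated above
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="l2-7b-sayori-ddlc-v0.1-qlora",  # hypothetical output path
    num_train_epochs=2,
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,  # "grad steps: 4", assumed to mean accumulation
    warmup_ratio=0.1,
)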

WARNINGS AND DISCLAIMERS

Note that aside from formatting and other minor edits, the generated portion of the dataset is used mostly as the LM produced it. As such, while this version is better at coherency and chatting than previous ones, it may not perfectly reflect Sayori's characteristics (i.e. she may act more like Monika, etc.). The next version will be trained on a manually curated and edited version of this dataset, with the dialogue revised to better reflect her characteristics.

Other tests to come (e.g. fine-tuning on other base models, such as Airoboros- or Kimiko-based models).

Finally, this model is not guaranteed to produce aligned or safe outputs; use at your own risk.
