---
datasets:
  - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
  - jondurbin/airoboros-gpt4-1.4.1
  - openai/summarize_from_feedback
  - ehartford/wizard_vicuna_70k_unfiltered
language:
  - en
tags:
  - llama
---

Trained on a flavorful melange of the WizardLM, Airoboros, and Wizard Vicuna datasets. This model was trained using both linear and NTK-aware RoPE scaling in tandem. When loading, ensure that `compress_pos_emb` (called `scale` in some loaders) is set to 2 and `alpha_value` is set to 4. Both values must be set.
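
For example, with the exllama loader the two values might be set like this. This is a minimal sketch, not part of the model itself: the imports assume the turboderp/exllama repo is on your Python path, and all file paths are placeholders for your local copy of the model.

```python
# Minimal sketch: loading with both RoPE scaling values set, using exllama.
# Assumes the turboderp/exllama repo is on your Python path; the file paths
# below are placeholders.
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer

config = ExLlamaConfig("/path/to/model/config.json")
config.model_path = "/path/to/model/model.safetensors"
config.compress_pos_emb = 2.0  # linear RoPE scaling ("scale" in some loaders)
config.alpha_value = 4.0       # NTK-aware RoPE scaling
config.max_seq_len = 8192      # context length known to work

model = ExLlama(config)
tokenizer = ExLlamaTokenizer("/path/to/model/tokenizer.model")
cache = ExLlamaCache(model)
```

In text-generation-webui, these correspond to the `compress_pos_emb` and `alpha_value` loader options.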

Context lengths up to 8192 should work reliably. The model will probably maintain coherence into the ~12k range, but I have not tested that.

Prompt format is Vicuna 1.1:

    <whatever nonsense system prompt you want>
    USER: ...
    ASSISTANT: ...
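
For illustration, a single-turn prompt in this format can be assembled like so. This is a sketch; the system prompt and user message are placeholder examples, not anything the model requires verbatim.

```python
# Sketch: building a single-turn Vicuna 1.1 style prompt.
# The system prompt and user message are placeholder examples.
system_prompt = (
    "A chat between a curious user and an artificial intelligence assistant."
)
user_message = "What does NTK-aware RoPE scaling do?"

prompt = f"{system_prompt} USER: {user_message} ASSISTANT:"
print(prompt)
```

For multi-turn conversations, append the model's reply after `ASSISTANT:` and continue alternating `USER:` / `ASSISTANT:` turns.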