Norquinal committed
Commit 82c2d2b · 1 Parent(s): ac32404

Update README.md

Files changed (1)
  1. README.md +20 -27
README.md CHANGED
@@ -1,34 +1,27 @@
  ---
- library_name: peft
  ---
- ## Training procedure

- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16

- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: nf4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float16
- ### Framework versions

- - PEFT 0.4.0

- - PEFT 0.4.0
 
  ---
+ license: cc-by-nc-4.0
+ datasets: Norquinal/OpenCAI
+ language: en
+ tags:
+ - art
+ - not-for-all-audiences
  ---
+ # OpenCAI

+ OpenCAI is a model fine-tuned from [Llama-2-13B](https://huggingface.co/NousResearch/Llama-2-13b-hf) and is an attempted open-source recreation of the style of roleplay found at [C.AI](https://beta.character.ai/). It was trained on [4800 samples](https://huggingface.co/datasets/Norquinal/OpenCAI/tree/main) of Discord roleplay interactions rather than C.AI outputs, as Discord is the likely origin of a majority of the material used to train C.AI's original model.

+ This model is primarily focused on chat and roleplay without any alignment. As such, it may output content that can be considered "unsafe" or "harmful." Please use it responsibly and to the best of your judgement.

+ ## Prompt Format

+ This model uses the Pygmalion-2/Metharme prompt format. The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>`, and `<|model|>`.

+ The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can occur multiple times and be chained together to form a conversation history.
+
+ ### Example Prompt
+ ```
+ <|system|>{system_prompt}
+ Characters:
+ [char]: [description]
+ Summary: [summary of events]
+ ```
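
As a usage illustration (not part of the commit itself), the following is a minimal sketch of how the Metharme-style prompt described above could be assembled and passed to the model with the `transformers` library. The repository id, character details, and generation settings are placeholder assumptions, not values taken from this card.

```python
# Minimal sketch: build a Metharme-format prompt and generate a reply.
# Assumptions: the model is published under "Norquinal/OpenCAI" (hypothetical id)
# and standard transformers text-generation usage applies.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Norquinal/OpenCAI"  # assumed repo id for this model card

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# <|system|> injects out-of-channel setup, <|user|> carries the user's message,
# and a trailing <|model|> asks the model to produce the next response.
prompt = (
    "<|system|>Enter roleplay mode.\n"
    "Characters:\n"
    "Aria: a sarcastic space pirate\n"
    "Summary: Aria and the user are docked at a trading post.\n"
    "<|user|>Aria, what's our next move?\n"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, skipping the prompt portion.
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```

Longer conversations would simply append further `<|user|>`/`<|model|>` pairs to the prompt before the final `<|model|>` token, as described in the Prompt Format section.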