---
inference: false
license: other
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
tags:
- wizardlm
- uncensored
- gptq
- quantization
- auto-gptq
- 7b
- llama
- 4bit
---

# Get Started
This model is quantized with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), so install the `auto-gptq` package (e.g. `pip install auto-gptq`) to load it.
- `no-act-order` model
- 4-bit model quantization
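
The snippet below loads the quantized checkpoint with `AutoGPTQForCausalLM.from_quantized` and wraps it in a `transformers` text-generation pipeline: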

```py
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM

model_id = 'seonglae/wizardlm-7b-uncensored-gptq'
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    trust_remote_code=True,
    device='cuda:0',
    use_triton=False,
    use_safetensors=True,
)
# If AutoGPTQ cannot locate the checkpoint automatically, pass
# model_basename='<checkpoint filename without extension>' to from_quantized.

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    temperature=0.5,
    top_p=0.95,
    max_new_tokens=100,
    repetition_penalty=1.15,
)
prompt = "USER: Are you AI?\nASSISTANT:"
pipe(prompt)
```
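
The text-generation pipeline returns a list of dicts whose `generated_text` field echoes the prompt followed by the completion. A minimal sketch for printing only the assistant's reply:

```py
# 'generated_text' includes the original prompt, so slice it off
result = pipe(prompt)[0]['generated_text']
print(result[len(prompt):].strip())
```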