wangrongsheng committed on
Commit dbf357e
1 Parent(s): 9ed120a

Update README.md

README.md CHANGED
@@ -1 +1,28 @@
- LLM-Detector: Improving AI-generated Chinese Text Detection with Large Language Models
+ LLM-Detector: Improving AI-generated Chinese Text Detection with Large Language Models
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from transformers.generation import GenerationConfig
+
+ # Note: the default behavior now has injection attack prevention off.
+ tokenizer = AutoTokenizer.from_pretrained("QiYuan-tech/LLM-Detector-Small-zh", trust_remote_code=True)
+
+ # Use bf16 precision:
+ # model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-zh", device_map="auto", trust_remote_code=True, bf16=True).eval()
+
+ # Use fp16 precision:
+ # model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-zh", device_map="auto", trust_remote_code=True, fp16=True).eval()
+
+ # Use CPU only:
+ # model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-zh", device_map="cpu", trust_remote_code=True).eval()
+
+ # Auto mode: automatically select precision based on the device.
+ model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-zh", device_map="auto", trust_remote_code=True).eval()
+ # model = AutoModelForCausalLM.from_pretrained("QiYuan-tech/LLM-Detector-Small-zh", device_map="auto", trust_remote_code=True).cuda()
+
+ # Specify hyperparameters for generation. With transformers>=4.32.0 this is not needed.
+ # model.generation_config = GenerationConfig.from_pretrained("./Qwen-1_8B-Chat", trust_remote_code=True)  # You can specify a different generation length, top_p, and other related hyperparameters.
+
+ response, history = model.chat(tokenizer, "你好", history=None)
+ print(response)
+ ```
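The commented-out variants above choose among bf16, fp16, and CPU by hand. As a minimal sketch (not part of the original README, and `pick_dtype` is a hypothetical helper), the same choice can be made programmatically by probing the hardware with PyTorch:

```python
import torch

def pick_dtype() -> torch.dtype:
    # Prefer bfloat16 on CUDA devices that support it, fall back to
    # float16 on other CUDA devices, and use float32 on CPU.
    if torch.cuda.is_available():
        if torch.cuda.is_bf16_supported():
            return torch.bfloat16
        return torch.float16
    return torch.float32

# The chosen dtype could then be passed when loading the model,
# e.g. via the torch_dtype argument of from_pretrained.
print(pick_dtype())
```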