ALmonster committed · Commit 032c691 (verified) · Parent: 33645ed

Update README.md

Files changed (1): README.md (+44 −2)
We fine-tuned our ChemGPT2-QA-72B based on the Qwen2-72B-Instruct model. Our training data, ChemGPT-2.0-Data, has been open-sourced and is available at https://huggingface.co/datasets/ALmonster/ChemGPT-2.0-Data.

We evaluated our model on the three chemistry tasks of C-Eval and compared it with GPT-3.5 and GPT-4. The results are as follows:

## C-Eval

| Models | college_chemistry | high_school_chemistry | middle_school_chemistry | AVG |
|--------|-------------------|-----------------------|-------------------------|-----|
| GPT-3.5 | 0.397 | 0.529 | 0.714 | 0.547 |
| GPT-4 | 0.594 | 0.558 | 0.811 | 0.654 |
| ChemGPT2-QA-72B | 0.710 | 0.936 | 0.995 | 0.880 |
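The AVG column is simply the unweighted mean of the three C-Eval task scores; it can be checked with a few lines of Python:

```python
from statistics import fmean

# Per-task scores from the C-Eval table above:
# (college_chemistry, high_school_chemistry, middle_school_chemistry)
scores = {
    "GPT-3.5": (0.397, 0.529, 0.714),
    "GPT-4": (0.594, 0.558, 0.811),
    "ChemGPT2-QA-72B": (0.710, 0.936, 0.995),
}

for model, tasks in scores.items():
    print(f"{model}: AVG = {fmean(tasks):.3f}")
```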
## Quickstart

The following code snippet shows how to load the tokenizer and model and generate content using `apply_chat_template`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "ALmonster/ChemGPT2-QA-72B",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("ALmonster/ChemGPT2-QA-72B")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens, keeping only the newly generated tokens.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
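For reference, `apply_chat_template` renders the message list into a ChatML-style prompt with `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of that rendering, assuming the model inherits the standard Qwen2-Instruct chat template (the `build_chatml` helper below is illustrative, not part of the model repo):

```python
# Illustrative sketch of a ChatML-style chat template (assumption:
# ChemGPT2-QA-72B inherits the standard Qwen2-Instruct template).
def build_chatml(messages, add_generation_prompt=True):
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave the prompt open for the assistant's reply.
        text += "<|im_start|>assistant\n"
    return text

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language model."},
]
print(build_chatml(messages))
```

In practice you should always use `tokenizer.apply_chat_template`, which reads the template shipped with the tokenizer, rather than hand-building the prompt.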