msaad02 committed · Commit 65e970b · verified · 1 Parent(s): 34f65dd

Create README.md

Files changed (1):
1. README.md +47 -0
README.md ADDED
@@ -0,0 +1,47 @@
---
datasets:
- msaad02/brockport-gpt-4-qa
language:
- en
---
# BrockportGPT 7B

This is a LLaMA-2 7B model fine-tuned on a question-answer dataset synthetically generated from SUNY Brockport's website using GPT-4. The goal is to answer any question related to SUNY Brockport, which the model does with varying degrees of success.

For more information, see https://www.matthewsaad.com/projects/brockportgpt/

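
The training data is the `msaad02/brockport-gpt-4-qa` dataset listed in the metadata above. As a quick sketch, it can be browsed with the `datasets` library; the `"train"` split name and the column layout are whatever the dataset defines, not something this card specifies:

```python
from datasets import load_dataset

# Load the synthetic GPT-4 question-answer pairs used for fine-tuning.
qa = load_dataset("msaad02/brockport-gpt-4-qa")

print(qa)              # show the available splits and column names
print(qa["train"][0])  # first example (assumes the default "train" split)
```
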
Use this code for inference; it loads the model in 4-bit (NF4) with bitsandbytes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
import torch

# Wrap a question in the LLaMA-2 chat format with the BrockportGPT system prompt.
prompt = lambda question: f"""\
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant for SUNY Brockport, a public college in Brockport, New York. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.

If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>

{question} [/INST]
"""

REPO_ID = "msaad02/BrockportGPT-7b"

# Load the model in 4-bit NF4 via bitsandbytes and wrap it in a text-generation pipeline.
generator = pipeline(
    task='text-generation',
    tokenizer=AutoTokenizer.from_pretrained(REPO_ID),
    model=AutoModelForCausalLM.from_pretrained(
        pretrained_model_name_or_path=REPO_ID,
        quantization_config=BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4',
        ),
    ),
    torch_dtype=torch.bfloat16,
    device_map={"": 0},
)
```
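
A minimal usage sketch, reusing `prompt` and `generator` from above. The question and the generation settings (`max_new_tokens`, `temperature`, `do_sample`) are illustrative choices, not values specified by this card:

```python
question = "What undergraduate programs does SUNY Brockport offer?"

out = generator(
    prompt(question),
    max_new_tokens=256,       # illustrative cap on answer length
    do_sample=True,
    temperature=0.7,          # illustrative sampling settings
    return_full_text=False,   # return only the generated answer, not the prompt
)
print(out[0]["generated_text"])
```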