abacaj committed · Commit 1802ca0 · 1 Parent(s): 0200589

Update README.md

Files changed (1): README.md (+47 −0)
@@ -28,6 +28,53 @@ language:
 - en
 ---
 
+How to run inference:
+```python
+import transformers
+import torch
+
+
+def fmt_prompt(prompt: str) -> str:
+    return f"""[Instructions]:\n{prompt}\n\n[Response]:"""
+
+
+if __name__ == "__main__":
+    model_name = "abacaj/starcoderbase-1b-sft"
+    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
+
+    model = (
+        transformers.AutoModelForCausalLM.from_pretrained(
+            model_name,
+        )
+        .to("cuda:0")
+        .eval()
+    )
+
+    prompt = "Write a python function to sort the following array in ascending order, don't use any built in sorting methods: [9,2,8,1,5]"
+    prompt_input = fmt_prompt(prompt)
+    inputs = tokenizer(prompt_input, return_tensors="pt").to(model.device)
+    input_ids_cutoff = inputs.input_ids.size(dim=1)
+
+    with torch.no_grad():
+        generated_ids = model.generate(
+            **inputs,
+            use_cache=True,
+            max_new_tokens=512,
+            temperature=0.2,
+            top_p=0.95,
+            do_sample=True,
+            eos_token_id=tokenizer.eos_token_id,
+            pad_token_id=tokenizer.pad_token_id,
+        )
+
+    completion = tokenizer.decode(
+        generated_ids[0][input_ids_cutoff:],
+        skip_special_tokens=True,
+    )
+
+    print(completion)
+```
+
 Evals:
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62ceeb27e7f6014c0e9d9268/_i-IkouWb1qMz8c9LXB7M.png)
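The prompt template used by the script can be checked without loading the model. A minimal sketch (the example prompt below is made up, not from the README):

```python
def fmt_prompt(prompt: str) -> str:
    # Same template as in the inference script: the \n escapes inside the
    # f-string produce real newlines between the two bracketed sections.
    return f"""[Instructions]:\n{prompt}\n\n[Response]:"""


formatted = fmt_prompt("Reverse a string in python")
print(formatted)
# [Instructions]:
# Reverse a string in python
#
# [Response]:
```

The model was fine-tuned on this exact layout, so completions are only reliable when the prompt is wrapped this way before tokenization.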
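The `input_ids_cutoff` slicing in the script exists because `generate()` returns the prompt token ids followed by the newly generated ids. A toy sketch with made-up token ids (plain lists stand in for the tensors; real ids come from the tokenizer):

```python
# The prompt's token ids, as produced by tokenizing the formatted prompt.
prompt_ids = [10, 11, 12]

# generate() echoes the prompt ids, then appends the completion ids.
generated_ids = [10, 11, 12, 42, 43]

# Slice at the prompt length so only the completion is decoded.
input_ids_cutoff = len(prompt_ids)
completion_ids = generated_ids[input_ids_cutoff:]
print(completion_ids)  # [42, 43]
```

Without this slice, `tokenizer.decode` would print the instructions back out in front of the model's answer.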