doberst committed aacc98d (parent: ea69a16): Upload README.md
Files changed: README.md (+87 -3)

---
license: apache-2.0
inference: false
---

# SLIM-Q-GEN-PHI-3

**slim-q-gen-phi-3** implements specialized function-calling question generation from a context passage, with output in the form of a Python dictionary, e.g.,

&nbsp;&nbsp;&nbsp;&nbsp;`{'question': ['What were earnings per share in the most recent quarter?']}`

This model is fine-tuned on top of the phi-3-mini-4k-instruct base model.

For fast inference, we recommend the quantized 'tool' version, e.g., [**'slim-q-gen-phi-3-tool'**](https://huggingface.co/llmware/slim-q-gen-phi-3-tool).


## Prompt format:

`function = "generate"`
`params = "{'question', 'boolean', or 'multiple choice'}"` (choose one of the three question types)
`prompt = "<human>: " + {text} + "\n" + `
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp;`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
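
For illustration, the short sketch below assembles the prompt exactly as specified above and prints the final string passed to the model; the sample passage is ours, not from the model card:

```python
# Assemble the prompt in the documented format (the sample passage is illustrative).
function = "generate"
params = "question"

text = "The central bank raised interest rates by 25 basis points on Wednesday."

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"
print(prompt)
```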


<details>
<summary>Transformers Script</summary>

import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

# load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("llmware/slim-q-gen-phi-3")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-q-gen-phi-3")

function = "generate"
params = "boolean"

text = "Tesla stock declined 8% in premarket trading yesterday after a poorly-received event in San Francisco, in which the company indicated a likely shortfall in revenue."

# assemble the prompt in the documented format
prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=200
)

# decode only the newly generated tokens
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)

[OUTPUT]: {'llm_response': {'question': ['Did Tesla stock decline more than 8% yesterday?']} }

# here's the fun part - the output is a well-formed python dictionary string
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)

</details>
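
The same flow generalizes across the three `params` modes. Below is a minimal sketch, with a helper name (`generate_questions`) of our own choosing, that wraps the steps above and loops over the three supported question types; it assumes the same model and prompt format shown in the script:

```python
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-q-gen-phi-3")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-q-gen-phi-3")

def generate_questions(text, params):
    # assemble the prompt in the documented format and sample a response
    prompt = "<human>: " + text + "\n" + f"<generate> {params} </generate>\n<bot>:"
    inputs = tokenizer(prompt, return_tensors="pt")
    start_of_input = len(inputs.input_ids[0])
    outputs = model.generate(
        inputs.input_ids,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.7,
        max_new_tokens=200,
    )
    response = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
    try:
        return ast.literal_eval(response)   # python dictionary on success
    except (ValueError, SyntaxError):
        return response                     # raw string fallback

text = "Tesla stock declined 8% in premarket trading yesterday after a poorly-received event in San Francisco."
for params in ["question", "boolean", "multiple choice"]:
    print(params, "->", generate_questions(text, params))
```

Because sampling is enabled (`do_sample=True`, `temperature=0.7`), repeated calls on the same passage will generally yield different questions, which can be useful for building question sets over a document.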

<details>
<summary>Using as Function Call in LLMWare</summary>

from llmware.models import ModelCatalog

# load the model with sampling enabled
slim_model = ModelCatalog().load_model("llmware/slim-q-gen-phi-3", sample=True, temperature=0.7)

text = "Tesla stock declined 8% in premarket trading yesterday after a poorly-received event in San Francisco."

response = slim_model.function_call(text, params=["boolean"], function="generate")

print("llmware - llm_response: ", response)

</details>
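
For the quantized 'tool' version recommended above, a similar call pattern should apply; the sketch below assumes that 'slim-q-gen-phi-3-tool' is available through the same `ModelCatalog` interface with the same `function_call` signature (see the linked model page to confirm):

```python
from llmware.models import ModelCatalog

# Assumption: the quantized tool version is available through the llmware
# ModelCatalog under this name and exposes the same function_call interface.
tool_model = ModelCatalog().load_model("slim-q-gen-phi-3-tool", sample=True, temperature=0.7)

text = "Tesla stock declined 8% in premarket trading yesterday after a poorly-received event in San Francisco."

response = tool_model.function_call(text, params=["question"], function="generate")
print("llmware - llm_response: ", response)
```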


## Model Card Contact

Darren Oberst & llmware team

[Join us on Discord](https://discord.gg/MhZn5Nc39h)