nicolay-r committed on
Commit 1cec238 · verified · 1 Parent(s): 7a7575a

Update README.md

Files changed (1): README.md +44 -1
README.md CHANGED
@@ -29,7 +29,50 @@ pipeline_tag: text2text-generation

### Direct Use

Here are **two steps for a quick start with the model**:

1. Load the model and tokenizer:

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Set up the model path.
model_path = "nicolay-r/flan-t5-tsa-prompt-base"
# Set up the device.
device = "cuda:0"

# Load the weights in bfloat16 and move the model to the GPU.
model = T5ForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.to(device)
```
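
If a GPU is not available, a minimal fallback (an assumption of this sketch, not part of the original snippet) is to pick the device dynamically and keep float32 on CPU, where bfloat16 can be slow:

```python
# Hypothetical device fallback, not from the original README.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device.startswith("cuda") else torch.float32
model = T5ForConditionalGeneration.from_pretrained(model_path, torch_dtype=dtype).to(device)
```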

2. Set up an `ask` method for generating LLM responses:

```python
def ask(prompt):
    # Tokenize the prompt and move the input tensors to the model device.
    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
    inputs.to(device)
    # Greedy decoding by default; temperature only takes effect when sampling is enabled.
    output = model.generate(**inputs, temperature=1)
    return tokenizer.batch_decode(output, skip_special_tokens=True)[0]
```
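
To score several prompts at once, one possible batched variant (a sketch; the `ask_batch` helper is hypothetical and not part of this model card) pads the prompts into a single batch:

```python
# Hypothetical batched helper, not from the original README.
def ask_batch(prompts):
    # Pad the prompts to a common length so they form one batch.
    inputs = tokenizer(prompts, return_tensors="pt", padding=True,
                       add_special_tokens=False).to(device)
    output = model.generate(**inputs)
    return tokenizer.batch_decode(output, skip_special_tokens=True)
```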

Finally, you can infer the model results as follows:

```python
# Input sentence.
sentence = "I would support him"
# Input target.
target = "him"
# Output response.
flant5_response = ask(f"What's the attitude of the sentence '{sentence}', to the target '{target}'?")
print(f"Author opinion towards `{target}` in `{sentence}` is:\n{flant5_response}")
```

The response of the model is as follows:

> Author opinion towards `him` in `I would support him` is: **positive**
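
The same prompt template extends to several (sentence, target) pairs; the pairs below are illustrative examples, not outputs taken from the model card:

```python
# Illustrative pairs, not from the original README.
pairs = [
    ("I would support him", "him"),
    ("The decision disappointed the investors", "investors"),
]
for sentence, target in pairs:
    label = ask(f"What's the attitude of the sentence '{sentence}', to the target '{target}'?")
    print(f"{target}: {label}")
```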

### Downstream Use