mwz committed on
Commit 3828835
1 Parent(s): ceeaeca

Update README.md

Files changed (1): README.md (+3, -12)

README.md CHANGED
@@ -41,7 +41,9 @@ def process_data_sample(messages):
         processed_example += f"<|"+role+"|>\n "+content+"\n"
 
     return processed_example
-
+```
+Inference can then be performed as usual with HF models as follows:
+```python
 messages = [
     {"role": "system", "content": "You are a Khaadi Social Media Post Generator who helps with user queries or generate him khaadi posts give only three hashtags and be concise as possible dont try to make up."},
     {"role": "user", "content": "Generate post on new arrival of winter"},
@@ -54,17 +56,6 @@ outputs = model.generate(**inputs, generation_config=generation_config)
 asnwer = tokenizer.decode(outputs[0], skip_special_tokens=True)
 print(asnwer)
 ```
-Inference can then be performed as usual with HF models as follows:
-```python
-prompt = "پیٹرول کی قیمت میں 2روپے 50 پیسے اضافہ"
-formatted_prompt = (
-    f"اس دی گی ایک خبر سے متعلق ایک مضمون لکھیں"
-    f"### Human: {prompt} ### Assistant:"
-)
-inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
-outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))
-```
 Expected output similar to the following:
 ```
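The README's `process_data_sample` helper builds the chat prompt by wrapping each message in `<|role|>` tags. A minimal self-contained sketch of that formatting step, assuming a loop over the `messages` list reconstructed from the single template line visible in the diff (the loop structure and variable names beyond that line are assumptions, not the repository's exact code):

```python
def process_data_sample(messages):
    """Format a list of chat messages into a <|role|>-tagged prompt string."""
    processed_example = ""
    for message in messages:  # assumed loop; the diff only shows the body line
        role = message["role"]
        content = message["content"]
        # Template line as shown in the README diff: <|role|> header, then
        # the message content indented by one space, each on its own line.
        processed_example += f"<|{role}|>\n {content}\n"
    return processed_example

messages = [
    {"role": "system", "content": "You are a social media post generator."},
    {"role": "user", "content": "Generate post on new arrival of winter"},
]
print(process_data_sample(messages))
```

The resulting string would then be tokenized and passed to `model.generate(...)` as in the README's snippet.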