arjunanand13 committed on
Commit b797353
1 Parent(s): 147a3c5

Update app.py

Files changed (1)
  1. app.py +18 -5
app.py CHANGED
@@ -93,14 +93,27 @@ class DocumentRetrievalAndGeneration:
 content += "-" * 50 + "\n"
 content += self.all_splits[idx].page_content + "\n"
 
-prompt = f"""
-<s>
+prompt = f"""<s>
+You are a knowledgeable assistant with access to a comprehensive database.
+I need you to answer my question and provide related information in a specific format.
+I have provided five relatable json files {content}, choose the most suitable chunks for answering the query
+Here's what I need:
+Include a final answer without additional comments, sign-offs, or extra phrases. Be direct and to the point.
+content
 Here's my question:
-Query: {query}
-Solution:
-RETURN ONLY SOLUTION. IF THERE IS NO ANSWER RELATABLE IN RETRIEVED CHUNKS, RETURN "NO SOLUTION AVAILABLE"
+Query:{query}
+Solution==>
+RETURN ONLY SOLUTION . IF THEIR IS NO ANSWER RELATABLE IN RETRIEVED CHUNKS , RETURN " NO SOLUTION AVAILABLE"
+Example1
+Query: "How to use IPU1_0 instead of A15_0 to process NDK in TDA2x-EVM",
+Solution: "To use IPU1_0 instead of A15_0 to process NDK in TDA2x-EVM, you need to modify the configuration file of the NDK application. Specifically, change the processor reference from 'A15_0' to 'IPU1_0'.",
+
+Example2
+Query: "Can BQ25896 support I2C interface?",
+Solution: "Yes, the BQ25896 charger supports the I2C interface for communication."
 </s>
 """
+
 messages = [{"role": "user", "content": prompt}]
 encodeds = self.llm.tokenizer.apply_chat_template(messages, return_tensors="pt")
 model_inputs = encodeds.to(self.llm.model.device)
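For reference, the prompt-construction flow this commit changes can be sketched as a standalone function. This is a minimal illustration, not the code from app.py: the name `build_messages` and the `chunks` argument are hypothetical stand-ins for `self.all_splits[idx].page_content` gathered over the retrieved indices, and the template below is an abbreviated version of the new prompt (few-shot examples elided, spelling normalized).

```python
def build_messages(chunks, query):
    # Join the retrieved chunks with a divider line, mirroring the
    # loop just above the changed hunk.
    content = ""
    for chunk in chunks:
        content += "-" * 50 + "\n"
        content += chunk + "\n"

    # Abbreviated version of the new prompt template: instructions,
    # the retrieved context, the query, and a fallback instruction.
    prompt = f"""<s>
You are a knowledgeable assistant with access to a comprehensive database.
I have provided five relatable json files {content}, choose the most suitable chunks for answering the query
Here's my question:
Query:{query}
Solution==>
RETURN ONLY SOLUTION. IF THERE IS NO ANSWER RELATABLE IN RETRIEVED CHUNKS, RETURN "NO SOLUTION AVAILABLE"
</s>
"""
    # The chat template expects a list of role/content messages.
    return [{"role": "user", "content": prompt}]

messages = build_messages(
    ["BQ25896 datasheet excerpt ..."],
    "Can BQ25896 support I2C interface?",
)
```

In app.py the resulting `messages` list is then tokenized with `tokenizer.apply_chat_template(messages, return_tensors="pt")` and moved to the model's device, as shown at the end of the diff.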