Phoenix21 committed
Commit b78f396 · verified · 1 Parent(s): 8f71ece

Update app.py

Revised Helper Function

Direct model invocation:
The LiteLLMModel instance is likely callable, so instead of using a .complete() method, the helper now calls the model directly with the prompt (see the diff below).

Response handling:
The returned result may be a string or a dictionary, so the helper checks the type of result and extracts the text accordingly. If your specific model returns responses in a different format, adjust the helper function to handle it (a hedged sketch of one such adjustment follows the diff).

Consistency:
Update all parts of the code that previously used .complete() to go through the new call_llm function or a direct call to llm, as in the call-site sketch after this list.
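
A minimal before/after sketch of that call-site change; the names user_prompt and answer are illustrative and do not come from this commit:

# Before: call sites invoked the now-removed .complete() method.
answer = llm.complete(user_prompt)

# After: route the call through the helper, which always returns a string
# ("" on error), or call the model object directly.
answer = call_llm(user_prompt)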

Files changed (1): app.py (+9 -4)
app.py CHANGED
@@ -48,10 +48,15 @@ def call_llm(prompt: str) -> str:
     """
     Helper to call the LLM with a prompt, handling response extraction.
     """
-    result = llm.complete(prompt)
-    if isinstance(result, dict):
-        return result.get('text', '')
-    return result
+    try:
+        result = llm(prompt)  # Directly call the model with the prompt
+        # If the result is a dictionary and contains 'text', extract it.
+        if isinstance(result, dict) and 'text' in result:
+            return result['text']
+        return result if isinstance(result, str) else str(result)
+    except Exception as e:
+        logger.error(f"LLM call error: {e}")
+        return ""
 
 ###############################################################################
 # 3) CSV Loading and Processing
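
If your model returns responses in yet another shape, the helper can be extended in place, as the commit message suggests. A minimal sketch, assuming a hypothetical wrapper whose responses are message-like objects exposing the text on a .content attribute; that attribute name is an assumption for illustration, not something this commit relies on:

def call_llm(prompt: str) -> str:
    """
    Helper to call the LLM with a prompt, handling response extraction.
    """
    try:
        result = llm(prompt)  # Directly call the model with the prompt
        if isinstance(result, dict) and 'text' in result:
            return result['text']
        # Assumed extension: some wrappers return message-like objects that
        # carry the generated text on a `.content` attribute.
        content = getattr(result, 'content', None)
        if isinstance(content, str):
            return content
        return result if isinstance(result, str) else str(result)
    except Exception as e:
        logger.error(f"LLM call error: {e}")
        return ""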