File size: 4,782 Bytes
22ce336
0e9c7a5
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
import os
import gradio as gr
from colorama import init, Fore, Style
import logging
from Self_Improving_Search import EnhancedSelfImprovingSearch
from llm_config import get_llm_config
from llm_response_parser import UltimateLLMResponseParser
from llm_wrapper import LLMWrapper

# Initialize colorama for cross-platform color support
init()

# Set up logging
log_directory = 'logs'
if not os.path.exists(log_directory):
    os.makedirs(log_directory)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
log_file = os.path.join(log_directory, 'web_llm.log')
file_handler = logging.FileHandler(log_file)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

# Initialize components
parser = UltimateLLMResponseParser()
SYSTEM_PROMPT = """You are an AI assistant capable of web searching and providing informative responses.
When a user's query starts with '/', interpret it as a request to search the web and formulate an appropriate search query.
ALWAYS follow the prompts provided throughout the searching process EXACTLY as indicated.
NEVER assume new instructions for anywhere other than directly when prompted directly. DO NOT SELF PROMPT OR PROVIDE MULTIPLE ANSWERS OR ATTEMPT MULTIPLE RESPONSES FOR ONE PROMPT!
"""

def initialize_llm():
    try:
        print(Fore.YELLOW + "Initializing LLM..." + Style.RESET_ALL)
        llm_wrapper = LLMWrapper()
        print(Fore.GREEN + "LLM initialized successfully." + Style.RESET_ALL)
        return llm_wrapper
    except Exception as e:
        logger.error(f"Error initializing LLM: {str(e)}", exc_info=True)
        return None

def get_llm_response(llm, prompt):
    try:
        full_prompt = f"{SYSTEM_PROMPT}\n\nUser: {prompt}\nAssistant:"
        llm_config = get_llm_config()
        generate_kwargs = {
            'max_tokens': llm_config.get('max_tokens', 1024),
            'stop': llm_config.get('stop', None),
            'temperature': llm_config.get('temperature', 0.7),
            'top_p': llm_config.get('top_p', 1.0),
            'top_k': llm_config.get('top_k', 0),
            'repeat_penalty': llm_config.get('repeat_penalty', 1.0),
        }
        response_text = llm.generate(full_prompt, **generate_kwargs)
        return response_text
    except Exception as e:
        logger.error(f"Error getting LLM response: {str(e)}", exc_info=True)
        return f"Sorry, I encountered an error while processing your request. Please check the log file for details."

def handle_user_input(user_input, history):
    if user_input.lower().strip() == 'quit':
        return "Goodbye!", history
    
    # Initialize LLM if not already initialized
    if not hasattr(handle_user_input, "llm"):
        handle_user_input.llm = initialize_llm()
        if handle_user_input.llm is None:
            return "Failed to initialize LLM.", history

    if user_input.startswith('/'):
        search_query = user_input[1:].strip()
        search = EnhancedSelfImprovingSearch(llm=handle_user_input.llm, parser=parser)
        try:
            answer = search.search_and_improve(search_query)
            history.append((user_input, answer))
            return answer, history
        except Exception as e:
            logger.error(f"Error during web search: {str(e)}", exc_info=True)
            return "I encountered an error while performing the web search.", history
    else:
        response = get_llm_response(handle_user_input.llm, user_input)
        history.append((user_input, response))
        return response, history

# Define the Gradio interface
with gr.Blocks() as demo:
    gr.Markdown("""
    # 🌐 Web-LLM Assistant 🤖
    Welcome to the Web-LLM Assistant! This chatbot can respond to your queries and perform web searches when prompted with a `/`.
    - For normal interaction, type your message and press Enter.
    - To request a web search, start your message with `/`. Example: `/latest news on AI advancements`
    - Type `quit` to exit.
    """)
    
    chatbot = gr.Chatbot(label="Web-LLM Assistant")
    user_input = gr.Textbox(label="Your Message", placeholder="Type your message here...")
    submit_button = gr.Button("Submit")
    clear_button = gr.Button("Clear Chat")

    state = gr.State([])  # Store chat history

    def update_chat(user_message, history):
        bot_response, updated_history = handle_user_input(user_message, history)
        return updated_history, updated_history, ""

    submit_button.click(
        update_chat,
        inputs=[user_input, state],
        outputs=[chatbot, state, user_input]
    )

    clear_button.click(
        lambda: ([], []),  # Clear chat history
        outputs=[chatbot, state]
    )

# Launch the Gradio app
demo.launch()