Pijush2023 committed
Commit ac24881 · verified · 1 Parent(s): dd4345a

Update app.py

Files changed (1)
  1. app.py +1642 -64
app.py CHANGED
@@ -1,3 +1,1601 @@
  from gradio_client import Client
  import gradio as gr
  import requests
@@ -102,15 +1700,7 @@ def initialize_phi_model():
  torch_dtype="auto",
  trust_remote_code=True,
  )
- # model = AutoModelForCausalLM.from_pretrained(
- # "microsoft/Phi-3.5-MoE-instruct",
- # device_map="cuda",
- # torch_dtype="auto",
- # trust_remote_code=True,
- # )
  tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
- # tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-MoE-instruct")
-
  return pipeline("text-generation", model=model, tokenizer=tokenizer)

  def initialize_gpt_model():
@@ -133,14 +1723,13 @@ gpt4o_mini_model = initialize_gpt4o_mini_model()


  # Existing embeddings and vector store for GPT-4o
- # gpt_embeddings = OpenAIEmbeddings(api_key=os.environ['OPENAI_API_KEY'])
- gpt_embeddings = embeddings
- gpt_vectorstore = PineconeVectorStore(index_name="italyv109102024", embedding=gpt_embeddings)
  gpt_retriever = gpt_vectorstore.as_retriever(search_kwargs={'k': 5})

  # New vector store setup for Phi-3.5
  phi_embeddings = embeddings
- phi_vectorstore = PineconeVectorStore(index_name="italyv109102024", embedding=phi_embeddings)
  phi_retriever = phi_vectorstore.as_retriever(search_kwargs={'k': 5})

@@ -152,7 +1741,6 @@ from pinecone import Pinecone
  pc = Pinecone(api_key=os.environ['PINECONE_API_KEY'])

  index_name = "italyv109102024"
-
  vectorstore = PineconeVectorStore(index_name=index_name, embedding=embeddings)
  retriever = vectorstore.as_retriever(search_kwargs={'k': 5})

@@ -199,13 +1787,9 @@ It was my pleasure!
  Question: {{question}}
  Helpful Answer:"""

- template2 =f"""Sei un esperto di madrelingua italiana. Il tuo compito è fornire risposte precise, accurate, concise, nitide e brevi basate sul documento fornito. Dovresti restituire le informazioni nel seguente formato:
-
- - Nome del documento: (il nome del documento)
- - Numero di pagina: (numero di pagina)
- - Contenuto effettivo: (contenuto pertinente del documento)
-
- Se non riesci a trovare la risposta nel documento, rispondi semplicemente con "Questa domanda va oltre la mia conoscenza".
  {{context}}
  Question: {{question}}
  Helpful Answer:"""
@@ -367,41 +1951,19 @@ QA_CHAIN_PROMPT_2 = PromptTemplate(input_variables=["context", "question"], temp
  # Sure! Here's the information:
  # """

- # phi_custom_template = """
- # <|system|>
- # Sei un esperto di madrelingua italiana. Il tuo compito è fornire risposte precise, accurate, concise, nitide e brevi basate sul documento fornito. Dovresti restituire le informazioni nel seguente formato:
-
- # - Nome del documento: (il nome del documento)
- # - Numero di pagina: (numero di pagina)
- # - Contenuto effettivo: (contenuto pertinente del documento)
-
- # Se non riesci a trovare la risposta nel documento, rispondi semplicemente con "Questa domanda va oltre la mia conoscenza".
- # <|end|>
-
- # <|user|>
- # {context}
- # Question: {question}<|end|>
-
- # <|assistant|>
- # Sure! The Responses are as follows:
- # """
-
- # Modify the Phi-3.5 template to include the selected file
  phi_custom_template = """
  <|system|>
- Sei un esperto di madrelingua italiana. Il tuo compito è fornire risposte precise, accurate, concise, nitide e brevi basate sul documento fornito. Dovresti restituire le informazioni nel seguente formato:
-
- - Nome del documento: {document_name}
  - Numero di pagina: (numero di pagina)
- - Contenuto effettivo: (contenuto pertinente del documento)
-
- Se non riesci a trovare la risposta nel documento, rispondi semplicemente con "Questa domanda va oltre la mia conoscenza".
  <|end|>
-
  <|user|>
  {context}
  Question: {question}<|end|>
-
  <|assistant|>
  Sure! The Responses are as follows:
  """
@@ -674,6 +2236,8 @@ def generate_answer(message, choice, retrieval_mode, selected_model):
  logging.debug(f"LM-3 response: {response}")
  return response['result'], extract_addresses(response['result'])


  elif selected_model == phi_pipe:
  logging.debug("Selected model: LM-2")
  retriever = phi_retriever
@@ -688,8 +2252,8 @@ def generate_answer(message, choice, retrieval_mode, selected_model):
  response = selected_model(prompt, **{
  "max_new_tokens": 250,
  "return_full_text": True,
- "temperature": 0.0,
- "do_sample": False,
  })

  if response:
@@ -715,6 +2279,34 @@ def generate_answer(message, choice, retrieval_mode, selected_model):
  logging.error(traceback.format_exc())
  return "Sorry, I encountered an error while processing your request.", []



@@ -1470,17 +3062,6 @@ def fetch_google_flights(departure_id="JFK", arrival_id="BHM", outbound_date=cur
  # # Function to insert the prompt into the textbox when clicked
  # def insert_prompt(current_text, prompt):
  # return prompt[0] if prompt else current_text
- # List of files for dropdown
-
- files_list = [
- "176769_000219051794_V2.pdf",
- "177115_000230633979_V2.pdf",
- "177273_000219963851_V2.pdf",
- "177429_000219726050_V2.pdf",
- "177495_000219962533_V2.pdf",
- "178041_000197000852_V2.pdf"
- ]
-



@@ -1494,9 +3075,6 @@ with gr.Blocks(theme='gradio/soft') as demo:
  with gr.Column():
  state = gr.State()

- # File dropdown
- # file_dropdown = gr.Dropdown(label="Select Document", choices=files_list, value=None)
-
  chatbot = gr.Chatbot([], elem_id="RADAR:Channel 94.1", bubble_full_width=False)
  choice = gr.Radio(label="Select Style", choices=["Details", "Conversational"], value="Conversational",interactive=False,visible=False)
  retrieval_mode = gr.Radio(label="Retrieval Mode", choices=["VDB", "KGF"], value="VDB",interactive=False,visible=False)
 
1
+ # from gradio_client import Client
2
+ # import gradio as gr
3
+ # import requests
4
+ # import os
5
+ # import time
6
+ # import re
7
+ # import logging
8
+ # import tempfile
9
+ # import folium
10
+ # import concurrent.futures
11
+ # import torch
12
+ # from PIL import Image
13
+ # from datetime import datetime
14
+ # from transformers import pipeline, AutoModelForSpeechSeq2Seq, AutoProcessor
15
+ # from googlemaps import Client as GoogleMapsClient
16
+ # from gtts import gTTS
17
+ # from diffusers import StableDiffusionPipeline
18
+ # from langchain_openai import OpenAIEmbeddings, ChatOpenAI
19
+ # from langchain_pinecone import PineconeVectorStore
20
+ # from langchain.prompts import PromptTemplate
21
+ # from langchain.chains import RetrievalQA
22
+ # from langchain.chains.conversation.memory import ConversationBufferWindowMemory
23
+ # from huggingface_hub import login
24
+ # from transformers.models.speecht5.number_normalizer import EnglishNumberNormalizer
25
+ # from parler_tts import ParlerTTSForConditionalGeneration
26
+ # from transformers import AutoTokenizer, AutoFeatureExtractor, set_seed
27
+ # from scipy.io.wavfile import write as write_wav
28
+ # from pydub import AudioSegment
29
+ # from string import punctuation
30
+ # import librosa
31
+ # from pathlib import Path
32
+ # import torchaudio
33
+ # import numpy as np
34
+ # from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
35
+ # from langchain_huggingface import HuggingFaceEmbeddings
36
+
37
+
38
+ # # Neo4j imports
39
+ # from langchain.chains import GraphCypherQAChain
40
+ # from langchain_community.graphs import Neo4jGraph
41
+ # from langchain_community.document_loaders import HuggingFaceDatasetLoader
42
+ # from langchain_text_splitters import CharacterTextSplitter
43
+ # from langchain_experimental.graph_transformers import LLMGraphTransformer
44
+ # from langchain_core.prompts import ChatPromptTemplate
45
+ # from langchain_core.pydantic_v1 import BaseModel, Field
46
+ # from langchain_core.messages import AIMessage, HumanMessage
47
+ # from langchain_core.output_parsers import StrOutputParser
48
+ # from langchain_core.runnables import RunnableBranch, RunnableLambda, RunnableParallel, RunnablePassthrough
49
+ # from serpapi.google_search import GoogleSearch
50
+
51
+ # #Parler TTS v1 Modules
52
+
53
+ # import os
54
+ # import re
55
+ # import tempfile
56
+ # import soundfile as sf
57
+ # from string import punctuation
58
+ # from pydub import AudioSegment
59
+ # from transformers import AutoTokenizer, AutoFeatureExtractor
60
+
61
+
62
+
63
+ # #API AutoDate Fix Up
64
+ # def get_current_date1():
65
+ # return datetime.now().strftime("%Y-%m-%d")
66
+
67
+ # # Usage
68
+ # current_date1 = get_current_date1()
69
+
70
+
71
+
72
+ # # Set environment variables for CUDA
73
+ # os.environ['PYTORCH_USE_CUDA_DSA'] = '1'
74
+ # os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
75
+ # os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'expandable_segments:True'
76
+
77
+
78
+ # hf_token = os.getenv("HF_TOKEN")
79
+ # if hf_token is None:
80
+ # print("Please set your Hugging Face token in the environment variables.")
81
+ # else:
82
+ # login(token=hf_token)
83
+
84
+ # logging.basicConfig(level=logging.DEBUG)
85
+
86
+
87
+ # from langchain.embeddings.huggingface import HuggingFaceEmbeddings
88
+
89
+ # # Initialize HuggingFaceEmbeddings properly
90
+ # embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
91
+
92
+
93
+
94
+
95
+ # #Initialization
96
+
97
+ # # Initialize the models
98
+ # def initialize_phi_model():
99
+ # model = AutoModelForCausalLM.from_pretrained(
100
+ # "microsoft/Phi-3.5-mini-instruct",
101
+ # device_map="cuda",
102
+ # torch_dtype="auto",
103
+ # trust_remote_code=True,
104
+ # )
105
+ # # model = AutoModelForCausalLM.from_pretrained(
106
+ # # "microsoft/Phi-3.5-MoE-instruct",
107
+ # # device_map="cuda",
108
+ # # torch_dtype="auto",
109
+ # # trust_remote_code=True,
110
+ # # )
111
+ # tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
112
+ # # tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-MoE-instruct")
113
+
114
+ # return pipeline("text-generation", model=model, tokenizer=tokenizer)
115
+
116
+ # def initialize_gpt_model():
117
+ # return ChatOpenAI(api_key=os.environ['OPENAI_API_KEY'], temperature=0, model='gpt-4o')
118
+
119
+ # def initialize_gpt4o_mini_model():
120
+ # return ChatOpenAI(api_key=os.environ['OPENAI_API_KEY'], temperature=0, model='gpt-4o-mini')
121
+
122
+
123
+
124
+
125
+
126
+
127
+ # # Initialize all models
128
+ # phi_pipe = initialize_phi_model()
129
+ # gpt_model = initialize_gpt_model()
130
+ # gpt4o_mini_model = initialize_gpt4o_mini_model()
131
+
132
+
133
+
134
+
135
+ # # Existing embeddings and vector store for GPT-4o
136
+ # # gpt_embeddings = OpenAIEmbeddings(api_key=os.environ['OPENAI_API_KEY'])
137
+ # gpt_embeddings = embeddings
138
+ # gpt_vectorstore = PineconeVectorStore(index_name="italyv109102024", embedding=gpt_embeddings)
139
+ # gpt_retriever = gpt_vectorstore.as_retriever(search_kwargs={'k': 5})
140
+
141
+ # # New vector store setup for Phi-3.5
142
+ # phi_embeddings = embeddings
143
+ # phi_vectorstore = PineconeVectorStore(index_name="italyv109102024", embedding=phi_embeddings)
144
+ # phi_retriever = phi_vectorstore.as_retriever(search_kwargs={'k': 5})
145
+
146
+
147
+
148
+
149
+
150
+ # # Pinecone setup
151
+ # from pinecone import Pinecone
152
+ # pc = Pinecone(api_key=os.environ['PINECONE_API_KEY'])
153
+
154
+ # index_name = "italyv109102024"
155
+
156
+ # vectorstore = PineconeVectorStore(index_name=index_name, embedding=embeddings)
157
+ # retriever = vectorstore.as_retriever(search_kwargs={'k': 5})
158
+
159
+ # chat_model = ChatOpenAI(api_key=os.environ['OPENAI_API_KEY'], temperature=0, model='gpt-4o')
160
+ # chat_model1 = ChatOpenAI(api_key=os.environ['OPENAI_API_KEY'], temperature=0, model='gpt-4o-mini')
161
+ # conversational_memory = ConversationBufferWindowMemory(
162
+ # memory_key='chat_history',
163
+ # k=10,
164
+ # return_messages=True
165
+ # )
166
+
167
+ # # Prompt templates
168
+ # def get_current_date():
169
+ # return datetime.now().strftime("%B %d, %Y")
170
+
171
+ # current_date = get_current_date()
172
+
173
+ # template1 = f"""As an expert concierge in Birmingham, Alabama, known for being a helpful and renowned guide, I am here to assist you on this sunny bright day of {current_date}. Given the current weather conditions and date, I have access to a plethora of information regarding events, places,sports and activities in Birmingham that can enhance your experience.
174
+ # If you have any questions or need recommendations, feel free to ask. I have a wealth of knowledge of perennial events in Birmingham and can provide detailed information to ensure you make the most of your time here. Remember, I am here to assist you in any way possible.
175
+ # Now, let me guide you through some of the exciting events happening today in Birmingham, Alabama:
176
+ # Address: >>, Birmingham, AL
177
+ # Time: >>__
178
+ # Date: >>__
179
+ # Description: >>__
180
+ # Address: >>, Birmingham, AL
181
+ # Time: >>__
182
+ # Date: >>__
183
+ # Description: >>__
184
+ # Address: >>, Birmingham, AL
185
+ # Time: >>__
186
+ # Date: >>__
187
+ # Description: >>__
188
+ # Address: >>, Birmingham, AL
189
+ # Time: >>__
190
+ # Date: >>__
191
+ # Description: >>__
192
+ # Address: >>, Birmingham, AL
193
+ # Time: >>__
194
+ # Date: >>__
195
+ # Description: >>__
196
+ # If you have any specific preferences or questions about these events or any other inquiries, please feel free to ask. Remember, I am here to ensure you have a memorable and enjoyable experience in Birmingham, AL.
197
+ # It was my pleasure!
198
+ # {{context}}
199
+ # Question: {{question}}
200
+ # Helpful Answer:"""
201
+
202
+ # template2 =f"""Sei un esperto di madrelingua italiana. Il tuo compito è fornire risposte precise, accurate, concise, nitide e brevi basate sul documento fornito. Dovresti restituire le informazioni nel seguente formato:
203
+
204
+ # - Nome del documento: (il nome del documento)
205
+ # - Numero di pagina: (numero di pagina)
206
+ # - Contenuto effettivo: (contenuto pertinente del documento)
207
+
208
+ # Se non riesci a trovare la risposta nel documento, rispondi semplicemente con "Questa domanda va oltre la mia conoscenza".
209
+ # {{context}}
210
+ # Question: {{question}}
211
+ # Helpful Answer:"""
212
+
213
+
214
+ # QA_CHAIN_PROMPT_1 = PromptTemplate(input_variables=["context", "question"], template=template1)
215
+ # QA_CHAIN_PROMPT_2 = PromptTemplate(input_variables=["context", "question"], template=template2)
216
+
217
+ # # Neo4j setup
218
+ # # graph = Neo4jGraph(url="neo4j+s://6457770f.databases.neo4j.io",
219
+ # # username="neo4j",
220
+ # # password="Z10duoPkKCtENuOukw3eIlvl0xJWKtrVSr-_hGX1LQ4"
221
+ # # )
222
+ # # Avoid pushing the graph documents to Neo4j every time
223
+ # # Only push the documents once and comment the code below after the initial push
224
+ # # dataset_name = "Pijush2023/birmindata07312024"
225
+ # # page_content_column = 'events_description'
226
+ # # loader = HuggingFaceDatasetLoader(dataset_name, page_content_column)
227
+ # # data = loader.load()
228
+
229
+ # # text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=50)
230
+ # # documents = text_splitter.split_documents(data)
231
+
232
+ # # llm_transformer = LLMGraphTransformer(llm=chat_model)
233
+ # # graph_documents = llm_transformer.convert_to_graph_documents(documents)
234
+ # # graph.add_graph_documents(graph_documents, baseEntityLabel=True, include_source=True)
235
+
236
+
237
+ # #-------------------------------Comment Out------------------------------------------------------------------------------------------------------------------------
238
+
239
+ # # class Entities(BaseModel):
240
+ # # names: list[str] = Field(..., description="All the person, organization, or business entities that appear in the text")
241
+
242
+ # # entity_prompt = ChatPromptTemplate.from_messages([
243
+ # # ("system", "You are extracting organization and person entities from the text."),
244
+ # # ("human", "Use the given format to extract information from the following input: {question}"),
245
+ # # ])
246
+
247
+ # # entity_chain = entity_prompt | chat_model.with_structured_output(Entities)
248
+
249
+ # # def remove_lucene_chars(input: str) -> str:
250
+ # # return input.translate(str.maketrans({"\\": r"\\", "+": r"\+", "-": r"\-", "&": r"\&", "|": r"\|", "!": r"\!",
251
+ # # "(": r"\(", ")": r"\)", "{": r"\{", "}": r"\}", "[": r"\[", "]": r"\]",
252
+ # # "^": r"\^", "~": r"\~", "*": r"\*", "?": r"\?", ":": r"\:", '"': r'\"',
253
+ # # ";": r"\;", " ": r"\ "}))
254
+
255
+ # # def generate_full_text_query(input: str) -> str:
256
+ # # full_text_query = ""
257
+ # # words = [el for el in remove_lucene_chars(input).split() if el]
258
+ # # for word in words[:-1]:
259
+ # # full_text_query += f" {word}~2 AND"
260
+ # # full_text_query += f" {words[-1]}~2"
261
+ # # return full_text_query.strip()
262
+
263
+ # # def structured_retriever(question: str) -> str:
264
+ # # result = ""
265
+ # # entities = entity_chain.invoke({"question": question})
266
+ # # for entity in entities.names:
267
+ # # response = graph.query(
268
+ # # """CALL db.index.fulltext.queryNodes('entity', $query, {limit:2})
269
+ # # YIELD node,score
270
+ # # CALL {
271
+ # # WITH node
272
+ # # MATCH (node)-[r:!MENTIONS]->(neighbor)
273
+ # # RETURN node.id + ' - ' + type(r) + ' -> ' + neighbor.id AS output
274
+ # # UNION ALL
275
+ # # WITH node
276
+ # # MATCH (node)<-[r:!MENTIONS]-(neighbor)
277
+ # # RETURN neighbor.id + ' - ' + type(r) + ' -> ' + node.id AS output
278
+ # # }
279
+ # # RETURN output LIMIT 50
280
+ # # """,
281
+ # # {"query": generate_full_text_query(entity)},
282
+ # # )
283
+ # # result += "\n".join([el['output'] for el in response])
284
+ # # return result
285
+
286
+ # # def retriever_neo4j(question: str):
287
+ # # structured_data = structured_retriever(question)
288
+ # # logging.debug(f"Structured data: {structured_data}")
289
+ # # return structured_data
290
+
291
+ # # _template = """Given the following conversation and a follow-up question, rephrase the follow-up question to be a standalone question,
292
+ # # in its original language.
293
+ # # Chat History:
294
+ # # {chat_history}
295
+ # # Follow Up Input: {question}
296
+ # # Standalone question:"""
297
+
298
+ # # CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)
299
+
300
+ # # def _format_chat_history(chat_history: list[tuple[str, str]]) -> list:
301
+ # # buffer = []
302
+ # # for human, ai in chat_history:
303
+ # # buffer.append(HumanMessage(content=human))
304
+ # # buffer.append(AIMessage(content=ai))
305
+ # # return buffer
306
+
307
+ # # _search_query = RunnableBranch(
308
+ # # (
309
+ # # RunnableLambda(lambda x: bool(x.get("chat_history"))).with_config(
310
+ # # run_name="HasChatHistoryCheck"
311
+ # # ),
312
+ # # RunnablePassthrough.assign(
313
+ # # chat_history=lambda x: _format_chat_history(x["chat_history"])
314
+ # # )
315
+ # # | CONDENSE_QUESTION_PROMPT
316
+ # # | ChatOpenAI(temperature=0, api_key=os.environ['OPENAI_API_KEY'])
317
+ # # | StrOutputParser(),
318
+ # # ),
319
+ # # RunnableLambda(lambda x : x["question"]),
320
+ # # )
321
+
322
+ # # # template = """Answer the question based only on the following context:
323
+ # # # {context}
324
+ # # # Question: {question}
325
+ # # # Use natural language and be concise.
326
+ # # # Answer:"""
327
+
328
+ # # template = f"""As an expert concierge known for being helpful and a renowned guide for Birmingham, Alabama, I assist visitors in discovering the best that the city has to offer.I also assist the visitors about various sports and activities. Given today's sunny and bright weather on {current_date}, I am well-equipped to provide valuable insights and recommendations without revealing specific locations. I draw upon my extensive knowledge of the area, including perennial events and historical context.
329
+ # # In light of this, how can I assist you today? Feel free to ask any questions or seek recommendations for your day in Birmingham. If there's anything specific you'd like to know or experience, please share, and I'll be glad to help. Remember, keep the question concise for a quick,short ,crisp and accurate response.
330
+ # # "It was my pleasure!"
331
+ # # {{context}}
332
+ # # Question: {{question}}
333
+ # # Helpful Answer:"""
334
+
335
+ # # qa_prompt = ChatPromptTemplate.from_template(template)
336
+
337
+ # # chain_neo4j = (
338
+ # # RunnableParallel(
339
+ # # {
340
+ # # "context": _search_query | retriever_neo4j,
341
+ # # "question": RunnablePassthrough(),
342
+ # # }
343
+ # # )
344
+ # # | qa_prompt
345
+ # # | chat_model
346
+ # # | StrOutputParser()
347
+ # # )
348
+
349
+
350
+
351
+
352
+
353
+
354
+
355
+ # # phi_custom_template = """
356
+ # # <|system|>
357
+ # # Sei un esperto della lingua italiana e un madrelingua italiano. Il tuo compito è fornire risposte concise, dirette e brevi basate sul documento fornito. Non dovresti dare risposte personali o interpretative.
358
+ # # Fornisci dettagli sul documento che sto per condividere, come il nome del documento, il numero di pagina e altre informazioni specifiche in modo molto breve e diretto. Se non riesci a trovare la risposta, rispondi semplicemente con "Questa domanda è al di là delle mie conoscenze".
359
+ # # Ecco i dettagli del documento da considerare:
360
+ # # - Nome del documento:
361
+ # # - Pagina:
362
+ # # - Altre informazioni richieste:.<|end|>
363
+ # # <|user|>
364
+ # # {context}
365
+ # # Question: {question}<|end|>
366
+ # # <|assistant|>
367
+ # # Sure! Here's the information:
368
+ # # """
369
+
370
+ # # phi_custom_template = """
371
+ # # <|system|>
372
+ # # Sei un esperto di madrelingua italiana. Il tuo compito è fornire risposte precise, accurate, concise, nitide e brevi basate sul documento fornito. Dovresti restituire le informazioni nel seguente formato:
373
+
374
+ # # - Nome del documento: (il nome del documento)
375
+ # # - Numero di pagina: (numero di pagina)
376
+ # # - Contenuto effettivo: (contenuto pertinente del documento)
377
+
378
+ # # Se non riesci a trovare la risposta nel documento, rispondi semplicemente con "Questa domanda va oltre la mia conoscenza".
379
+ # # <|end|>
380
+
381
+ # # <|user|>
382
+ # # {context}
383
+ # # Question: {question}<|end|>
384
+
385
+ # # <|assistant|>
386
+ # # Sure! The Responses are as follows:
387
+ # # """
388
+
389
+ # # Modify the Phi-3.5 template to include the selected file
390
+ # phi_custom_template = """
391
+ # <|system|>
392
+ # Sei un esperto di madrelingua italiana. Il tuo compito è fornire risposte precise, accurate, concise, nitide e brevi basate sul documento fornito. Dovresti restituire le informazioni nel seguente formato:
393
+
394
+ # - Nome del documento: {document_name}
395
+ # - Numero di pagina: (numero di pagina)
396
+ # - Contenuto effettivo: (contenuto pertinente del documento)
397
+
398
+ # Se non riesci a trovare la risposta nel documento, rispondi semplicemente con "Questa domanda va oltre la mia conoscenza".
399
+ # <|end|>
400
+
401
+ # <|user|>
402
+ # {context}
403
+ # Question: {question}<|end|>
404
+
405
+ # <|assistant|>
406
+ # Sure! The Responses are as follows:
407
+ # """
408
+
409
+
410
+ # def generate_bot_response(history, choice, retrieval_mode, model_choice):
411
+ # if not history:
412
+ # return
413
+
414
+ # # Select the model
415
+ # # selected_model = chat_model if model_choice == "LM-1" else phi_pipe
416
+ # selected_model = chat_model if model_choice == "LM-1" else (chat_model1 if model_choice == "LM-3" else phi_pipe)
417
+
418
+
419
+ # response, addresses = generate_answer(history[-1][0], choice, retrieval_mode, selected_model)
420
+ # history[-1][1] = ""
421
+
422
+ # for character in response:
423
+ # history[-1][1] += character
424
+ # yield history # Stream each character as it is generated
425
+ # time.sleep(0.05) # Add a slight delay to simulate streaming
426
+
427
+ # yield history # Final yield with the complete response
428
+
429
+
430
+ # def generate_tts_response(history, tts_choice):
431
+ # # Get the most recent bot response from the chat history
432
+ # if history and len(history) > 0:
433
+ # recent_response = history[-1][1] # The second item in the tuple is the bot response text
434
+ # else:
435
+ # recent_response = ""
436
+
437
+ # # Call the TTS function for the recent response
438
+ # with concurrent.futures.ThreadPoolExecutor() as executor:
439
+ # if tts_choice == "Alpha":
440
+ # audio_future = executor.submit(generate_audio_elevenlabs, recent_response)
441
+ # elif tts_choice == "Beta":
442
+ # audio_future = executor.submit(generate_audio_parler_tts, recent_response)
443
+
444
+ # audio_path = audio_future.result()
445
+ # return audio_path
446
+
447
+
448
+
449
+
450
+
451
+
452
+
453
+ # import concurrent.futures
454
+ # # Existing bot function with concurrent futures for parallel processing
455
+ # def bot(history, choice, tts_choice, retrieval_mode, model_choice):
456
+ # # Initialize an empty response
457
+ # response = ""
458
+
459
+ # # Create a thread pool to handle both text generation and TTS conversion in parallel
460
+ # with concurrent.futures.ThreadPoolExecutor() as executor:
461
+ # # Start the bot response generation in parallel
462
+ # bot_future = executor.submit(generate_bot_response, history, choice, retrieval_mode, model_choice)
463
+
464
+ # # Wait for the text generation to start
465
+ # for history_chunk in bot_future.result():
466
+ # response = history_chunk[-1][1] # Update the response with the current state
467
+ # yield history_chunk, None # Stream the text output as it's generated
468
+
469
+ # # Once text is fully generated, start the TTS conversion
470
+ # tts_future = executor.submit(generate_tts_response, response, tts_choice)
471
+
472
+ # # Get the audio output after TTS is done
473
+ # audio_path = tts_future.result()
474
+
475
+ # # Stream the final text and audio output
476
+ # yield history, audio_path
477
+
478
+
479
+
480
+
481
+
482
+
483
+ # import re
484
+
485
+ # # def clean_response(response_text):
486
+ # # # Remove system and user tags
487
+ # # response_text = re.sub(r'<\|system\|>.*?<\|end\|>', '', response_text, flags=re.DOTALL)
488
+ # # response_text = re.sub(r'<\|user\|>.*?<\|end\|>', '', response_text, flags=re.DOTALL)
489
+ # # response_text = re.sub(r'<\|assistant\|>', '', response_text, flags=re.DOTALL)
490
+
491
+ # # # Extract the document name and page number
492
+ # # document_match = re.search(r"Document\(metadata=\{'source':'(.+?)','page':(\d+)\}", response_text)
493
+ # # if document_match:
494
+ # # document_name = document_match.group(1).split('/')[-1] # Get the document name
495
+ # # page_number = document_match.group(2) # Get the page number
496
+ # # else:
497
+ # # document_name = "Unknown"
498
+ # # page_number = "Unknown"
499
+
500
+ # # # Remove the 'Document(metadata=...' part and keep only the page content
501
+ # # response_text = re.sub(r'Document\(metadata=\{.*?\},page_content=', '', response_text, flags=re.DOTALL)
502
+
503
+ # # # Remove any unwanted escape characters like \u and \u00
504
+ # # response_text = re.sub(r'\\u[0-9A-Fa-f]{4}', '', response_text)
505
+
506
+ # # # Ensure proper spacing between words and dates
507
+ # # response_text = re.sub(r'([a-zA-Z])(\d)', r'\1 \2', response_text)
508
+ # # response_text = re.sub(r'(\d)([a-zA-Z])', r'\1 \2', response_text)
509
+
510
+ # # # Remove the phrase "Sure! The Responses are as follows:" from the actual content
511
+ # # response_text = re.sub(r'Sure! The Responses are as follows:', '', response_text).strip()
512
+
513
+ # # # Clean up the text by removing extra whitespace
514
+ # # cleaned_response = re.sub(r'\s+', ' ', response_text).strip()
515
+
516
+ # # # Format the final response with bullet points
517
+ # # final_response = f"""
518
+ # # Sure! The Responses are as follows:
519
+ # # • Document name - {document_name}
520
+ # # • Page No - {page_number}
521
+ # # • Response - {cleaned_response}
522
+ # # """
523
+
524
+ # # return final_response
525
+
526
+ # def clean_response(response_text):
527
+ # # Remove system and user tags
528
+ # response_text = re.sub(r'<\|system\|>.*?<\|end\|>', '', response_text, flags=re.DOTALL)
529
+ # response_text = re.sub(r'<\|user\|>.*?<\|end\|>', '', response_text, flags=re.DOTALL)
530
+ # response_text = re.sub(r'<\|assistant\|>', '', response_text, flags=re.DOTALL)
531
+
532
+ # # Extract the document name and page number
533
+ # document_match = re.search(r"Document\(metadata=\{'source':'(.+?)','page':(\d+)\}", response_text)
534
+ # if document_match:
535
+ # document_name = document_match.group(1).split('/')[-1] # Get the document name
536
+ # page_number = document_match.group(2) # Get the page number
537
+ # else:
538
+ # document_name = "Unknown"
539
+ # page_number = "Unknown"
540
+
541
+ # # Remove the entire 'Document(metadata=...' and any mention of it from the response
542
+ # response_text = re.sub(r'Document\(metadata=\{.*?\},page_content=', '', response_text, flags=re.DOTALL)
543
+
544
+ # # Remove any mention of "Document:" in the response
545
+ # response_text = re.sub(r'- Document:.*', '', response_text)
546
+
547
+ # # Remove any unwanted escape characters like \u and \u00
548
+ # response_text = re.sub(r'\\u[0-9A-Fa-f]{4}', '', response_text)
549
+
550
+ # # Ensure proper spacing between words and dates
551
+ # response_text = re.sub(r'([a-zA-Z])(\d)', r'\1 \2', response_text)
552
+ # response_text = re.sub(r'(\d)([a-zA-Z])', r'\1 \2', response_text)
553
+
554
+ # # Remove the phrase "Sure! The Responses are as follows:" from the actual content
555
+ # response_text = re.sub(r'Sure! The Responses are as follows:', '', response_text).strip()
556
+
557
+ # # Clean up the text by removing extra whitespace
558
+ # cleaned_response = re.sub(r'\s+', ' ', response_text).strip()
559
+
560
+ # # Format the final response with bullet points
561
+ # final_response = f"""
562
+ # Sure! Here is the response for your Query:
563
+ # • Document name - {document_name}
564
+ # • Page No - {page_number}
565
+ # • Response - {cleaned_response}
566
+ # """
567
+
568
+ # return final_response
569
+
570
+
571
+
572
+
573
+
574
+
575
+
576
+
577
+
578
+
579
+
580
+
581
+
582
+
583
+ # # Define a new template specifically for GPT-4o-mini in VDB Details mode
584
+ # gpt4o_mini_template_details = f"""
585
+ # As a highly specialized assistant, I provide precise, detailed, and informative responses. On this bright day of {current_date}, I'm equipped to assist with all your queries about Birmingham, Alabama, offering detailed insights tailored to your needs.
586
+ # Given your request, here is the detailed information you're seeking:
587
+ # {{context}}
588
+ # Question: {{question}}
589
+ # Detailed Answer:
590
+ # """
591
+
592
+
593
+
594
+ # import traceback
595
+
596
+ # def generate_answer(message, choice, retrieval_mode, selected_model):
597
+ # logging.debug(f"generate_answer called with choice: {choice}, retrieval_mode: {retrieval_mode}, and selected_model: {selected_model}")
598
+
599
+ # # Logic for disabling options for Phi-3.5
600
+ # if selected_model == "LM-2":
601
+ # choice = None
602
+ # retrieval_mode = None
603
+
604
+ # try:
605
+ # # Select the appropriate template based on the choice and model
606
+ # if choice == "Details" and selected_model == chat_model1: # GPT-4o-mini
607
+ # prompt_template = PromptTemplate(input_variables=["context", "question"], template=gpt4o_mini_template_details)
608
+ # elif choice == "Details":
609
+ # prompt_template = QA_CHAIN_PROMPT_1
610
+ # elif choice == "Conversational":
611
+ # prompt_template = QA_CHAIN_PROMPT_2
612
+ # else:
613
+ # prompt_template = QA_CHAIN_PROMPT_1 # Fallback to template1
614
+
615
+ # # # Handle hotel-related queries
616
+ # # if "hotel" in message.lower() or "hotels" in message.lower() and "birmingham" in message.lower():
617
+ # # logging.debug("Handling hotel-related query")
618
+ # # response = fetch_google_hotels()
619
+ # # logging.debug(f"Hotel response: {response}")
620
+ # # return response, extract_addresses(response)
621
+
622
+ # # # Handle restaurant-related queries
623
+ # # if "restaurant" in message.lower() or "restaurants" in message.lower() and "birmingham" in message.lower():
624
+ # # logging.debug("Handling restaurant-related query")
625
+ # # response = fetch_yelp_restaurants()
626
+ # # logging.debug(f"Restaurant response: {response}")
627
+ # # return response, extract_addresses(response)
628
+
629
+ # # # Handle flight-related queries
630
+ # # if "flight" in message.lower() or "flights" in message.lower() and "birmingham" in message.lower():
631
+ # # logging.debug("Handling flight-related query")
632
+ # # response = fetch_google_flights()
633
+ # # logging.debug(f"Flight response: {response}")
634
+ # # return response, extract_addresses(response)
635
+
636
+ # # Retrieval-based response
637
+ # if retrieval_mode == "VDB":
638
+ # logging.debug("Using VDB retrieval mode")
639
+ # if selected_model == chat_model:
640
+ # logging.debug("Selected model: LM-1")
641
+ # retriever = gpt_retriever
642
+ # context = retriever.get_relevant_documents(message)
643
+ # logging.debug(f"Retrieved context: {context}")
644
+
645
+ # prompt = prompt_template.format(context=context, question=message)
646
+ # logging.debug(f"Generated prompt: {prompt}")
647
+
648
+ # qa_chain = RetrievalQA.from_chain_type(
649
+ # llm=chat_model,
650
+ # chain_type="stuff",
651
+ # retriever=retriever,
652
+ # chain_type_kwargs={"prompt": prompt_template}
653
+ # )
654
+ # response = qa_chain({"query": message})
655
+ # logging.debug(f"LM-1 response: {response}")
656
+ # return response['result'], extract_addresses(response['result'])
657
+
658
+ # elif selected_model == chat_model1:
659
+ # logging.debug("Selected model: LM-3")
660
+ # retriever = gpt_retriever
661
+ # context = retriever.get_relevant_documents(message)
662
+ # logging.debug(f"Retrieved context: {context}")
663
+
664
+ # prompt = prompt_template.format(context=context, question=message)
665
+ # logging.debug(f"Generated prompt: {prompt}")
666
+
667
+ # qa_chain = RetrievalQA.from_chain_type(
668
+ # llm=chat_model1,
669
+ # chain_type="stuff",
670
+ # retriever=retriever,
671
+ # chain_type_kwargs={"prompt": prompt_template}
672
+ # )
673
+ # response = qa_chain({"query": message})
674
+ # logging.debug(f"LM-3 response: {response}")
675
+ # return response['result'], extract_addresses(response['result'])
676
+
677
+ # elif selected_model == phi_pipe:
678
+ # logging.debug("Selected model: LM-2")
679
+ # retriever = phi_retriever
680
+ # context_documents = retriever.get_relevant_documents(message)
681
+ # context = "\n".join([doc.page_content for doc in context_documents])
682
+ # logging.debug(f"Retrieved context for LM-2: {context}")
683
+
684
+ # # Use the correct template variable
685
+ # prompt = phi_custom_template.format(context=context, question=message)
686
+ # logging.debug(f"Generated LM-2 prompt: {prompt}")
687
+
688
+ # response = selected_model(prompt, **{
689
+ # "max_new_tokens": 250,
690
+ # "return_full_text": True,
691
+ # "temperature": 0.0,
692
+ # "do_sample": False,
693
+ # })
694
+
695
+ # if response:
696
+ # generated_text = response[0]['generated_text']
697
+ # logging.debug(f"LM-2 Response: {generated_text}")
698
+ # cleaned_response = clean_response(generated_text)
699
+ # return cleaned_response, extract_addresses(cleaned_response)
700
+ # else:
701
+ # logging.error("LM-2 did not return any response.")
702
+ # return "No response generated.", []
703
+
704
+ # elif retrieval_mode == "KGF":
705
+ # logging.debug("Using KGF retrieval mode")
706
+ # response = chain_neo4j.invoke({"question": message})
707
+ # logging.debug(f"KGF response: {response}")
708
+ # return response, extract_addresses(response)
709
+ # else:
710
+ # logging.error("Invalid retrieval mode selected.")
711
+ # return "Invalid retrieval mode selected.", []
712
+
713
+ # except Exception as e:
714
+ # logging.error(f"Error in generate_answer: {str(e)}")
715
+ # logging.error(traceback.format_exc())
716
+ # return "Sorry, I encountered an error while processing your request.", []
717
+
718
+
719
+
720
+
721
+ # def add_message(history, message):
722
+ # history.append((message, None))
723
+ # return history, gr.Textbox(value="", interactive=True, show_label=False)
724
+
725
+
726
+
727
+ # def print_like_dislike(x: gr.LikeData):
728
+ # print(x.index, x.value, x.liked)
729
+
730
+ # def extract_addresses(response):
731
+ # if not isinstance(response, str):
732
+ # response = str(response)
733
+ # address_patterns = [
734
+ # r'([A-Z].*,\sBirmingham,\sAL\s\d{5})',
735
+ # r'(\d{4}\s.*,\sBirmingham,\sAL\s\d{5})',
736
+ # r'([A-Z].*,\sAL\s\d{5})',
737
+ # r'([A-Z].*,.*\sSt,\sBirmingham,\sAL\s\d{5})',
738
+ # r'([A-Z].*,.*\sStreets,\sBirmingham,\sAL\s\d{5})',
739
+ # r'(\d{2}.*\sStreets)',
740
+ # r'([A-Z].*\s\d{2},\sBirmingham,\sAL\s\d{5})',
741
+ # r'([a-zA-Z]\s Birmingham)',
742
+ # r'([a-zA-Z].*,\sBirmingham,\sAL)',
743
+ # r'(.*),(Birmingham, AL,USA)$'
744
+ # r'(^Birmingham,AL$)',
745
+ # r'((.*)(Stadium|Field),.*,\sAL$)',
746
+ # r'((.*)(Stadium|Field),.*,\sFL$)',
747
+ # r'((.*)(Stadium|Field),.*,\sMS$)',
748
+ # r'((.*)(Stadium|Field),.*,\sAR$)',
749
+ # r'((.*)(Stadium|Field),.*,\sKY$)',
750
+ # r'((.*)(Stadium|Field),.*,\sTN$)',
751
+ # r'((.*)(Stadium|Field),.*,\sLA$)',
752
+ # r'((.*)(Stadium|Field),.*,\sFL$)'
753
+
754
+ # ]
755
+ # addresses = []
756
+ # for pattern in address_patterns:
757
+ # addresses.extend(re.findall(pattern, response))
758
+ # return addresses
759
+
760
+ # all_addresses = []
761
+
762
+ # def generate_map(location_names):
763
+ # global all_addresses
764
+ # all_addresses.extend(location_names)
765
+
766
+ # api_key = os.environ['GOOGLEMAPS_API_KEY']
767
+ # gmaps = GoogleMapsClient(key=api_key)
768
+
769
+ # m = folium.Map(location=[33.5175, -86.809444], zoom_start=12)
770
+
771
+ # for location_name in all_addresses:
772
+ # geocode_result = gmaps.geocode(location_name)
773
+ # if geocode_result:
774
+ # location = geocode_result[0]['geometry']['location']
775
+ # folium.Marker(
776
+ # [location['lat'], location['lng']],
777
+ # tooltip=f"{geocode_result[0]['formatted_address']}"
778
+ # ).add_to(m)
779
+
780
+ # map_html = m._repr_html_()
781
+ # return map_html
782
+
783
+ # from diffusers import DiffusionPipeline
784
+ # import torch
785
+
786
+
787
+ # def fetch_local_news():
788
+ # api_key = os.environ['SERP_API']
789
+ # url = f'https://serpapi.com/search.json?engine=google_news&q=birmingham headline&api_key={api_key}'
790
+ # response = requests.get(url)
791
+ # if response.status_code == 200:
792
+ # results = response.json().get("news_results", [])
793
+ # news_html = """
794
+ # <h2 style="font-family: 'Georgia', serif; color: #ff0000; background-color: #f8f8f8; padding: 10px; border-radius: 10px;">Birmingham Today</h2>
795
+ # <style>
796
+ # .news-item {
797
+ # font-family: 'Verdana', sans-serif;
798
+ # color: #333;
799
+ # background-color: #f0f8ff;
800
+ # margin-bottom: 15px;
801
+ # padding: 10px;
802
+ # border-radius: 5px;
803
+ # transition: box-shadow 0.3s ease, background-color 0.3s ease;
804
+ # font-weight: bold;
805
+ # }
806
+ # .news-item:hover {
807
+ # box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
808
+ # background-color: #e6f7ff;
809
+ # }
810
+ # .news-item a {
811
+ # color: #1E90FF;
812
+ # text-decoration: none;
813
+ # font-weight: bold;
814
+ # }
815
+ # .news-item a:hover {
816
+ # text-decoration: underline;
817
+ # }
818
+ # .news-preview {
819
+ # position: absolute;
820
+ # display: none;
821
+ # border: 1px solid #ccc;
822
+ # border-radius: 5px;
823
+ # box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
824
+ # background-color: white;
825
+ # z-index: 1000;
826
+ # max-width: 300px;
827
+ # padding: 10px;
828
+ # font-family: 'Verdana', sans-serif;
829
+ # color: #333;
830
+ # }
831
+ # </style>
832
+ # <script>
833
+ # function showPreview(event, previewContent) {
834
+ # var previewBox = document.getElementById('news-preview');
835
+ # previewBox.innerHTML = previewContent;
836
+ # previewBox.style.left = event.pageX + 'px';
837
+ # previewBox.style.top = event.pageY + 'px';
838
+ # previewBox.style.display = 'block';
839
+ # }
840
+ # function hidePreview() {
841
+ # var previewBox = document.getElementById('news-preview');
842
+ # previewBox.style.display = 'none';
843
+ # }
844
+ # </script>
845
+ # <div id="news-preview" class="news-preview"></div>
846
+ # """
847
+ # for index, result in enumerate(results[:7]):
848
+ # title = result.get("title", "No title")
849
+ # link = result.get("link", "#")
850
+ # snippet = result.get("snippet", "")
851
+ # news_html += f"""
852
+ # <div class="news-item" onmouseover="showPreview(event, '{snippet}')" onmouseout="hidePreview()">
853
+ # <a href='{link}' target='_blank'>{index + 1}. {title}</a>
854
+ # <p>{snippet}</p>
855
+ # </div>
856
+ # """
857
+ # return news_html
858
+ # else:
859
+ # return "<p>Failed to fetch local news</p>"
860
+
861
+ # import numpy as np
862
+ # import torch
863
+ # from transformers import pipeline, AutoModelForSpeechSeq2Seq, AutoProcessor
864
+
865
+ # model_id = 'openai/whisper-large-v3'
866
+ # device = "cuda:0" if torch.cuda.is_available() else "cpu"
867
+ # torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
868
+ # model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype).to(device)
869
+ # processor = AutoProcessor.from_pretrained(model_id)
870
+
871
+ # pipe_asr = pipeline("automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, chunk_length_s=15, batch_size=16, torch_dtype=torch_dtype, device=device, return_timestamps=True)
872
+
873
+ # base_audio_drive = "/data/audio"
874
+
875
+ # #Normal Code with sample rate is 44100 Hz
876
+
877
+ # def transcribe_function(stream, new_chunk):
878
+ # try:
879
+ # sr, y = new_chunk[0], new_chunk[1]
880
+ # except TypeError:
881
+ # print(f"Error chunk structure: {type(new_chunk)}, content: {new_chunk}")
882
+ # return stream, "", None
883
+
884
+ # y = y.astype(np.float32) / np.max(np.abs(y))
885
+
886
+ # if stream is not None:
887
+ # stream = np.concatenate([stream, y])
888
+ # else:
889
+ # stream = y
890
+
891
+ # result = pipe_asr({"array": stream, "sampling_rate": sr}, return_timestamps=False)
892
+
893
+ # full_text = result.get("text","")
894
+
895
+ # return stream, full_text, result
896
+
897
+
898
+
899
+
900
+
901
+
902
+
903
+ # # def update_map_with_response(history):
904
+ # # if not history:
905
+ # # return ""
906
+ # # response = history[-1][1]
907
+ # # addresses = extract_addresses(response)
908
+ # # return generate_map(addresses)
909
+
910
+ # def clear_textbox():
911
+ # return ""
912
+
913
+ # # def show_map_if_details(history, choice):
914
+ # # if choice in ["Details", "Conversational"]:
915
+ # # return gr.update(visible=True), update_map_with_response(history)
916
+ # # else:
917
+ # # return gr.update(visible(False), "")
918
+
919
+
920
+
921
+
922
+
923
+
924
+
925
+
926
+ # def generate_audio_elevenlabs(text):
927
+ # XI_API_KEY = os.environ['ELEVENLABS_API']
928
+ # VOICE_ID = 'd9MIrwLnvDeH7aZb61E9'
929
+ # tts_url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/stream"
930
+ # headers = {
931
+ # "Accept": "application/json",
932
+ # "xi-api-key": XI_API_KEY
933
+ # }
934
+ # data = {
935
+ # "text": str(text),
936
+ # "model_id": "eleven_multilingual_v2",
937
+ # "voice_settings": {
938
+ # "stability": 1.0,
939
+ # "similarity_boost": 0.0,
940
+ # "style": 0.60,
941
+ # "use_speaker_boost": False
942
+ # }
943
+ # }
944
+ # response = requests.post(tts_url, headers=headers, json=data, stream=True)
945
+ # if response.ok:
946
+ # audio_segments = []
947
+ # with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as f:
948
+ # for chunk in response.iter_content(chunk_size=1024):
949
+ # if chunk:
950
+ # f.write(chunk)
951
+ # audio_segments.append(chunk)
952
+ # temp_audio_path = f.name
953
+
954
+ # # Combine all audio chunks into a single file
955
+ # combined_audio = AudioSegment.from_file(temp_audio_path, format="mp3")
956
+ # combined_audio_path = os.path.join(tempfile.gettempdir(), "elevenlabs_combined_audio.mp3")
957
+ # combined_audio.export(combined_audio_path, format="mp3")
958
+
959
+ # logging.debug(f"Audio saved to {combined_audio_path}")
960
+ # return combined_audio_path
961
+ # else:
962
+ # logging.error(f"Error generating audio: {response.text}")
963
+ # return None
964
+
965
+
966
+
967
+
968
+ # # chunking audio and then Process
969
+
970
+ # import concurrent.futures
971
+ # import tempfile
972
+ # import os
973
+ # import numpy as np
974
+ # import logging
975
+ # from queue import Queue
976
+ # from threading import Thread
977
+ # from scipy.io.wavfile import write as write_wav
978
+ # from parler_tts import ParlerTTSForConditionalGeneration, ParlerTTSStreamer
979
+ # from transformers import AutoTokenizer
980
+
981
+ # # Ensure your device is set to CUDA
982
+ # device = "cuda:0" if torch.cuda.is_available() else "cpu"
983
+
984
+ # repo_id = "parler-tts/parler-tts-mini-v1"
985
+
986
+ # def generate_audio_parler_tts(text):
987
+ # description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
988
+ # chunk_size_in_s = 0.5
989
+
990
+ # # Initialize the tokenizer and model
991
+ # parler_tokenizer = AutoTokenizer.from_pretrained(repo_id)
992
+ # parler_model = ParlerTTSForConditionalGeneration.from_pretrained(repo_id).to(device)
993
+ # sampling_rate = parler_model.audio_encoder.config.sampling_rate
994
+ # frame_rate = parler_model.audio_encoder.config.frame_rate
995
+
996
+ # def generate(text, description, play_steps_in_s=0.5):
997
+ # play_steps = int(frame_rate * play_steps_in_s)
998
+ # streamer = ParlerTTSStreamer(parler_model, device=device, play_steps=play_steps)
999
+
1000
+ # inputs = parler_tokenizer(description, return_tensors="pt").to(device)
1001
+ # prompt = parler_tokenizer(text, return_tensors="pt").to(device)
1002
+
1003
+ # generation_kwargs = dict(
1004
+ # input_ids=inputs.input_ids,
1005
+ # prompt_input_ids=prompt.input_ids,
1006
+ # attention_mask=inputs.attention_mask,
1007
+ # prompt_attention_mask=prompt.attention_mask,
1008
+ # streamer=streamer,
1009
+ # do_sample=True,
1010
+ # temperature=1.0,
1011
+ # min_new_tokens=10,
1012
+ # )
1013
+
1014
+ # thread = Thread(target=parler_model.generate, kwargs=generation_kwargs)
1015
+ # thread.start()
1016
+
1017
+ # for new_audio in streamer:
1018
+ # if new_audio.shape[0] == 0:
1019
+ # break
1020
+ # # Save or process each audio chunk as it is generated
1021
+ # yield sampling_rate, new_audio
1022
+
1023
+ # audio_segments = []
1024
+ # for (sampling_rate, audio_chunk) in generate(text, description, chunk_size_in_s):
1025
+ # audio_segments.append(audio_chunk)
1026
+
1027
+ # temp_audio_path = os.path.join(tempfile.gettempdir(), f"parler_tts_audio_chunk_{len(audio_segments)}.wav")
1028
+ # write_wav(temp_audio_path, sampling_rate, audio_chunk.astype(np.float32))
1029
+ # logging.debug(f"Saved chunk to {temp_audio_path}")
1030
+
1031
+
1032
+ # # Combine all the audio chunks into one audio file
1033
+ # combined_audio = np.concatenate(audio_segments)
1034
+ # combined_audio_path = os.path.join(tempfile.gettempdir(), "parler_tts_combined_audio_stream.wav")
1035
+
1036
+ # write_wav(combined_audio_path, sampling_rate, combined_audio.astype(np.float32))
1037
+
1038
+ # logging.debug(f"Combined audio saved to {combined_audio_path}")
1039
+ # return combined_audio_path
1040
+
1041
+
1042
+ # def fetch_local_events():
1043
+ # api_key = os.environ['SERP_API']
1044
+ # url = f'https://serpapi.com/search.json?engine=google_events&q=Events+in+Birmingham&hl=en&gl=us&api_key={api_key}'
1045
+ # response = requests.get(url)
1046
+ # if response.status_code == 200:
1047
+ # events_results = response.json().get("events_results", [])
1048
+ # events_html = """
1049
+ # <h2 style="font-family: 'Georgia', serif; color: #ff0000; background-color: #f8f8f8; padding: 10px; border-radius: 10px;">Local Events</h2>
1050
+ # <style>
1051
+ # table {
1052
+ # font-family: 'Verdana', sans-serif;
1053
+ # color: #333;
1054
+ # border-collapse: collapse;
1055
+ # width: 100%;
1056
+ # }
1057
+ # th, td {
1058
+ # border: 1px solid #fff !important;
1059
+ # padding: 8px;
1060
+ # }
1061
+ # th {
1062
+ # background-color: #f2f2f2;
1063
+ # color: #333;
1064
+ # text-align: left;
1065
+ # }
1066
+ # tr:hover {
1067
+ # background-color: #f5f5f5;
1068
+ # }
1069
+ # .event-link {
1070
+ # color: #1E90FF;
1071
+ # text-decoration: none;
1072
+ # }
1073
+ # .event-link:hover {
1074
+ # text-decoration: underline;
1075
+ # }
1076
+ # </style>
1077
+ # <table>
1078
+ # <tr>
1079
+ # <th>Title</th>
1080
+ # <th>Date and Time</th>
1081
+ # <th>Location</th>
1082
+ # </tr>
1083
+ # """
1084
+ # for event in events_results:
1085
+ # title = event.get("title", "No title")
1086
+ # date_info = event.get("date", {})
1087
+ # date = f"{date_info.get('start_date', '')} {date_info.get('when', '')}".replace("{", "").replace("}", "")
1088
+ # location = event.get("address", "No location")
1089
+ # if isinstance(location, list):
1090
+ # location = " ".join(location)
1091
+ # location = location.replace("[", "").replace("]", "")
1092
+ # link = event.get("link", "#")
1093
+ # events_html += f"""
1094
+ # <tr>
1095
+ # <td><a class='event-link' href='{link}' target='_blank'>{title}</a></td>
1096
+ # <td>{date}</td>
1097
+ # <td>{location}</td>
1098
+ # </tr>
1099
+ # """
1100
+ # events_html += "</table>"
1101
+ # return events_html
1102
+ # else:
1103
+ # return "<p>Failed to fetch local events</p>"
1104
+
1105
+ # def get_weather_icon(condition):
1106
+ # condition_map = {
1107
+ # "Clear": "c01d",
1108
+ # "Partly Cloudy": "c02d",
1109
+ # "Cloudy": "c03d",
1110
+ # "Overcast": "c04d",
1111
+ # "Mist": "a01d",
1112
+ # "Patchy rain possible": "r01d",
1113
+ # "Light rain": "r02d",
1114
+ # "Moderate rain": "r03d",
1115
+ # "Heavy rain": "r04d",
1116
+ # "Snow": "s01d",
1117
+ # "Thunderstorm": "t01d",
1118
+ # "Fog": "a05d",
1119
+ # }
1120
+ # return condition_map.get(condition, "c04d")
1121
+
1122
+ # def fetch_local_weather():
1123
+ # try:
1124
+ # api_key = os.environ['WEATHER_API']
1125
+ # url = f'https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline/birmingham?unitGroup=metric&include=events%2Calerts%2Chours%2Cdays%2Ccurrent&key={api_key}'
1126
+ # response = requests.get(url)
1127
+ # response.raise_for_status()
1128
+ # jsonData = response.json()
1129
+
1130
+ # current_conditions = jsonData.get("currentConditions", {})
1131
+ # temp_celsius = current_conditions.get("temp", "N/A")
1132
+
1133
+ # if temp_celsius != "N/A":
1134
+ # temp_fahrenheit = int((temp_celsius * 9/5) + 32)
1135
+ # else:
1136
+ # temp_fahrenheit = "N/A"
1137
+
1138
+ # condition = current_conditions.get("conditions", "N/A")
1139
+ # humidity = current_conditions.get("humidity", "N/A")
1140
+
1141
+ # weather_html = f"""
1142
+ # <div class="weather-theme">
1143
+ # <h2 style="font-family: 'Georgia', serif; color: #ff0000; background-color: #f8f8f8; padding: 10px; border-radius: 10px;">Local Weather</h2>
1144
+ # <div class="weather-content">
1145
+ # <div class="weather-icon">
1146
+ # <img src="https://www.weatherbit.io/static/img/icons/{get_weather_icon(condition)}.png" alt="{condition}" style="width: 100px; height: 100px;">
1147
+ # </div>
1148
+ # <div class="weather-details">
1149
+ # <p style="font-family: 'Verdana', sans-serif; color: #333; font-size: 1.2em;">Temperature: {temp_fahrenheit}°F</p>
1150
+ # <p style="font-family: 'Verdana', sans-serif; color: #333; font-size: 1.2em;">Condition: {condition}</p>
1151
+ # <p style="font-family: 'Verdana', sans-serif; color: #333; font-size: 1.2em;">Humidity: {humidity}%</p>
1152
+ # </div>
1153
+ # </div>
1154
+ # </div>
1155
+ # <style>
1156
+ # .weather-theme {{
1157
+ # animation: backgroundAnimation 10s infinite alternate;
1158
+ # border-radius: 10px;
1159
+ # padding: 10px;
1160
+ # margin-bottom: 15px;
1161
+ # background: linear-gradient(45deg, #ffcc33, #ff6666, #ffcc33, #ff6666);
1162
+ # background-size: 400% 400%;
1163
+ # box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
1164
+ # transition: box-shadow 0.3s ease, background-color 0.3s ease;
1165
+ # }}
1166
+ # .weather-theme:hover {{
1167
+ # box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2);
1168
+ # background-position: 100% 100%;
1169
+ # }}
1170
+ # @keyframes backgroundAnimation {{
1171
+ # 0% {{ background-position: 0% 50%; }}
1172
+ # 100% {{ background-position: 100% 50%; }}
1173
+ # }}
1174
+ # .weather-content {{
1175
+ # display: flex;
1176
+ # align-items: center;
1177
+ # }}
1178
+ # .weather-icon {{
1179
+ # flex: 1;
1180
+ # }}
1181
+ # .weather-details {{
1182
+ # flex 3;
1183
+ # }}
1184
+ # </style>
1185
+ # """
1186
+ # return weather_html
1187
+ # except requests.exceptions.RequestException as e:
1188
+ # return f"<p>Failed to fetch local weather: {e}</p>"
1189
+
1190
+
1191
+ # def handle_retrieval_mode_change(choice):
1192
+ # if choice == "KGF":
1193
+ # return gr.update(interactive=False), gr.update(interactive=False)
1194
+ # else:
1195
+ # return gr.update(interactive=True), gr.update(interactive=True)
1196
+
1197
+
1198
+
1199
+ # def handle_model_choice_change(selected_model):
1200
+ # if selected_model == "LM-2":
1201
+ # # Disable retrieval mode and select style when LM-2 is selected
1202
+ # return gr.update(interactive=False), gr.update(interactive=False), gr.update(interactive=False)
1203
+ # elif selected_model == "LM-1":
1204
+ # # Enable retrieval mode and select style for LM-1
1205
+ # return gr.update(interactive=True), gr.update(interactive=True), gr.update(interactive=True)
1206
+ # else:
1207
+ # # Default case: allow interaction
1208
+ # return gr.update(interactive=True), gr.update(interactive=True), gr.update(interactive=True)
1209
+
1210
+ # # def handle_model_choice_change(selected_model):
1211
+ # # if selected_model == "LM-2": # When LM-2 (Phi-3.5) is selected
1212
+ # # # Disable retrieval mode and select style when LM-2 is selected
1213
+ # # return (
1214
+ # # gr.update(interactive=False), # Disable retrieval mode
1215
+ # # gr.update(interactive=False), # Disable style (Details/Conversational)
1216
+ # # gr.update(interactive=False) # Disable the model choice itself
1217
+ # # )
1218
+ # # else:
1219
+ # # # Disable GPT-4o, GPT-4o-mini, and KGF, only Phi-3.5 works
1220
+ # # return (
1221
+ # # gr.update(interactive=True), # Allow retrieval mode for other models
1222
+ # # gr.update(interactive=True), # Allow style options for other models
1223
+ # # gr.update(interactive=True) # Allow other models to be selected
1224
+ # # )
1225
+
1226
+ # #Flux Coding
1227
+
1228
+
1229
+ # # Existing prompts for the Flux API
1230
+ # hardcoded_prompt_1 = "A high quality cinematic image for Toyota Truck in Birmingham skyline shot in the style of Michael Mann"
1231
+ # hardcoded_prompt_2 = "A high quality cinematic image for Alabama Quarterback close up emotional shot in the style of Michael Mann"
1232
+ # hardcoded_prompt_3 = "A high quality cinematic image for Taylor Swift concert in Birmingham skyline style of Michael Mann"
1233
+
1234
+ # # Function to call the Flux API and generate images
1235
+ # def generate_image_flux(prompt):
1236
+ # # client = Client("black-forest-labs/FLUX.1-schnell",hf_token=hf_token)
1237
+ # client = Client("Pijush2023/radar_flux")
1238
+ # result = client.predict(
1239
+ # prompt=prompt,
1240
+ # seed=0,
1241
+ # randomize_seed=True,
1242
+ # width=400,
1243
+ # height=400,
1244
+ # num_inference_steps=2,
1245
+ # api_name="/infer"
1246
+ # )
1247
+
1248
+ # # Assuming that the API response contains an image file or URL, extract the image part
1249
+ # if isinstance(result, tuple):
1250
+ # # Extract the image URL or path if it is a tuple
1251
+ # image_path_or_url = result[0] # Adjust this index based on the actual structure of the response
1252
+ # else:
1253
+ # image_path_or_url = result
1254
+
1255
+ # return image_path_or_url # Return the image path or URL directly
1256
+
1257
+ # # Function to update images with the three prompts
1258
+ # def update_images():
1259
+ # image_1 = generate_image_flux(hardcoded_prompt_1)
1260
+ # image_2 = generate_image_flux(hardcoded_prompt_2)
1261
+ # image_3 = generate_image_flux(hardcoded_prompt_3)
1262
+ # return image_1, image_2, image_3
1263
+
1264
+
1265
+
1266
+
1267
+
1268
+ # def format_restaurant_hotel_info(name, link, location, phone, rating, reviews, snippet):
1269
+ # return f"""
1270
+ # {name}
1271
+ # - Link: {link}
1272
+ # - Location: {location}
1273
+ # - Contact No: {phone}
1274
+ # - Rating: {rating} stars ({reviews} reviews)
1275
+ # - Snippet: {snippet}
1276
+ # """
1277
+
1278
+ # def fetch_yelp_restaurants():
1279
+ # # Introductory prompt for restaurants
1280
+ # intro_prompt = "Here are some of the top-rated restaurants in Birmingham, Alabama. I hope these suggestions help you find the perfect place to enjoy your meal:"
1281
+
1282
+ # params = {
1283
+ # "engine": "yelp",
1284
+ # "find_desc": "Restaurant",
1285
+ # "find_loc": "Birmingham, AL, USA",
1286
+ # "api_key": os.getenv("SERP_API")
1287
+ # }
1288
+
1289
+ # search = GoogleSearch(params)
1290
+ # results = search.get_dict()
1291
+ # organic_results = results.get("organic_results", [])
1292
+
1293
+ # response_text = f"{intro_prompt}\n"
1294
+
1295
+ # for result in organic_results[:5]: # Limiting to top 5 restaurants
1296
+ # name = result.get("title", "No name")
1297
+ # rating = result.get("rating", "No rating")
1298
+ # reviews = result.get("reviews", "No reviews")
1299
+ # phone = result.get("phone", "Not Available")
1300
+ # snippet = result.get("snippet", "Not Available")
1301
+ # location = f"{name}, Birmingham, AL,USA"
1302
+ # link = result.get("link", "#")
1303
+
1304
+ # response_text += format_restaurant_hotel_info(name, link, location, phone, rating, reviews, snippet)
1305
+
1306
+
1307
+ # return response_text
1308
+
1309
+
1310
+
1311
+
1312
+
1313
+
1314
+ # def format_hotel_info(name, link, location, rate_per_night, total_rate, description, check_in_time, check_out_time, amenities):
1315
+ # return f"""
1316
+ # {name}
1317
+ # - Link: {link}
1318
+ # - Location: {location}
1319
+ # - Rate per Night: {rate_per_night} (Before taxes/fees: {total_rate})
1320
+ # - Check-in Time: {check_in_time}
1321
+ # - Check-out Time: {check_out_time}
1322
+ # - Amenities: {amenities}
1323
+ # - Description: {description}
1324
+ # """
1325
+
1326
+ # def fetch_google_hotels(query="Birmingham Hotel", check_in=current_date1, check_out="2024-09-02", adults=2):
1327
+ # # Introductory prompt for hotels
1328
+ # intro_prompt = "Here are some of the best hotels in Birmingham, Alabama, for your stay. Each of these options offers a unique experience, whether you're looking for luxury, comfort, or convenience:"
1329
+
1330
+ # params = {
1331
+ # "engine": "google_hotels",
1332
+ # "q": query,
1333
+ # "check_in_date": check_in,
1334
+ # "check_out_date": check_out,
1335
+ # "adults": str(adults),
1336
+ # "currency": "USD",
1337
+ # "gl": "us",
1338
+ # "hl": "en",
1339
+ # "api_key": os.getenv("SERP_API")
1340
+ # }
1341
+
1342
+ # search = GoogleSearch(params)
1343
+ # results = search.get_dict()
1344
+ # hotel_results = results.get("properties", [])
1345
+
1346
+ # hotel_info = f"{intro_prompt}\n"
1347
+ # for hotel in hotel_results[:5]: # Limiting to top 5 hotels
1348
+ # name = hotel.get('name', 'No name')
1349
+ # description = hotel.get('description', 'No description')
1350
+ # link = hotel.get('link', '#')
1351
+ # check_in_time = hotel.get('check_in_time', 'N/A')
1352
+ # check_out_time = hotel.get('check_out_time', 'N/A')
1353
+ # rate_per_night = hotel.get('rate_per_night', {}).get('lowest', 'N/A')
1354
+ # before_taxes_fees = hotel.get('rate_per_night', {}).get('before_taxes_fees', 'N/A')
1355
+ # total_rate = hotel.get('total_rate', {}).get('lowest', 'N/A')
1356
+ # amenities = ", ".join(hotel.get('amenities', [])) if hotel.get('amenities') else "Not Available"
1357
+
1358
+ # location = f"{name}, Birmingham, AL,USA"
1359
+
1360
+ # hotel_info += format_hotel_info(
1361
+ # name,
1362
+ # link,
1363
+ # location,
1364
+ # rate_per_night,
1365
+ # total_rate,
1366
+ # description,
1367
+ # check_in_time,
1368
+ # check_out_time,
1369
+ # amenities
1370
+ # )
1371
+
1372
+
1373
+ # return hotel_info
1374
+
1375
+
1376
+
1377
+
1378
+ # def format_flight_info(flight_number, departure_airport, departure_time, arrival_airport, arrival_time, duration, airplane):
1379
+ # return f"""
1380
+ # Flight {flight_number}
1381
+ # - Departure: {departure_airport} at {departure_time}
1382
+ # - Arrival: {arrival_airport} at {arrival_time}
1383
+ # - Duration: {duration} minutes
1384
+ # - Airplane: {airplane}
1385
+ # """
1386
+
1387
+ # def fetch_google_flights(departure_id="JFK", arrival_id="BHM", outbound_date=current_date1, return_date="2024-08-20"):
1388
+ # # Introductory prompt for flights
1389
+ # intro_prompt = "Here are some available flights from JFK to Birmingham, Alabama. These options provide a range of times and durations to fit your travel needs:"
1390
+
1391
+ # params = {
1392
+ # "engine": "google_flights",
1393
+ # "departure_id": departure_id,
1394
+ # "arrival_id": arrival_id,
1395
+ # "outbound_date": outbound_date,
1396
+ # "return_date": return_date,
1397
+ # "currency": "USD",
1398
+ # "hl": "en",
1399
+ # "api_key": os.getenv("SERP_API")
1400
+ # }
1401
+
1402
+ # search = GoogleSearch(params)
1403
+ # results = search.get_dict()
1404
+
1405
+ # # Extract flight details from the results
1406
+ # best_flights = results.get('best_flights', [])
1407
+ # flight_info = f"{intro_prompt}\n"
1408
+
1409
+ # # Process each flight in the best_flights list
1410
+ # for i, flight in enumerate(best_flights, start=1):
1411
+ # for segment in flight.get('flights', []):
1412
+ # departure_airport = segment.get('departure_airport', {}).get('name', 'Unknown Departure Airport')
1413
+ # departure_time = segment.get('departure_airport', {}).get('time', 'Unknown Time')
1414
+ # arrival_airport = segment.get('arrival_airport', {}).get('name', 'Unknown Arrival Airport')
1415
+ # arrival_time = segment.get('arrival_airport', {}).get('time', 'Unknown Time')
1416
+ # duration = segment.get('duration', 'Unknown Duration')
1417
+ # airplane = segment.get('airplane', 'Unknown Airplane')
1418
+
1419
+ # # Format the flight segment details
1420
+ # flight_info += format_flight_info(
1421
+ # flight_number=i,
1422
+ # departure_airport=departure_airport,
1423
+ # departure_time=departure_time,
1424
+ # arrival_airport=arrival_airport,
1425
+ # arrival_time=arrival_time,
1426
+ # duration=duration,
1427
+ # airplane=airplane
1428
+ # )
1429
+
1430
+
1431
+ # return flight_info
1432
+
1433
+
1434
+ # # examples = [
1435
+ # # [
1436
+ # # "What are the concerts in Birmingham?",
1437
+ # # ],
1438
+ # # [
1439
+ # # "what are some of the upcoming matches of crimson tide?",
1440
+ # # ],
1441
+ # # [
1442
+ # # "where from i will get a Hamburger?",
1443
+ # # ],
1444
+ # # [
1445
+ # # "What are some of the hotels at birmingham?",
1446
+ # # ],
1447
+ # # [
1448
+ # # "how can i connect the alexa to the radio?"
1449
+ # # ],
1450
+ # # [
1451
+ # # "What are some of the good clubs at birmingham?"
1452
+ # # ],
1453
+ # # [
1454
+ # # "How do I call the radio station?",
1455
+ # # ],
1456
+ # # [
1457
+ # # "What’s the spread?"
1458
+ # # ],
1459
+ # # [
1460
+ # # "What time is Crimson Tide Rewind?"
1461
+ # # ],
1462
+ # # [
1463
+ # # "What time is Alabama kick-off?"
1464
+ # # ],
1465
+ # # [
1466
+ # # "who are some of the popular players of crimson tide?"
1467
+ # # ]
1468
+ # # ]
1469
+
1470
+ # # # Function to insert the prompt into the textbox when clicked
1471
+ # # def insert_prompt(current_text, prompt):
1472
+ # # return prompt[0] if prompt else current_text
1473
+ # # List of files for dropdown
1474
+
1475
+ # files_list = [
1476
+ # "176769_000219051794_V2.pdf",
1477
+ # "177115_000230633979_V2.pdf",
1478
+ # "177273_000219963851_V2.pdf",
1479
+ # "177429_000219726050_V2.pdf",
1480
+ # "177495_000219962533_V2.pdf",
1481
+ # "178041_000197000852_V2.pdf"
1482
+ # ]
1483
+
1484
+
1485
+
1486
+
1487
+
1488
+
1489
+
1490
+ # with gr.Blocks(theme='gradio/soft') as demo:
1491
+
1492
+
1493
+ # with gr.Row():
1494
+ # with gr.Column():
1495
+ # state = gr.State()
1496
+
1497
+ # # File dropdown
1498
+ # # file_dropdown = gr.Dropdown(label="Select Document", choices=files_list, value=None)
1499
+
1500
+ # chatbot = gr.Chatbot([], elem_id="RADAR:Channel 94.1", bubble_full_width=False)
1501
+ # choice = gr.Radio(label="Select Style", choices=["Details", "Conversational"], value="Conversational",interactive=False,visible=False)
1502
+ # retrieval_mode = gr.Radio(label="Retrieval Mode", choices=["VDB", "KGF"], value="VDB",interactive=False,visible=False)
1503
+ # model_choice = gr.Dropdown(label="Choose Model", choices=["LM-2"], value="LM-2")
1504
+
1505
+ # # Link the dropdown change to handle_model_choice_change
1506
+ # model_choice.change(fn=handle_model_choice_change, inputs=model_choice, outputs=[retrieval_mode, choice, choice])
1507
+
1508
+ # # gr.Markdown("<h1 style='color: red;'>Talk to RADAR</h1>", elem_id="voice-markdown")
1509
+
1510
+ # chat_input = gr.Textbox(show_copy_button=True, interactive=True, show_label=False, label="ASK Radar !!!")
1511
+ # tts_choice = gr.Radio(label="Select TTS System", choices=["Alpha", "Beta"], value="Alpha")
1512
+
1513
+ # retriever_button = gr.Button("Retriever")
1514
+
1515
+ # clear_button = gr.Button("Clear")
1516
+ # clear_button.click(lambda: [None, None], outputs=[chat_input, state])
1517
+
1518
+ # # gr.Markdown("<h1 style='color: red;'>Radar Map</h1>", elem_id="Map-Radar")
1519
+ # # location_output = gr.HTML()
1520
+ # audio_output = gr.Audio(interactive=False, autoplay=True)
1521
+
1522
+ # def stop_audio():
1523
+ # audio_output.stop()
1524
+ # return None
1525
+
1526
+
1527
+
1528
+
1529
+
1530
+ # retriever_sequence = (
1531
+ # retriever_button.click(fn=stop_audio, inputs=[], outputs=[audio_output], api_name="api_stop_audio_recording")
1532
+ # .then(fn=add_message, inputs=[chatbot, chat_input], outputs=[chatbot, chat_input], api_name="api_addprompt_chathistory")
1533
+ # # First, generate the bot response
1534
+ # .then(fn=generate_bot_response, inputs=[chatbot, choice, retrieval_mode, model_choice], outputs=[chatbot], api_name="api_generate_bot_response")
1535
+ # # Then, generate the TTS response based on the bot's response
1536
+ # .then(fn=generate_tts_response, inputs=[chatbot, tts_choice], outputs=[audio_output], api_name="api_generate_tts_response")
1537
+ # .then(fn=clear_textbox, inputs=[], outputs=[chat_input], api_name="api_clear_textbox")
1538
+ # )
1539
+
1540
+
1541
+
1542
+
1543
+
1544
+
1545
+
1546
+
1547
+ # chat_input.submit(fn=stop_audio, inputs=[], outputs=[audio_output], api_name="api_stop_audio_recording").then(
1548
+ # fn=add_message, inputs=[chatbot, chat_input], outputs=[chatbot, chat_input], api_name="api_addprompt_chathistory"
1549
+ # ).then(
1550
+ # # First, generate the bot response
1551
+ # fn=generate_bot_response, inputs=[chatbot, choice, retrieval_mode, model_choice], outputs=[chatbot], api_name="api_generate_bot_response"
1552
+ # ).then(
1553
+ # # Then, generate the TTS response based on the bot's response
1554
+ # fn=generate_tts_response, inputs=[chatbot, tts_choice], outputs=[audio_output], api_name="api_generate_tts_response"
1555
+ # ).then(
1556
+ # fn=clear_textbox, inputs=[], outputs=[chat_input], api_name="api_clear_textbox"
1557
+ # )
1558
+
1559
+
1560
+
1561
+
1562
+
1563
+
1564
+
1565
+ # audio_input = gr.Audio(sources=["microphone"], streaming=True, type='numpy', every=0.1)
1566
+ # audio_input.stream(transcribe_function, inputs=[state, audio_input], outputs=[state, chat_input], api_name="api_voice_to_text")
1567
+
1568
+ # # gr.Markdown("<h1 style='color: red;'>Example Prompts</h1>", elem_id="Example-Prompts")
1569
+ # # gr.Examples(examples=examples, fn=insert_prompt,inputs=chat_input, outputs=chat_input)
1570
+
1571
+ # # with gr.Column():
1572
+ # # weather_output = gr.HTML(value=fetch_local_weather())
1573
+ # # news_output = gr.HTML(value=fetch_local_news())
1574
+ # # events_output = gr.HTML(value=fetch_local_events())
1575
+
1576
+ # # with gr.Column():
1577
+
1578
+
1579
+ # # # Call update_images during the initial load to display images when the interface appears
1580
+ # # initial_images = update_images()
1581
+
1582
+ # # # Displaying the images generated using Flux API directly
1583
+ # # image_output_1 = gr.Image(value=initial_images[0], label="Image 1", elem_id="flux_image_1", width=400, height=400)
1584
+ # # image_output_2 = gr.Image(value=initial_images[1], label="Image 2", elem_id="flux_image_2", width=400, height=400)
1585
+ # # image_output_3 = gr.Image(value=initial_images[2], label="Image 3", elem_id="flux_image_3", width=400, height=400)
1586
+
1587
+ # # # Refresh button to update images
1588
+ # # refresh_button = gr.Button("Refresh Images")
1589
+ # # refresh_button.click(fn=update_images, inputs=None, outputs=[image_output_1, image_output_2, image_output_3])
1590
+
1591
+
1592
+
1593
+
1594
+
1595
+
1596
+
1597
+ # demo.queue()
1598
+ # demo.launch(show_error=True)
1599
  from gradio_client import Client
1600
  import gradio as gr
1601
  import requests
 
1700
  torch_dtype="auto",
1701
  trust_remote_code=True,
1702
  )
1703
  tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
 
 
1704
  return pipeline("text-generation", model=model, tokenizer=tokenizer)
1705
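  # The pipeline returned above is what the app uses as phi_pipe and calls directly in the
  # LM-2 branch below, roughly:
  #   output = phi_pipe(prompt, max_new_tokens=250, return_full_text=True, temperature=0.1, do_sample=True)
  #   text = output[0]['generated_text']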
 
1706
  def initialize_gpt_model():
 
1723
 
1724
 
1725
  # Existing embeddings and vector store for GPT-4o
1726
+ gpt_embeddings = OpenAIEmbeddings(api_key=os.environ['OPENAI_API_KEY'])
1727
+ gpt_vectorstore = PineconeVectorStore(index_name="italyv109102024", embedding=gpt_embeddings)
 
1728
  gpt_retriever = gpt_vectorstore.as_retriever(search_kwargs={'k': 5})
1729
 
1730
  # New vector store setup for Phi-3.5
1731
  phi_embeddings = embeddings
1732
+ phi_vectorstore = PineconeVectorStore(index_name="italyv109102024", embedding=embeddings)
1733
  phi_retriever = phi_vectorstore.as_retriever(search_kwargs={'k': 5})
1734
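  # Note: the GPT-4o and Phi-3.5 retrievers point at the same Pinecone index
  # ("italyv109102024") and both return the top 5 matches (k=5); what differs
  # downstream is the prompt template each model receives.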
 
1735
 
 
1741
  pc = Pinecone(api_key=os.environ['PINECONE_API_KEY'])
1742
 
1743
  index_name = "italyv109102024"
 
1744
  vectorstore = PineconeVectorStore(index_name=index_name, embedding=embeddings)
1745
  retriever = vectorstore.as_retriever(search_kwargs={'k': 5})
1746
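  # Minimal usage sketch (assumes the Pinecone index and API keys above are configured);
  # a retriever built with as_retriever(search_kwargs={'k': 5}) returns the five best-matching
  # chunks for a query, e.g.:
  #   docs = retriever.get_relevant_documents("example question about the indexed documents")
  #   context = "\n".join(doc.page_content for doc in docs)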
 
 
1787
  Question: {{question}}
1788
  Helpful Answer:"""
1789
 
1790
+ template2 =f"""Hello there! As your friendly and knowledgeable guide here in Birmingham, Alabama . I'm here to help you discover the best experiences this beautiful city has to offer. It's a bright and sunny day today, {current_date}, and I’m excited to assist you with any insights or recommendations you need.
1791
+ Whether you're looking for local events, sports ,clubs,concerts etc or just a great place to grab a bite, I've got you covered.Keep your response casual, short and sweet for the quickest response.Don't reveal the location and give the response in a descriptive way, I'm here to help make your time in Birmingham unforgettable!
1792
+ "It’s always a pleasure to assist you!"
 
 
 
 
1793
  {{context}}
1794
  Question: {{question}}
1795
  Helpful Answer:"""
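  # Note: template2 is an f-string, so {current_date} is interpolated at import time, while the
  # doubled braces {{context}} and {{question}} survive as literal {context}/{question}
  # placeholders for the retrieval prompt that gets filled in later.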
 
1951
  # Sure! Here's the information:
1952
  # """
1953

1954
  phi_custom_template = """
1955
  <|system|>
1956
+ Sei un esperto della lingua italiana e un madrelingua italiano. Il tuo compito è fornire risposte concise, dirette e brevi basate sul documento fornito. Dovresti restituire le informazioni nel seguente formato:
1957
+ - Nome del documento: (il nome del documento)
 
1958
  - Numero di pagina: (numero di pagina)
1959
+ - Contenuto effettivo: (contenuto rilevante del documento)
1960
+ Alla fine, fornisci una sezione separata per la risposta nel seguente formato:
1961
+ - Risposta: (la risposta alla domanda)
1962
+ Se non riesci a trovare la risposta nel documento, rispondi semplicemente con "Questa domanda è al di là delle mie conoscenze". Ecco i dettagli del documento da considerare:
1963
  <|end|>
 
1964
  <|user|>
1965
  {context}
1966
  Question: {question}<|end|>
 
1967
  <|assistant|>
1968
  Sure! The Responses are as follows:
1969
  """
 
2236
  logging.debug(f"LM-3 response: {response}")
2237
  return response['result'], extract_addresses(response['result'])
2238
 
2239
+
2240
+
2241
  elif selected_model == phi_pipe:
2242
  logging.debug("Selected model: LM-2")
2243
  retriever = phi_retriever
 
2252
  response = selected_model(prompt, **{
2253
  "max_new_tokens": 250,
2254
  "return_full_text": True,
2255
+ "temperature": 0.1,
2256
+ "do_sample": True,
2257
  })
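  # Decoding note: max_new_tokens caps the reply length; temperature 0.1 with do_sample=True
  # keeps generation close to deterministic while still sampling rather than using pure greedy decoding.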
2258
 
2259
  if response:
 
2279
  logging.error(traceback.format_exc())
2280
  return "Sorry, I encountered an error while processing your request.", []
2281
 
2282
+ # def generate_answer(message, choice, retrieval_mode, selected_model):
2283
+ # # Logic for Phi-3.5
2284
+ # if selected_model == phi_pipe: # LM-2 Phi-3.5 selected
2285
+ # retriever = phi_retriever
2286
+ # context_documents = retriever.get_relevant_documents(message)
2287
+ # context = "\n".join([doc.page_content for doc in context_documents])
2288
+
2289
+ # # Use the correct template for Phi-3.5
2290
+ # prompt = phi_custom_template.format(context=context, question=message)
2291
+
2292
+ # response = selected_model(prompt, **{
2293
+ # "max_new_tokens": 400,
2294
+ # "return_full_text": True,
2295
+ # "temperature": 0.7,
2296
+ # "do_sample": True,
2297
+ # })
2298
+
2299
+ # if response:
2300
+ # generated_text = response[0]['generated_text']
2301
+ # cleaned_response = clean_response(generated_text)
2302
+ # # return cleaned_response, extract_addresses(cleaned_response)
2303
+ # return cleaned_response
2304
+ # else:
2305
+ # return "No response generated.", []
2306
+
2307
+
2308
+
2309
+
2310
 
2311
 
2312
 
 
3062
  # # Function to insert the prompt into the textbox when clicked
3063
  # def insert_prompt(current_text, prompt):
3064
  # return prompt[0] if prompt else current_text
3065
 
3066
 
3067
 
 
3075
  with gr.Column():
3076
  state = gr.State()
3077
 
 
 
 
3078
  chatbot = gr.Chatbot([], elem_id="RADAR:Channel 94.1", bubble_full_width=False)
3079
  choice = gr.Radio(label="Select Style", choices=["Details", "Conversational"], value="Conversational",interactive=False,visible=False)
3080
  retrieval_mode = gr.Radio(label="Retrieval Mode", choices=["VDB", "KGF"], value="VDB",interactive=False,visible=False)