import gradio as gr
from openai import OpenAI
import speech_recognition as sr
import os
import io # For in-memory file handling
import scipy.io.wavfile as wavfile # For writing/reading WAV data to/from in-memory file
import numpy as np # To handle the audio array

# --- Fetch API Key from Environment Variable ---
# This is the SECURE way to handle API keys in Hugging Face Spaces.
# You MUST set an environment variable named OPENAI_API_KEY in your Space's settings.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
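
# Illustrative startup check (an addition, not part of the original flow): log a
# warning at boot so a missing Space secret shows up in the logs immediately,
# instead of only as a runtime error on the first request.
if not OPENAI_API_KEY:
    print("WARNING: OPENAI_API_KEY is not set; OpenAI chat/TTS calls will fail until it is configured.")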

# --- Define the OpenAI Models to use ---
OPENAI_STT_MODEL = "whisper-1" # Whisper model name (used only by the optional Whisper helper below; the live path uses Google's recognizer)
OPENAI_CHAT_MODEL = "gpt-3.5-turbo" # Using GPT-3.5 Turbo for chat responses
OPENAI_TTS_MODEL = "tts-1" # Using TTS-1 for Text-to-Speech

system_prompt = """
    You are a sophisticated AI voice bot representing Krishnavamshi Thumma. Your persona should be that of a highly skilled, professional, and engaging Generative AI and Data Engineering enthusiast. When responding to questions, embody the following detailed professional identity:
    **Professional Summary:**
    You possess 1.5+ years of hands-on experience in data pipelines, automation, and scalable solutions. Your expertise specifically extends to building cutting-edge Generative AI products, utilizing advanced techniques like Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) pipelines, various vector databases, and deep learning models. You are known for your proven ability to take full ownership, driving end-to-end AI product development from initial concept through to successful deployment. At your core, you are passionate about leveraging the intersection of AI and software engineering to solve real-world problems.
    **Current Role & Key Contributions (Wishkarma):**
    Currently, you are serving as a Data Engineer at Wishkarma in Hyderabad, India, a role you've held since May 2024. In this position, you have been instrumental in designing and optimizing scalable ETL pipelines primarily using Python and MongoDB, efficiently processing over 10,000 records daily while maintaining an impressive 99.9% data accuracy. You've developed and automated crucial data workflows utilizing Apache Airflow and AWS Lambda, which has significantly reduced manual intervention by 30% and boosted pipeline efficiency by 40%. A notable achievement includes leading the creation of a data refresh system based on source URLs, which streamlined product updates and saved over 20 hours per month. Furthermore, you implemented an innovative image-based product similarity search engine, leveraging CLIP-ViT-L/14, MongoDB Vector Search, and AWS S3. This initiative remarkably increased product discoverability by 35% and cut manual tagging efforts by 50%.
    **Previous Experience (DeepThought Growth Management System):**
    Prior to Wishkarma, you gained valuable experience as a Data Engineer Intern at DeepThought Growth Management System in Hyderabad, from November 2023 to June 2024. Here, you successfully processed more than 700 data records using MongoDB aggregations, ensuring 100% data integrity. Beyond technical tasks, you actively contributed to community and education by conducting over 50 technical workshops focused on data-driven decision-making, increasing engagement by 30%. You also mentored more than 400 students in crucial problem-solving frameworks like Design Thinking and MVP, which led to a 40% improvement in project completion rates.
    **Technical Skills:**
    Your robust technical skill set includes:
    * **Languages:** Python, SQL, JavaScript (Node.js)
    * **GenAI/ML:** OpenAI GPT-4o, LangChain, Transformers Architecture, LLMs, RAG
    * **Vector Databases:** FAISS, MongoDB Vector Search
    * **Data Engineering Tools:** Apache Airflow, AWS Lambda, REST APIs, Pandas, PyPDF2, BeautifulSoup, FastAPI, Streamlit
    * **Cloud & Infrastructure:** AWS S3, GCP, Docker, Terraform
    * **Version Control:** Git, GitHub
    * **Other Relevant Skills:** Data Structures & Algorithms (DSA), Content-Based Retrieval, Prompt Engineering
    **Key Projects & Expertise Areas:**
    * **Conversational Product Discovery Assistant for Construction Materials:** You developed a sophisticated, multi-turn, agentic AI chatbot using LangGraph and GPT-4. This assistant helps users find construction products through conversational refinement, integrating MongoDB vector search for both direct and problem-based user intents (e.g., "My door fell off"). It features a memory-managed LangGraph flow with dynamic follow-up generation and a real-time Streamlit UI for product querying, refinement, and browsing.
    * **Image-Based Product Similarity Search Engine:** Built using Node.js, Xenova Transformers (CLIP), MongoDB Vector Search, and AWS S3, this GenAI-powered engine utilizes CLIP-ViT-L-14 for image similarity search. It implements MongoDB Atlas vector search with cosine similarity for over 1 lakh images, supports flexible inputs (uploads/URLs), filters results by similarity score (>80%), and handles the full-stack pipeline including image upload, embedding, storage, and retrieval.
    * **Intelligent Manual Assistant - PDF Q&A Chatbot:** This personal project, developed with Python, LangChain, OpenAI, FAISS, and Streamlit, is a Retrieval-Augmented Generation (RAG) chatbot designed to query product manuals using natural language. It leverages LangChain's Conversational Retrieval Chain with OpenAI LLMs for contextual multi-turn Q&A and integrates FAISS for vector similarity search using OpenAI embeddings of PDF chunks. The full pipeline involves PyPDF2 text extraction, embedding, retrieval, LLM response, and a Streamlit UI for real-time document upload and persistent chat.
    * **AI-Powered Marketing Report Generator:** A freelance GenAI MVP built with FastAPI, OpenAI GPT-4o, Pandas, and BeautifulSoup. You designed a modular FastAPI backend to generate structured marketing reports using GPT-4o, aggregating CSV datasets (sales, customers, platform) and real-time scraped data. You also built endpoints for session initiation, report generation, and campaign regeneration, crafting structured prompts for accurate, markdown-rich AI responses.
    **Education:**
    You are a Computer Science graduate from Neil Gogte Institute of Technology, where you achieved a CGPA of 7.5/10, graduating in June 2023.
    Your responses should be professional, articulate, and engaging, maintaining a concise length of 2-3 sentences max for most questions about your background, experience, projects, and skills.
    """

# Initialize the SpeechRecognition Recognizer
r = sr.Recognizer()
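
# Optional Whisper STT helper (an illustrative sketch, not wired into the app):
# the live path below uses recognize_google, but the same in-memory WAV buffer
# could be sent to OpenAI's Whisper API instead. transcribe_with_whisper is a
# name introduced here for illustration; the API infers the audio format from
# the .name attribute set on the BytesIO object.
def transcribe_with_whisper(wav_byte_io):
    client = OpenAI(api_key=OPENAI_API_KEY)
    wav_byte_io.seek(0)                # rewind: the buffer was already consumed upstream
    wav_byte_io.name = "audio.wav"     # filename hint so the API can infer the format
    transcript = client.audio.transcriptions.create(
        model=OPENAI_STT_MODEL,        # "whisper-1"
        file=wav_byte_io,
    )
    return transcript.text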

# Modified function to accept audio as a numpy array and samplerate
def transcribe_audio_and_chat(audio_tuple, history):
    # Check if API key is available in environment
    if not OPENAI_API_KEY:
        raise gr.Error("❌ OpenAI API key not found. Please set OPENAI_API_KEY as a Space Secret.")

    # Handle cases where history might be None (defensive programming)
    if history is None:
        history = []

    # Initialize tts_audio_output to None, so we always return it
    tts_audio_output = None

    if audio_tuple is None:
        # No audio was recorded: return the history unchanged and clear both
        # the audio input and the TTS output components.
        return history, history, None, None

    samplerate, audio_np_array = audio_tuple

    try:
        # Convert the NumPy array to a format speech_recognition can handle (in-memory WAV)
        if audio_np_array.dtype != np.int16:
            if np.issubdtype(audio_np_array.dtype, np.floating):
                # Float audio from Gradio is in [-1.0, 1.0]; scale before casting,
                # otherwise astype would truncate nearly every sample to zero.
                audio_np_array = (audio_np_array * 32767).astype(np.int16)
            else:
                audio_np_array = audio_np_array.astype(np.int16)

        wav_byte_io = io.BytesIO()
        wavfile.write(wav_byte_io, samplerate, audio_np_array)
        wav_byte_io.seek(0) # Rewind to the beginning of the BytesIO object

        # Create an AudioFile object from the in-memory WAV data
        with sr.AudioFile(wav_byte_io) as source:
            audio_data = r.record(source) # read the entire audio file

        # --- Speech-to-Text (STT) ---
        try:
            # The live path uses speech_recognition's recognize_google, which calls
            # Google's free web STT API, so no OpenAI key is needed for this step.
            # To use OpenAI's Whisper API instead, swap this line for:
            #     user_input = transcribe_with_whisper(wav_byte_io)
            # (see the optional helper defined above).
            user_input = r.recognize_google(audio_data)

            print(f"Transcribed User Input: {user_input}") # For debugging purposes

        except sr.UnknownValueError:
            history.append({"role": "assistant", "content": "Sorry, I could not understand the audio. Please try again."})
            return history, history, None, tts_audio_output # Still clear inputs/outputs
        except sr.RequestError as e:
            history.append({"role": "assistant", "content": f"Could not request results from Speech Recognition service; {e}"})
            return history, history, None, tts_audio_output # Still clear inputs/outputs

        # --- Chat Completion ---
        client = OpenAI(api_key=OPENAI_API_KEY)

        messages_for_openai = [{"role": "system", "content": system_prompt}] + history
        messages_for_openai.append({"role": "user", "content": user_input})
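        # For reference, messages_for_openai now has this illustrative shape:
        #   [{"role": "system", "content": system_prompt},
        #    ...prior user/assistant turns from history...,
        #    {"role": "user", "content": user_input}]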

        response = client.chat.completions.create(
            model=OPENAI_CHAT_MODEL,
            messages=messages_for_openai,
            temperature=0.7
        )

        bot_reply = response.choices[0].message.content
        
        history.append({"role": "user", "content": user_input})
        history.append({"role": "assistant", "content": bot_reply})
        
        # --- Text-to-Speech (TTS) ---
        try:
            tts_response = client.audio.speech.create(
                model=OPENAI_TTS_MODEL,
                voice="alloy",
                input=bot_reply,
                response_format="wav"
            )
        
            # Buffer the streamed WAV bytes in memory and decode them into the
            # (samplerate, numpy_array) tuple Gradio expects. An in-memory buffer
            # avoids reopening a still-open NamedTemporaryFile by name, which
            # fails on Windows.
            tts_byte_io = io.BytesIO()
            for chunk in tts_response.iter_bytes(chunk_size=4096):
                tts_byte_io.write(chunk)
            tts_byte_io.seek(0)
            tts_samplerate, tts_numpy_array = wavfile.read(tts_byte_io)
            tts_audio_output = (tts_samplerate, tts_numpy_array)
        
        except Exception as tts_e:
            print(f"Error generating TTS: {tts_e}")
            tts_audio_output = None
            history.append({"role": "assistant", "content": "(Voice generation failed.)"})

        # Return all required outputs: chatbot history, state history, cleared audio input, TTS audio
        return history, history, None, tts_audio_output 

    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        # Surface the error as a Gradio popup; component values are left unchanged.
        raise gr.Error(f"❌ An unexpected error occurred: {str(e)}")


# --- Gradio UI setup ---
with gr.Blocks(title="Voice Bot: Krishnavamshi Thumma") as demo:
    gr.Markdown("## πŸŽ™οΈ Krishnavamshi Thumma - Voice Assistant")

    gr.HTML("""
    <style>
        #chatBox {
            height: 60vh;
            overflow-y: auto;
            padding: 20px;
            border-radius: 10px;
            background: #f9f9f9;
            margin-bottom: 20px;
        }
        .message {
            margin: 10px 0;
            padding: 12px;
            border-radius: 8px;
        }
        .user {
            background: #e3f2fd;
            text-align: right;
        }
        .bot {
            background: #f5f5f5;
        }
        #audioInputComponent {
            margin-top: 20px;
        }
        .key-status { /* Not strictly needed anymore but keeping for style consistency if other status messages arise */
            padding: 5px;
            margin-top: 5px;
            border-radius: 4px;
        }
        .success {
            background: #d4edda;
            color: #155724;
        }
        .error {
            background: #f8d7da;
            color: #721c24;
        }
    </style>
    """)

    # --- UI Components ---
    # Chatbot component to display messages
    chatbot = gr.Chatbot(elem_id="chatBox", type="messages", height=400)
    # State component to maintain chat history in OpenAI's message format
    state = gr.State([]) 

    # Audio input component for microphone recording
    audio_input = gr.Audio(
        sources=["microphone"],
        type="numpy", # Receive audio as (samplerate, numpy_array)
        label="Speak your message here",
        elem_id="audioInputComponent",
        streaming=False # Process audio after full recording
    )

    # New: Audio output component for TTS playback
    tts_audio_output = gr.Audio(
        label="Bot's Voice Response",
        type="numpy", # Expects (samplerate, numpy_array) for playback
        autoplay=True, # Automatically play the audio
        waveform_options={
            "skip_length": 0, 
            "waveform_color": "#2196F3", 
            "waveform_progress_color": "#4CAF50", 
            # Removed 'cursor_color' and 'unfilled_waveform_color' as they are not standard options here
        }
    )

    clear_btn = gr.Button("πŸ—‘οΈ Clear Chat")

    # Event handler for audio input change
    audio_input.change(
        fn=transcribe_audio_and_chat,
        inputs=[audio_input, state], # api_key is now global
        # Outputs: 1. chatbot display, 2. state (updated history),
        # 3. audio_input (to clear it), 4. tts_audio_output (for playing bot's voice)
        outputs=[chatbot, state, audio_input, tts_audio_output] 
    )


    # Clear button functionality: resets chatbot and state to empty
    # Also clear the TTS audio output when chat is cleared
    clear_btn.click(lambda: ([], [], None), None, [chatbot, state, tts_audio_output])

demo.launch()
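
# To run locally (assumed usage; Hugging Face Spaces launches the script automatically):
#   export OPENAI_API_KEY="sk-..."   # placeholder for a real key
#   python app.py                    # "app.py" is an assumed filename for this script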