import gradio as gr
from openai import OpenAI
import speech_recognition as sr
import os
import io # For in-memory file handling
import scipy.io.wavfile as wavfile # For writing/reading WAV data to/from in-memory file
import numpy as np # To handle the audio array
import tempfile
# --- Fetch API Key from Environment Variable ---
# This is the SECURE way to handle API keys in Hugging Face Spaces.
# You MUST set an environment variable named OPENAI_API_KEY in your Space's settings.
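# (For local testing you can export it in your shell before launching, e.g. `export OPENAI_API_KEY=sk-...`.)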
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
# --- Define the OpenAI Models to use ---
OPENAI_STT_MODEL = "whisper-1" # Whisper model name (only used if you switch to the commented-out OpenAI STT path below)
OPENAI_CHAT_MODEL = "gpt-3.5-turbo" # Using GPT-3.5 Turbo for chat responses
OPENAI_TTS_MODEL = "tts-1" # Using TTS-1 for Text-to-Speech
system_prompt = """
You are a sophisticated AI voice bot representing Krishnavamshi Thumma. Your persona should be that of a highly skilled, professional, and engaging Generative AI and Data Engineering enthusiast. When responding to questions, embody the following detailed professional identity:
**Professional Summary:**
You possess 1.5+ years of hands-on experience in data pipelines, automation, and scalable solutions. Your expertise specifically extends to building cutting-edge Generative AI products, utilizing advanced techniques like Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) pipelines, various vector databases, and deep learning models. You are known for your proven ability to take full ownership, driving end-to-end AI product development from initial concept through to successful deployment. At your core, you are passionate about leveraging the intersection of AI and software engineering to solve real-world problems.
**Current Role & Key Contributions (Wishkarma):**
Currently, you are serving as a Data Engineer at Wishkarma in Hyderabad, India, a role you've held since May 2024. In this position, you have been instrumental in designing and optimizing scalable ETL pipelines primarily using Python and MongoDB, efficiently processing over 10,000 records daily while maintaining an impressive 99.9% data accuracy. You've developed and automated crucial data workflows utilizing Apache Airflow and AWS Lambda, which has significantly reduced manual intervention by 30% and boosted pipeline efficiency by 40%. A notable achievement includes leading the creation of a data refresh system based on source URLs, which streamlined product updates and saved over 20 hours per month. Furthermore, you implemented an innovative image-based product similarity search engine, leveraging CLIP-ViT-L/14, MongoDB Vector Search, and AWS S3. This initiative remarkably increased product discoverability by 35% and cut manual tagging efforts by 50%.
**Previous Experience (DeepThought Growth Management System):**
Prior to Wishkarma, you gained valuable experience as a Data Engineer Intern at DeepThought Growth Management System in Hyderabad, from November 2023 to June 2024. Here, you successfully processed more than 700 data records using MongoDB aggregations, ensuring 100% data integrity. Beyond technical tasks, you actively contributed to community and education by conducting over 50 technical workshops focused on data-driven decision-making, increasing engagement by 30%. You also mentored more than 400 students in crucial problem-solving frameworks like Design Thinking and MVP, which led to a 40% improvement in project completion rates.
**Technical Skills:**
Your robust technical skill set includes:
* **Languages:** Python, SQL, JavaScript (Node.js)
* **GenAI/ML:** OpenAI GPT-4o, LangChain, Transformers Architecture, LLMs, RAG
* **Vector Databases:** FAISS, MongoDB Vector Search
* **Data Engineering Tools:** Apache Airflow, AWS Lambda, REST APIs, Pandas, PyPDF2, BeautifulSoup, FastAPI, Streamlit
* **Cloud & Infrastructure:** AWS S3, GCP, Docker, Terraform
* **Version Control:** Git, GitHub
* **Other Relevant Skills:** Data Structures & Algorithms (DSA), Content-Based Retrieval, Prompt Engineering
**Key Projects & Expertise Areas:**
* **Conversational Product Discovery Assistant for Construction Materials:** You developed a sophisticated, multi-turn, agentic AI chatbot using LangGraph and GPT-4. This assistant helps users find construction products through conversational refinement, integrating MongoDB vector search for both direct and problem-based user intents (e.g., "My door fell off"). It features a memory-managed LangGraph flow with dynamic follow-up generation and a real-time Streamlit UI for product querying, refinement, and browsing.
* **Image-Based Product Similarity Search Engine:** Built using Node.js, Xenova Transformers (CLIP), MongoDB Vector Search, and AWS S3, this GenAI-powered engine utilizes CLIP-ViT-L-14 for image similarity search. It implements MongoDB Atlas vector search with cosine similarity for over 100,000 images, supports flexible inputs (uploads/URLs), filters results by similarity score (>80%), and handles the full-stack pipeline including image upload, embedding, storage, and retrieval.
* **Intelligent Manual Assistant - PDF Q&A Chatbot:** This personal project, developed with Python, LangChain, OpenAI, FAISS, and Streamlit, is a Retrieval-Augmented Generation (RAG) chatbot designed to query product manuals using natural language. It leverages LangChain's Conversational Retrieval Chain with OpenAI LLMs for contextual multi-turn Q&A and integrates FAISS for vector similarity search using OpenAI embeddings of PDF chunks. The full pipeline spans PDF parsing with PyPDF2, embedding, retrieval, LLM response generation, and a Streamlit UI for real-time document upload and persistent chat.
* **AI-Powered Marketing Report Generator:** A freelance GenAI MVP built with FastAPI, OpenAI GPT-4o, Pandas, and BeautifulSoup. You designed a modular FastAPI backend to generate structured marketing reports using GPT-4o, aggregating CSV datasets (sales, customers, platform) and real-time scraped data. You also built endpoints for session initiation, report generation, and campaign regeneration, crafting structured prompts for accurate, markdown-rich AI responses.
**Education:**
You are a Computer Science graduate from Neil Gogte Institute of Technology, where you achieved a CGPA of 7.5/10, graduating in June 2023.
Your responses should be professional, articulate, and engaging, maintaining a concise length of 2-3 sentences max for most questions about your background, experience, projects, and skills.
"""
# Initialize the SpeechRecognition Recognizer
r = sr.Recognizer()
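# Recognizer defaults (e.g. r.energy_threshold) can be tuned if noisy recordings are frequently misrecognized.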
# Main callback: transcribe the recorded microphone audio, get a chat reply, and synthesize speech
def transcribe_audio_and_chat(audio_tuple, history):
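    """
    Gradio callback for audio_input.change.

    Args:
        audio_tuple: (samplerate, numpy array) from the microphone component, or None.
        history: running chat history as a list of {"role", "content"} dicts.

    Returns:
        (chatbot messages, updated state, cleared audio input, TTS audio or None).
    """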
# Check if API key is available in environment
if not OPENAI_API_KEY:
raise gr.Error("❌ OpenAI API key not found. Please set OPENAI_API_KEY as a Space Secret.")
# Handle cases where history might be None (defensive programming)
if history is None:
history = []
# Initialize tts_audio_output to None, so we always return it
tts_audio_output = None
    if audio_tuple is None:
        # No audio was recorded: return the history unchanged and clear the
        # audio input/output components instead of adding an error to the chat.
        return history, history, None, None
samplerate, audio_np_array = audio_tuple
try:
# Convert the NumPy array to a format speech_recognition can handle (in-memory WAV)
        if audio_np_array.dtype != np.int16:
            # Scale float audio (values in [-1, 1]) before casting; a bare cast would silence it.
            scale = 32767 if np.issubdtype(audio_np_array.dtype, np.floating) else 1
            audio_np_array = (audio_np_array * scale).astype(np.int16)
wav_byte_io = io.BytesIO()
wavfile.write(wav_byte_io, samplerate, audio_np_array)
wav_byte_io.seek(0) # Rewind to the beginning of the BytesIO object
# Create an AudioFile object from the in-memory WAV data
with sr.AudioFile(wav_byte_io) as source:
audio_data = r.record(source) # read the entire audio file
# --- Speech-to-Text (STT) ---
try:
            # NOTE: STT here uses speech_recognition's recognize_google (Google Web Speech API, free tier).
            # The OpenAI client created below is only needed if you switch to the OpenAI Whisper
            # transcription path shown (commented out) further down.
            client = OpenAI(api_key=OPENAI_API_KEY)
            user_input = r.recognize_google(audio_data)  # Google STT on the recorded audio
# If you wanted to use OpenAI's Whisper ASR here, you'd do:
# audio_file_for_whisper = io.BytesIO(wav_byte_io.getvalue()) # Reset stream for Whisper
# audio_file_for_whisper.name = "audio.wav" # Whisper API needs a filename for BytesIO
# transcript = client.audio.transcriptions.create(
# model=OPENAI_STT_MODEL, # "whisper-1"
# file=audio_file_for_whisper
# )
# user_input = transcript.text
print(f"Transcribed User Input: {user_input}") # For debugging purposes
except sr.UnknownValueError:
history.append({"role": "assistant", "content": "Sorry, I could not understand the audio. Please try again."})
return history, history, None, tts_audio_output # Still clear inputs/outputs
except sr.RequestError as e:
history.append({"role": "assistant", "content": f"Could not request results from Speech Recognition service; {e}"})
return history, history, None, tts_audio_output # Still clear inputs/outputs
# --- Chat Completion ---
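        # history is already a list of OpenAI-style {"role": ..., "content": ...} dicts,
        # so it can be prepended with the system prompt and sent to the API as-is.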
client = OpenAI(api_key=OPENAI_API_KEY)
messages_for_openai = [{"role": "system", "content": system_prompt}] + history
messages_for_openai.append({"role": "user", "content": user_input})
response = client.chat.completions.create(
model=OPENAI_CHAT_MODEL,
messages=messages_for_openai,
temperature=0.7
)
bot_reply = response.choices[0].message.content
history.append({"role": "user", "content": user_input})
history.append({"role": "assistant", "content": bot_reply})
# --- Text-to-Speech (TTS) ---
try:
tts_response = client.audio.speech.create(
model=OPENAI_TTS_MODEL,
voice="alloy",
input=bot_reply,
response_format="wav"
)
with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as temp_wav:
for chunk in tts_response.iter_bytes(chunk_size=4096):
temp_wav.write(chunk)
temp_wav.flush() # Ensure all data is written
# Read the saved file into numpy array format
tts_samplerate, tts_numpy_array = wavfile.read(temp_wav.name)
tts_audio_output = (tts_samplerate, tts_numpy_array)
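            # Alternative sketch (assumes the response object exposes .read(), as the OpenAI
            # SDK's binary responses typically do): decode the WAV bytes in memory instead of
            # going through a temp file:
            #   tts_samplerate, tts_numpy_array = wavfile.read(io.BytesIO(tts_response.read()))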
except Exception as tts_e:
print(f"Error generating TTS: {tts_e}")
tts_audio_output = None
history.append({"role": "assistant", "content": "(Voice generation failed.)"})
# Return all required outputs: chatbot history, state history, cleared audio input, TTS audio
return history, history, None, tts_audio_output
except Exception as e:
print(f"An unexpected error occurred: {e}")
        # Surface the failure to the user as a Gradio error popup
raise gr.Error(f"❌ An unexpected error occurred: {str(e)}")
# --- Gradio UI setup ---
with gr.Blocks(title="Voice Bot: Krishnavamshi Thumma") as demo:
gr.Markdown("## 🎙️ Krishnavamshi Thumma - Voice Assistant")
gr.HTML("""
""")
# --- UI Components ---
# Chatbot component to display messages
chatbot = gr.Chatbot(elem_id="chatBox", type="messages", height=400)
# State component to maintain chat history in OpenAI's message format
state = gr.State([])
# Audio input component for microphone recording
audio_input = gr.Audio(
sources=["microphone"],
type="numpy", # Receive audio as (samplerate, numpy_array)
label="Speak your message here",
elem_id="audioInputComponent",
streaming=False # Process audio after full recording
)
    # Audio output component for TTS playback
tts_audio_output = gr.Audio(
label="Bot's Voice Response",
type="numpy", # Expects (samplerate, numpy_array) for playback
autoplay=True, # Automatically play the audio
waveform_options={
"skip_length": 0,
"waveform_color": "#2196F3",
"waveform_progress_color": "#4CAF50",
# Removed 'cursor_color' and 'unfilled_waveform_color' as they are not standard options here
}
)
clear_btn = gr.Button("🗑️ Clear Chat")
# Event handler for audio input change
audio_input.change(
fn=transcribe_audio_and_chat,
        inputs=[audio_input, state],  # the API key is read from the environment, so it is not a UI input
# Outputs: 1. chatbot display, 2. state (updated history),
# 3. audio_input (to clear it), 4. tts_audio_output (for playing bot's voice)
outputs=[chatbot, state, audio_input, tts_audio_output]
)
# Clear button functionality: resets chatbot and state to empty
# Also clear the TTS audio output when chat is cleared
clear_btn.click(lambda: ([], [], None), None, [chatbot, state, tts_audio_output])
demo.launch()