Commit f4e73a3
Parent: 329dfec
Liss, Alex (NYC-HUG) committed: stage 1 of refactoring to gradio
Files changed (11):
- .github/workflows/update_space.yml  +28 -0
- .gradio/certificate.pem  +31 -0
- GRADIO_README.md  +74 -0
- README.md  +6 -0
- agent.py  +1 -1
- app.py  +1 -4
- gradio_app.py  +228 -0
- gradio_graph.py  +62 -0
- gradio_llm.py  +43 -0
- gradio_requirements.txt  +12 -0
- gradio_utils.py  +53 -0
.github/workflows/update_space.yml
ADDED
@@ -0,0 +1,28 @@
+name: Run Python script
+
+on:
+  push:
+    branches:
+      - main
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+
+    steps:
+    - name: Checkout
+      uses: actions/checkout@v2
+
+    - name: Set up Python
+      uses: actions/setup-python@v2
+      with:
+        python-version: '3.9'
+
+    - name: Install Gradio
+      run: python -m pip install gradio
+
+    - name: Log in to Hugging Face
+      run: python -c 'import huggingface_hub; huggingface_hub.login(token="${{ secrets.hf_token }}")'
+
+    - name: Deploy to Spaces
+      run: gradio deploy
.gradio/certificate.pem
ADDED
@@ -0,0 +1,31 @@
+-----BEGIN CERTIFICATE-----
+MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
+TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
+cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
+WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
+ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
+MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
+h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
+0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
+A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
+T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
+B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
+B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
+KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
+OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
+jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
+qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
+rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
+HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
+hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
+ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
+3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
+NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
+ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
+TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
+jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
+oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
+4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
+mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
+emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
+-----END CERTIFICATE-----
GRADIO_README.md
ADDED
@@ -0,0 +1,74 @@
+# 49ers FanAI Hub - Gradio Version
+
+This is the Gradio version of the 49ers FanAI Hub, a conversational AI application that provides information about the San Francisco 49ers football team, players, games, and fan communities.
+
+## Overview
+
+This application uses:
+- **Gradio** for the web interface
+- **LangChain** for the agent framework
+- **Neo4j** for the graph database
+- **OpenAI** for the language model
+- **Zep** for memory management
+
+## Files
+
+- `gradio_app.py` - Main application file with Gradio UI
+- `gradio_utils.py` - Utility functions for the Gradio app
+- `gradio_graph.py` - Neo4j graph connection without Streamlit dependencies
+- `gradio_llm.py` - Language model initialization without Streamlit dependencies
+- `gradio_requirements.txt` - Dependencies for the Gradio version
+
+## Setup
+
+1. Install the required dependencies:
+   ```
+   pip install -r gradio_requirements.txt
+   ```
+
+2. Set up environment variables (create a `.env` file or set them in your environment):
+   ```
+   # OpenAI API
+   OPENAI_API_KEY=your_openai_api_key
+   OPENAI_MODEL=gpt-4-turbo  # or another model
+
+   # Neo4j Database
+   NEO4J_URI=your_neo4j_uri
+   NEO4J_USERNAME=your_neo4j_username
+   NEO4J_PASSWORD=your_neo4j_password
+
+   # Zep Memory (optional)
+   ZEP_API_KEY=your_zep_api_key
+   ```
+
+3. Run the application:
+   ```
+   python gradio_app.py
+   ```
+
+4. Open your browser and navigate to the URL shown in the terminal (typically http://127.0.0.1:7860)
+
+## Features
+
+- Chat interface for asking questions about the 49ers
+- Integration with Neo4j graph database for structured data queries
+- Memory management with Zep for conversation history
+- Support for various query types:
+  - Information about players, games, and fans
+  - Finding games based on descriptions
+  - Discovering connections between people in the 49ers industry
+
+## Differences from Streamlit Version
+
+This Gradio version provides the same functionality as the Streamlit version but with a different UI framework. Key differences include:
+
+1. **State Management**: Uses a custom AppState class instead of Streamlit's session_state
+2. **UI Components**: Uses Gradio's chat interface instead of Streamlit's chat components
+3. **Error Handling**: Uses console logging instead of Streamlit's error display
+4. **Session Management**: Uses global variables for session IDs instead of Streamlit's session context
+
+## Troubleshooting
+
+- If you encounter connection issues with Neo4j, check your database credentials and network connectivity
+- If the OpenAI API is not working, verify your API key and model name
+- For Zep memory issues, ensure the Zep service is running and your API key is correct
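The setup section above requires several environment variables before the app will start. As an illustration (not part of the repo), a minimal pre-flight check for the variable names listed in the README can be sketched like this; `missing_vars` and `REQUIRED` are hypothetical helpers:

```python
import os

# Variable names taken from the README's setup section.
REQUIRED = ("OPENAI_API_KEY", "NEO4J_URI", "NEO4J_USERNAME", "NEO4J_PASSWORD")

def missing_vars(env):
    """Return the required variable names absent (or empty) in the given mapping."""
    return [name for name in REQUIRED if not env.get(name)]

# With only the OpenAI key set, the three Neo4j variables are reported missing.
print(missing_vars({"OPENAI_API_KEY": "sk-..."}))
# Against the real environment:
print(missing_vars(os.environ))
```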
README.md
CHANGED
@@ -1,3 +1,9 @@
+---
+title: ifx-sandbox
+app_file: gradio_app.py
+sdk: gradio
+sdk_version: 5.24.0
+---
 # 49ers FanAI Hub - Streamlit Version
 
 This is a Streamlit-based chatbot application that provides information about the San Francisco 49ers, players, games, and fans. The application uses LangChain, Neo4j, and Zep for memory management.
agent.py
CHANGED
@@ -103,7 +103,7 @@ agent_executor = AgentExecutor(
     tools=tools,
     verbose=True,
     handle_parsing_errors=True,
-    max_iterations=
+    max_iterations=5  # Limit the number of iterations to prevent infinite loops
 )
 
 # Create a chat agent with memory
app.py
CHANGED
@@ -1,7 +1,4 @@
-"""
-Movie Chatbot powered by Langchain, Neo4j, Streamlit, and Zep.
-"""
-
+
 import os
 import uuid
 import streamlit as st
gradio_app.py
ADDED
@@ -0,0 +1,228 @@
+import os
+import uuid
+import asyncio
+import gradio as gr
+from zep_cloud.client import AsyncZep
+from zep_cloud.types import Message
+
+# Import our components
+# We need to modify the agent import to use our Gradio-compatible modules
+# But we can't modify agent.py directly, so we'll import it and patch it
+import agent
+from gradio_graph import graph
+import gradio_utils
+
+# Patch the agent module to use our Gradio-compatible modules
+import sys
+import importlib
+sys.modules['graph'] = importlib.import_module('gradio_graph')
+sys.modules['llm'] = importlib.import_module('gradio_llm')
+
+# Now we can safely import generate_response
+from agent import generate_response
+
+# Initialize Zep client
+zep_api_key = os.environ.get("ZEP_API_KEY")
+if not zep_api_key:
+    print("ZEP_API_KEY environment variable is not set. Memory features will be disabled.")
+    zep = None
+else:
+    zep = AsyncZep(api_key=zep_api_key)
+
+# Global state management (replacing Streamlit's session_state)
+class AppState:
+    def __init__(self):
+        self.messages = []
+        self.initialized = False
+        self.user_id = None
+        self.session_id = None
+
+# Create a global state instance
+state = AppState()
+
+# Add welcome message to state
+welcome_message = """
+# 🏈 Welcome to the 49ers FanAI Hub!
+
+I can help you with:
+- Information about the 49ers, players, and fans
+- Finding 49ers games based on plot descriptions or themes
+- Discovering connections between people in the 49ers industry
+
+What would you like to know about today?
+"""
+
+# Function to initialize the chat session
+async def initialize_chat():
+    """Set up the chat session when a user connects"""
+    try:
+        # Generate unique identifiers for the user and session
+        state.user_id = gradio_utils.get_user_id()
+        state.session_id = gradio_utils.get_session_id()
+
+        print(f"Starting new chat session. User ID: {state.user_id}, Session ID: {state.session_id}")
+
+        # Register user in Zep if available
+        if zep:
+            await zep.user.add(
+                user_id=state.user_id,
+                email="[email protected]",
+                first_name="User",
+                last_name="MovieFan",
+            )
+
+            # Start a new session in Zep
+            await zep.memory.add_session(
+                session_id=state.session_id,
+                user_id=state.user_id,
+            )
+
+
+        state.messages.append({"role": "assistant", "content": welcome_message})
+        state.initialized = True
+
+        return welcome_message
+
+    except Exception as e:
+        import traceback
+        print(f"Error in initialize_chat: {str(e)}")
+        print(f"Traceback: {traceback.format_exc()}")
+        error_message = "There was an error starting the chat. Please refresh the page and try again."
+        state.messages.append({"role": "system", "content": error_message})
+        return error_message
+
+# Function to process user messages
+async def process_message(message):
+    """Process user messages and generate responses with the agent"""
+    print("Starting message processing...")
+
+    try:
+        # Store user message in Zep memory if available
+        if zep:
+            print("Storing user message in Zep...")
+            await zep.memory.add(
+                session_id=state.session_id,
+                messages=[Message(role_type="user", content=message, role="user")]
+            )
+
+        # Add user message to state
+        state.messages.append({"role": "user", "content": message})
+
+        # Process with the agent
+        print('Calling generate_response function...')
+        agent_response = generate_response(message, state.session_id)
+        print(f"Agent response received: {agent_response}")
+
+        # Extract the output and metadata
+        output = agent_response.get("output", "")
+        metadata = agent_response.get("metadata", {})
+        print(f"Extracted output: {output}")
+        print(f"Extracted metadata: {metadata}")
+
+        # Add assistant response to state
+        state.messages.append({"role": "assistant", "content": output})
+
+        # Store assistant's response in Zep memory if available
+        if zep:
+            print("Storing assistant response in Zep...")
+            await zep.memory.add(
+                session_id=state.session_id,
+                messages=[Message(role_type="assistant", content=output, role="assistant")]
+            )
+            print("Assistant response stored in Zep")
+
+        return output
+
+    except Exception as e:
+        import traceback
+        print(f"Error in process_message: {str(e)}")
+        print(f"Traceback: {traceback.format_exc()}")
+        error_message = "I apologize, but I encountered an error. Could you please try again?"
+        state.messages.append({"role": "assistant", "content": error_message})
+        return error_message
+
+# Function to handle user input in Gradio
+def user_input(message, history):
+    """Handle user input and update chat history"""
+    # Return immediately to update the UI with user message
+    history.append({"role": "user", "content": message})
+    return "", history
+
+# Function to generate bot response in Gradio
+def bot_response(history):
+    """Generate bot response and update chat history"""
+    # Get the last user message
+    user_message = history[-1]["content"]
+
+    # Process the message using asyncio.run
+    response = asyncio.run(process_message(user_message))
+
+    # Add the assistant's response to history
+    history.append({"role": "assistant", "content": response})
+
+    return history
+
+# Function to initialize the chat when the app starts
+#async def on_app_start():
+#    """Initialize the chat when the app starts"""
+#    if not state.initialized:
+#        welcome_message = await initialize_chat()
+#        return [{"role": "assistant", "content": welcome_message}]
+#    return []
+
+# Initialize the chat before creating the interface
+#initial_messages = asyncio.run(on_app_start())
+initial_messages = [{"role": "assistant", "content": welcome_message}]
+
+# Create the Gradio interface
+with gr.Blocks(title="49ers FanAI Hub", theme=gr.themes.Soft()) as demo:
+    gr.Markdown("# 🏈 49ers FanAI Hub")
+
+    # Chat interface
+    chatbot = gr.Chatbot(
+        value=initial_messages,
+        height=500,
+        show_label=False,
+        elem_id="chatbot",
+        type="messages"  # Use the new messages format
+    )
+
+    # Input components
+    with gr.Row():
+        msg = gr.Textbox(
+            placeholder="Ask me about the 49ers...",
+            show_label=False,
+            scale=9
+        )
+        submit = gr.Button("Send", scale=1)
+
+    # Define a combined function for user input and bot response
+    async def process_and_respond(message, history):
+        # If not initialized yet, do it now
+        if not state.initialized:
+            welcome_message = await initialize_chat()
+            # Optionally show the welcome message right away
+            history.append({"role": "assistant", "content": welcome_message})
+
+        # Now handle the actual user message
+        history.append({"role": "user", "content": message})
+        response = await process_message(message)
+        history.append({"role": "assistant", "content": response})
+
+        return "", history
+
+
+
+    # Set up event handlers with the combined function - explicitly disable queue
+    msg.submit(process_and_respond, [msg, chatbot], [msg, chatbot], queue=False)
+    submit.click(process_and_respond, [msg, chatbot], [msg, chatbot], queue=False)
+
+    # Add a clear button
+    clear = gr.Button("Clear Conversation")
+    clear.click(lambda: [], None, chatbot, queue=False)
+
+# Launch the app
+if __name__ == "__main__":
+    # Disable the queue completely
+    #demo.queue(enabled=False)
+    demo.launch(share=True)
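The `sys.modules` patch at the top of `gradio_app.py` works because Python consults `sys.modules` before searching for a module on disk, so an alias registered there intercepts later imports. A minimal self-contained sketch of the technique (`compat_backend` and `backend` are hypothetical names, not the repo's actual modules):

```python
import sys
import types

# Build a stand-in module the way gradio_graph.py stands in for graph.py.
compat = types.ModuleType("compat_backend")
compat.answer = lambda: "gradio"

# Register it under the name other code will import.
sys.modules["backend"] = compat

# Any later `import backend` now resolves to the stand-in,
# even though no backend.py exists on disk.
import backend
print(backend.answer())  # prints "gradio"
```

This is why `gradio_app.py` installs the aliases *before* `from agent import generate_response`: by the time `agent.py` runs its own `import graph` / `import llm`, the Gradio-compatible modules are already in place.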
gradio_graph.py
ADDED
@@ -0,0 +1,62 @@
+"""
+This module initializes the Neo4j graph connection without Streamlit dependencies.
+"""
+
+import os
+from dotenv import load_dotenv
+from langchain_neo4j import Neo4jGraph
+
+# Load environment variables
+load_dotenv()
+
+# Get Neo4j credentials from environment
+def get_credential(key_name):
+    """Get credential from environment variables"""
+    # Try different possible environment variable names
+    possible_names = [key_name]
+
+    # Add alternative names
+    if key_name.startswith("AURA_"):
+        possible_names.append(f"NEO4J_{key_name[5:]}")
+    elif key_name.startswith("NEO4J_"):
+        possible_names.append(f"AURA_{key_name[6:]}")
+
+    # Try each possible name
+    for name in possible_names:
+        value = os.environ.get(name)
+        if value:
+            return value
+
+    return None
+
+# Get Neo4j credentials
+AURA_CONNECTION_URI = get_credential("AURA_CONNECTION_URI") or get_credential("NEO4J_URI")
+AURA_USERNAME = get_credential("AURA_USERNAME") or get_credential("NEO4J_USERNAME")
+AURA_PASSWORD = get_credential("AURA_PASSWORD") or get_credential("NEO4J_PASSWORD")
+
+# Check if credentials are available
+if not all([AURA_CONNECTION_URI, AURA_USERNAME, AURA_PASSWORD]):
+    missing = []
+    if not AURA_CONNECTION_URI:
+        missing.append("AURA_CONNECTION_URI/NEO4J_URI")
+    if not AURA_USERNAME:
+        missing.append("AURA_USERNAME/NEO4J_USERNAME")
+    if not AURA_PASSWORD:
+        missing.append("AURA_PASSWORD/NEO4J_PASSWORD")
+
+    error_message = f"Missing Neo4j credentials: {', '.join(missing)}"
+    print(f"ERROR: {error_message}")
+    raise ValueError(error_message)
+
+# Connect to Neo4j
+try:
+    graph = Neo4jGraph(
+        url=AURA_CONNECTION_URI,
+        username=AURA_USERNAME,
+        password=AURA_PASSWORD,
+    )
+    print("Successfully connected to Neo4j database")
+except Exception as e:
+    error_message = f"Failed to connect to Neo4j: {str(e)}"
+    print(f"ERROR: {error_message}")
+    raise Exception(error_message)
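The prefix-swapping fallback in `get_credential` can be exercised in isolation. A sketch that looks names up in a plain dict rather than `os.environ` (the extra `env` parameter is an assumption for testability, not part of the module's API):

```python
def get_credential(key_name, env):
    """Return the first value found under key_name or its AURA_/NEO4J_ prefix-swapped twin."""
    possible_names = [key_name]
    if key_name.startswith("AURA_"):
        possible_names.append(f"NEO4J_{key_name[5:]}")
    elif key_name.startswith("NEO4J_"):
        possible_names.append(f"AURA_{key_name[6:]}")
    for name in possible_names:
        value = env.get(name)
        if value:
            return value
    return None

env = {"NEO4J_URI": "neo4j+s://example"}
# Swapping AURA_CONNECTION_URI gives NEO4J_CONNECTION_URI, not NEO4J_URI, so this misses:
print(get_credential("AURA_CONNECTION_URI", env))  # prints None
# ...which is why the module also falls back with `or get_credential("NEO4J_URI")`:
print(get_credential("NEO4J_URI", env))  # prints neo4j+s://example
```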
gradio_llm.py
ADDED
@@ -0,0 +1,43 @@
+"""
+This module initializes the language model and embedding model without Streamlit dependencies.
+"""
+
+import os
+from dotenv import load_dotenv
+from langchain_openai import ChatOpenAI, OpenAIEmbeddings
+
+# Load environment variables
+load_dotenv()
+
+# Get API keys from environment
+def get_api_key(key_name):
+    """Get API key from environment variables"""
+    return os.environ.get(key_name)
+
+OPENAI_API_KEY = get_api_key("OPENAI_API_KEY")
+OPENAI_MODEL = get_api_key("OPENAI_MODEL") or "gpt-4-turbo"
+
+if not OPENAI_API_KEY:
+    error_message = "OPENAI_API_KEY is not set in environment variables."
+    print(f"ERROR: {error_message}")
+    raise ValueError(error_message)
+
+# Create the LLM with better error handling
+try:
+    llm = ChatOpenAI(
+        openai_api_key=OPENAI_API_KEY,
+        model=OPENAI_MODEL,
+        temperature=0.1,
+        streaming=True  # Enable streaming for better response handling
+    )
+
+    # Create the Embedding model
+    embeddings = OpenAIEmbeddings(
+        openai_api_key=OPENAI_API_KEY
+    )
+
+    print(f"Successfully initialized OpenAI models (using {OPENAI_MODEL})")
+except Exception as e:
+    error_message = f"Failed to initialize OpenAI models: {str(e)}"
+    print(f"ERROR: {error_message}")
+    raise Exception(error_message)
gradio_requirements.txt
ADDED
@@ -0,0 +1,12 @@
+gradio>=4.0.0
+langchain>=0.1.0
+langchain-openai>=0.0.5
+langchain-core>=0.1.15
+langchain-neo4j>=0.1.1
+openai>=1.2.0
+neo4j>=5.14.0
+python-dotenv>=1.0.0
+uuid>=1.30
+zep-cloud>=0.1.0
+asyncio>=3.4.3
+pandas>=2.0.0
gradio_utils.py
ADDED
@@ -0,0 +1,53 @@
+"""
+Utility functions for the Gradio-based chatbot application.
+"""
+
+import uuid
+
+# Global state for session and user IDs
+_session_id = None
+_user_id = None
+
+def get_session_id():
+    """
+    Get the current session ID.
+    Creates a new ID if one doesn't exist.
+    """
+    global _session_id
+    if _session_id is None:
+        _session_id = str(uuid.uuid4())
+    return _session_id
+
+def get_user_id():
+    """
+    Get the current user ID.
+    Creates a new ID if one doesn't exist.
+    """
+    global _user_id
+    if _user_id is None:
+        _user_id = str(uuid.uuid4())
+    return _user_id
+
+def reset_ids():
+    """
+    Reset both session and user IDs.
+    Useful for testing or when starting a new session.
+    """
+    global _session_id, _user_id
+    _session_id = None
+    _user_id = None
+
+def format_source_documents(source_documents):
+    """
+    Format source documents for display.
+    """
+    if not source_documents:
+        return None
+
+    formatted_docs = []
+    for i, doc in enumerate(source_documents):
+        if hasattr(doc, 'metadata') and doc.metadata:
+            source = doc.metadata.get('source', 'Unknown')
+            formatted_docs.append(f"Source {i+1}: {source}")
+
+    return "\n".join(formatted_docs) if formatted_docs else None
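The lazily-created module-level IDs in `gradio_utils.py` are a memoized-singleton pattern: one ID per Python process until `reset_ids()` is called, which is why GRADIO_README.md notes that session IDs are global rather than per-visitor. A condensed sketch of just the session-ID behavior:

```python
import uuid

_session_id = None  # module-level cache, shared by every caller in the process

def get_session_id():
    """Create the ID on first call, then keep returning the same one."""
    global _session_id
    if _session_id is None:
        _session_id = str(uuid.uuid4())
    return _session_id

def reset_ids():
    """Discard the cached ID so the next call mints a fresh one."""
    global _session_id
    _session_id = None

first = get_session_id()
assert get_session_id() == first   # stable across repeated calls
reset_ids()
assert get_session_id() != first   # reset yields a brand-new ID
```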