Upload 4 files
Basic App with chat interface.
README.md
CHANGED
# GPT-Style Chat Assistant

A sophisticated Streamlit app that mimics the ChatGPT interface, using OpenAI's GPT models for advanced reasoning capabilities.

## Features

- **ChatGPT-like interface** with conversation sidebar
- **Multiple conversation management**:
  - Create new chats
  - Switch between conversations
  - Delete old conversations
- **Conversation history** preserved for as long as the browser session stays open
- **Demo mode** that works without an API key
- **Responsive design** that adapts to different screen sizes

## Setup

1. Clone this repository
2. Install the required packages:
   ```
   pip install -r requirements.txt
   ```
3. (Required for full functionality) Get an OpenAI API key:
   - Create an account at [OpenAI](https://platform.openai.com/signup)
   - Go to [API Keys](https://platform.openai.com/account/api-keys)
   - Create a new API key
   - Add it to a `.env` file in the root directory:
     ```
     OPENAI_API_KEY=your_api_key_here
     ```

## Running Locally

Run the app with the following command:

```
streamlit run app.py
```

If the `streamlit` command isn't found on your PATH, run Streamlit through the Python interpreter instead:

```
python -m streamlit run app.py
```

Or use the provided runner script (see the sketch below):

```
python run_app.py
```

This will start the app and open it in your default browser.

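A note on `run_app.py`: the script itself is not part of this four-file upload, so its exact contents may differ. A minimal sketch, assuming it simply launches Streamlit on `app.py` with the current interpreter, could look like this:

```python
# run_app.py -- hypothetical sketch; the real runner script is not included in this upload.
import subprocess
import sys

if __name__ == "__main__":
    # Equivalent to running: python -m streamlit run app.py
    subprocess.run([sys.executable, "-m", "streamlit", "run", "app.py"], check=True)
```
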
## Demo Mode

The app includes a demo mode that works without an API key. In this mode:
- The assistant will provide pre-defined responses to common questions
- You'll see a text field in the sidebar where you can optionally enter an OpenAI API key
- For a full chatbot experience with reasoning capabilities, it's recommended to add your OpenAI API key

## Deploying to Hugging Face Spaces

1. Create a new Space on [Hugging Face Spaces](https://huggingface.co/spaces)
2. Choose Streamlit as the SDK
3. Link your GitHub repository or upload the files directly
4. Add your OpenAI API key as a secret:
   - Go to Settings > Repository Secrets
   - Add a secret named `OPENAI_API_KEY` with your key

The app will automatically deploy and be available at your Hugging Face Spaces URL.

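Spaces expose repository secrets to the running app as environment variables, so no code change should be needed once the secret is set; the existing lookup in `app.py` picks it up. A minimal sketch of that pattern:

```python
import os

# On Spaces, the OPENAI_API_KEY secret is injected as an environment variable;
# locally it comes from the .env file loaded via python-dotenv.
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    print("No API key found; the app will fall back to demo mode.")
```
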
## About the Model

This app uses OpenAI's `gpt-3.5-turbo` model, a general-purpose chat model with solid reasoning capabilities. You'll need an OpenAI API key to access it.

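To verify that your key works outside the app, a minimal standalone check (a hypothetical `check_key.py`, using the v1 `openai` Python SDK listed in `requirements.txt`) might look like this:

```python
# check_key.py -- hypothetical helper script, not part of this upload.
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads OPENAI_API_KEY from .env if present
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=20,
)
print(reply.choices[0].message.content)
```
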
## Customization

You can easily customize the app by:
- Changing the model in the `get_ai_response` function (see the sketch below)
- Adjusting the styling in the CSS section
- Adding more features to the sidebar
- Creating additional response templates for the demo mode

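Swapping the model is a one-line change to the `client.chat.completions.create` call inside `get_ai_response` in `app.py`; the replacement model name below is only an illustration, so use whichever chat model your key has access to:

```python
# Inside get_ai_response() in app.py: change the `model` argument to switch models.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example replacement for "gpt-3.5-turbo"
    messages=messages,
    temperature=0.7,
    max_tokens=500,
)
```
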
Spacefile
ADDED
# Spacefile Docs: https://huggingface.co/docs/hub/spaces-config-reference
title: "GPT-Style Chat Assistant"
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: streamlit
sdk_version: 1.27.0
app_file: app.py
pinned: false
app.py
ADDED
import streamlit as st
import requests
from dotenv import load_dotenv
import os
import sys
import time
import random
import uuid
from datetime import datetime
import openai

# Load environment variables
load_dotenv()

# Set page config with a wider layout
st.set_page_config(
    page_title="GPT-Style Chat Assistant",
    page_icon="🤖",
    layout="wide"
)

# Add custom CSS for better styling
st.markdown("""
<style>
.main-content {
    max-width: 800px;
    margin: 0 auto;
    padding: 1rem;
}
.chat-message {
    padding: 1.5rem;
    border-radius: 0.5rem;
    margin-bottom: 1rem;
    display: flex;
    flex-direction: column;
}
.user-message {
    background-color: #f0f2f6;
}
.assistant-message {
    background-color: #e6f3f7;
}
.chat-input {
    position: fixed;
    bottom: 0;
    width: 100%;
    padding: 1rem;
    background-color: white;
}
.sidebar-conv {
    padding: 0.5rem 1rem;
    border-radius: 0.5rem;
    margin-bottom: 0.5rem;
    cursor: pointer;
    transition: background-color 0.3s;
}
.sidebar-conv:hover {
    background-color: #f0f2f6;
}
.active-conv {
    background-color: #e6f3f7;
    font-weight: bold;
}
</style>
""", unsafe_allow_html=True)

# Initialize session state
if "conversations" not in st.session_state:
    st.session_state.conversations = {}

if "current_conversation_id" not in st.session_state:
    new_id = str(uuid.uuid4())
    st.session_state.current_conversation_id = new_id
    st.session_state.conversations[new_id] = {
        "title": f"New chat {datetime.now().strftime('%H:%M')}",
        "messages": []
    }

# Get OpenAI API key from environment or let user enter it
openai_api_key = os.getenv("OPENAI_API_KEY")

# Configure OpenAI client
if openai_api_key:
    client = openai.OpenAI(api_key=openai_api_key)

# Function to call OpenAI API
def get_ai_response(prompt, history):
    # Use demo mode if no API key is provided
    if not openai_api_key:
        return get_demo_response(prompt)

    try:
        # Format messages for OpenAI
        messages = []

        # Add system message
        messages.append({
            "role": "system",
            "content": "You are a helpful assistant that provides clear, concise, and accurate information."
        })

        # Add conversation history
        for msg in history:
            messages.append({
                "role": msg["role"],
                "content": msg["content"]
            })

        # Add the current prompt
        messages.append({
            "role": "user",
            "content": prompt
        })

        # Call OpenAI API
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
            temperature=0.7,
            max_tokens=500,
            stream=False
        )

        # Extract the response
        return response.choices[0].message.content

    except Exception as e:
        st.error(f"An error occurred: {str(e)}")
        return "I'm sorry, I encountered an error while processing your request. Please check your OpenAI API key or try again later."

# Demo mode responses for when no API key is available
def get_demo_response(prompt):
    prompt_lower = prompt.lower()

    # Simple response templates
    greetings = [
        "Hello! How can I assist you today?",
        "Hi there! I'm a demo AI assistant. What can I help you with?",
        "Greetings! I'm running in demo mode. Feel free to ask simple questions."
    ]

    farewells = [
        "Goodbye! Have a great day!",
        "Farewell! Come back soon!",
        "Take care! It was nice chatting with you."
    ]

    info_responses = [
        "I'm a simple AI assistant running in demo mode. To use the full features, please provide an OpenAI API key.",
        "This is a demo version with limited capabilities. For a better experience, add your OpenAI API key.",
        "I'm just demonstrating basic functionality. Get an API key from OpenAI to unlock my full potential!"
    ]

    reasoning_examples = [
        "This is a demonstration of how I would process a reasoning task. In a real scenario with the full model, I would analyze the problem step by step, consider multiple angles, and provide a detailed explanation.",
        "When solving problems, I would typically break them down into smaller parts, examine each component, and build towards a comprehensive solution. This demo just simulates that process.",
        "Reasoning typically involves identifying key facts, applying logical rules, and drawing conclusions based on available information. With a proper API key, I could demonstrate this more effectively."
    ]

    # Simple pattern matching
    if any(word in prompt_lower for word in ["hello", "hi", "hey", "greetings"]):
        return random.choice(greetings)
    elif any(word in prompt_lower for word in ["bye", "goodbye", "farewell", "see you"]):
        return random.choice(farewells)
    elif any(phrase in prompt_lower for phrase in ["who are you", "what are you", "tell me about yourself", "what can you do"]):
        return random.choice(info_responses)
    elif any(word in prompt_lower for word in ["think", "reason", "analyze", "solve", "explain", "why", "how"]):
        return random.choice(reasoning_examples)
    elif "weather" in prompt_lower:
        return "I'm sorry, I don't have access to real-time weather data in demo mode."
    elif any(word in prompt_lower for word in ["help", "assist", "support"]):
        return "To get better assistance, please add your OpenAI API key. You can get one at https://platform.openai.com/account/api-keys."
    else:
        return "I'm running in demo mode with limited responses. For a full conversation experience, please add your OpenAI API key in the sidebar."

# Function to create a new conversation
def create_new_chat():
    new_id = str(uuid.uuid4())
    st.session_state.current_conversation_id = new_id
    st.session_state.conversations[new_id] = {
        "title": f"New chat {datetime.now().strftime('%H:%M')}",
        "messages": []
    }

# Function to update conversation title based on first message
def update_conversation_title(conv_id, user_message):
    current_title = st.session_state.conversations[conv_id]["title"]
    if current_title.startswith("New chat"):
        # Limit title length to prevent overflow
        new_title = user_message[:30] + "..." if len(user_message) > 30 else user_message
        st.session_state.conversations[conv_id]["title"] = new_title

# Function to delete a conversation
def delete_conversation(conv_id):
    if conv_id in st.session_state.conversations:
        del st.session_state.conversations[conv_id]
        # If we deleted the current conversation, set a new one
        if conv_id == st.session_state.current_conversation_id:
            if st.session_state.conversations:
                st.session_state.current_conversation_id = next(iter(st.session_state.conversations))
            else:
                create_new_chat()

# Create a two-column layout
sidebar, main_content = st.columns([1, 3])

# Sidebar (conversation history)
with sidebar:
    st.sidebar.title("Conversations")

    # Add a new chat button
    if st.sidebar.button("+ New Chat", use_container_width=True):
        create_new_chat()
        st.rerun()

    st.sidebar.markdown("---")

    # API token input in sidebar if not available
    if not openai_api_key:
        st.sidebar.info("⚠️ No OpenAI API key found.", icon="ℹ️")
        entered_token = st.sidebar.text_input("Enter OpenAI API Key", type="password")
        if entered_token:
            openai_api_key = entered_token
            client = openai.OpenAI(api_key=openai_api_key)

        st.sidebar.markdown("---")

    # Display conversation history
    for conv_id, conv_data in st.session_state.conversations.items():
        col1, col2 = st.sidebar.columns([4, 1])
        is_active = conv_id == st.session_state.current_conversation_id

        with col1:
            if st.button(
                conv_data["title"],
                key=f"conv_{conv_id}",
                use_container_width=True,
                type="primary" if is_active else "secondary"
            ):
                st.session_state.current_conversation_id = conv_id
                st.rerun()

        with col2:
            if st.button("🗑️", key=f"del_{conv_id}"):
                delete_conversation(conv_id)
                st.rerun()

# Main content area
with main_content:
    st.write("")  # Add some space at the top

    # Get current conversation
    current_id = st.session_state.current_conversation_id
    current_conv = st.session_state.conversations.get(current_id, {"messages": []})
    messages = current_conv["messages"]

    # Create a container for the chat area (scrollable)
    chat_container = st.container()

    # Display chat messages
    with chat_container:
        for message in messages:
            with st.chat_message(message["role"]):
                st.markdown(message["content"])

    # Chat input at the bottom
    prompt = st.chat_input("What's on your mind?")

    if prompt:
        # Add user message to the current conversation
        messages.append({"role": "user", "content": prompt})

        # Update conversation title if this is the first message
        if len(messages) == 1:
            update_conversation_title(current_id, prompt)

        # Display user message
        with st.chat_message("user"):
            st.markdown(prompt)

        # Display assistant response with typing animation
        with st.chat_message("assistant"):
            message_placeholder = st.empty()

            # Get response from AI
            full_response = get_ai_response(prompt, messages[:-1])

            # Simulate typing by revealing the response one character at a time
            displayed_response = ""
            for ch in full_response:
                displayed_response += ch
                message_placeholder.markdown(displayed_response + "▌")
                time.sleep(0.005)  # Slightly faster typing

            # Display final response
            message_placeholder.markdown(full_response)

        # Add assistant response to the conversation
        messages.append({"role": "assistant", "content": full_response})

        # Force a rerun to update the sidebar with the new conversation title
        st.rerun()
requirements.txt
ADDED
streamlit>=1.27.0
requests>=2.31.0
python-dotenv>=1.0.0
openai>=1.0.0