Upload folder using huggingface_hub
- README.md +24 -0
- standalone/horizontal.svg +2 -0
- standalone/index.html +142 -0
- standalone/new-indicator.svg +12 -0
- standalone/script.js +644 -0
- standalone/send.svg +4 -0
- standalone/server/main.py +405 -0
- standalone/settings.svg +5 -0
- standalone/style.css +709 -0
- standalone/vertical.svg +2 -0
README.md
CHANGED
@@ -251,3 +251,27 @@ $ python main.py # or gradio main.py
 
 # Acknowledgments
 This is a project built during the Vertex sprints held by Google's ML Developer Programs team. We are thankful to be granted good amount of GCP credits to do this project.
+# AdaptSum
+
+AdaptSum stands for Adaptive Summarization. This project focuses on developing an LLM-powered system for dynamic summarization. Instead of generating entirely new summaries with each update, the system intelligently identifies and modifies only the necessary parts of the existing summary. This approach aims to create a more efficient and fluid summarization process within a continuous chat interaction with an LLM.
+
+# Instructions
+
+1. Install dependencies
+```shell
+$ pip install requirements.txt
+```
+
+2. Setup Gemini API Key
+```shell
+$ export GEMINI_API_KEY=xxxxx
+```
+> note that GEMINI API KEY should be obtained from Google AI Studio. Vertex AI is not supported at the moment (this is because Gemini SDK does not provide file uploading functionality for Vertex AI usage now).
+
+3. Run Gradio app
+```shell
+$ python main.py # or gradio main.py
+```
+
+# Acknowledgments
+This is a project built during the Vertex sprints held by Google's ML Developer Programs team. We are thankful to be granted good amount of GCP credits to do this project.
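The README above describes the adaptive-summarization loop only in prose. As a rough illustration (not part of this commit), the sketch below shows how a client might ask the `/summarize_openai` endpoint defined in `standalone/server/main.py` further down to revise an existing summary using only the newest turns. The local URL matches what the frontend uses; the helper name and payload values are assumptions for the example.

```python
# Hypothetical client sketch: send the previous summary plus only the latest
# turns to the /summarize_openai endpoint and get back a revised summary.
# Assumes the FastAPI server from standalone/server/main.py is running on
# http://127.0.0.1:8000 (the address script.js also targets).
import httpx

def update_summary(previous_summary: str, latest_conversation: str) -> str:
    payload = {
        "previous_summary": previous_summary,        # summary produced so far
        "latest_conversation": latest_conversation,  # only the new user/assistant turns
        "persona": "professional",
        "temperature": 0.7,
        "max_tokens": 1024,
        "model": "gpt-4o-mini",
    }
    resp = httpx.post("http://127.0.0.1:8000/summarize_openai", json=payload, timeout=60.0)
    resp.raise_for_status()
    return resp.json()["summary"]

if __name__ == "__main__":
    updated = update_summary(
        "# Chat Summary\n\n- The user asked about FastAPI streaming.",
        "user: How do I enable CORS?\nassistant: Add CORSMiddleware to the app.",
    )
    print(updated)
```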
standalone/horizontal.svg
ADDED
standalone/index.html
ADDED
@@ -0,0 +1,142 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>Chat UI with Streaming ChatGPT API, Markdown Messages, Layout Toggle, Summary & Settings</title>
  <link href="https://fonts.googleapis.com/css2?family=Poppins:wght@400;500&display=swap" rel="stylesheet">
  <!-- Include marked.js for Markdown rendering -->
  <script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div class="app-container">
    <!-- Left Navigation Bar -->
    <div class="nav-bar" id="navBar">
      <div class="nav-header">
        <!-- Hamburger & New Chat will appear here when nav is NOT collapsed -->
        <button id="hamburgerBtn">☰</button>
        <button class="new-session-btn" id="newSessionBtn">
          <img src="new-indicator.svg" alt="Icon" class="svg-icon">
        </button>
        <button id="toggleLayoutBtn"><img src="vertical-layout.svg" alt="Icon" class="svg-icon"></button>
      </div>
      <!-- <h3>Chat History</h3> -->
      <ul id="sessionList"></ul>
    </div>

    <!-- Main Chat Wrapper -->
    <div class="chat-wrapper">
      <!-- Chat Header -->
      <div class="chat-header">
        <!-- Left portion for hamburger & new chat when collapsed -->
        <div class="header-left" id="headerLeft"></div>

        <!-- Center portion for the chat title -->
        <div class="chat-title-controls">
          <h2 id="chatTitle">Chat Session 1</h2>
          <button id="editTitleBtn">Edit Title</button>
        </div>

        <!-- Right portion for turn label -->
        <div id="turnLabel">Turn: 0 / 0</div>
      </div>

      <!-- Carousel Wrapper -->
      <div class="carousel-wrapper" id="carouselWrapper">
        <button id="prevBtn" class="nav">
          <svg width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="#444" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
            <polyline points="15 18 9 12 15 6"></polyline>
          </svg>
        </button>
        <div class="carousel" id="carousel"></div>
        <button id="nextBtn" class="nav">
          <svg width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="#444" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
            <polyline points="9 18 15 12 9 6"></polyline>
          </svg>
        </button>
      </div>

      <!-- Input Section -->
      <div class="input-container">
        <div class="button-row">
          <button id="customBtn1">TLDR</button>
          <button id="customBtn2"><img src="settings.svg" alt="Icon" class="svg-icon"></button>
        </div>
        <div id="fileAttachments" class="file-attachments"></div>
        <div class="input-row">
          <button id="attachBtn" class="attach-button">+</button>
          <input type="file" id="fileInput" multiple accept="image/*,.pdf">
          <textarea id="chatInput" placeholder="Ask Anything"></textarea>
          <button id="sendBtn">
            <img src="send.svg" alt="Icon" class="svg-icon-non-white">
          </button>
        </div>
      </div>
    </div>
  </div>

  <!-- Summary Overlay -->
  <div id="summaryOverlay">
    <div class="summary-header">
      <span>Chat Summary</span>
      <div class="summary-header-buttons">
        <button id="downloadSummary" class="download-summary">Download</button>
        <button class="close-summary" id="closeSummaryBtn">×</button>
      </div>
    </div>
    <div class="summary-content" id="summaryContent"></div>
  </div>

  <!-- Settings Overlay -->
  <div id="settingsOverlay">
    <div class="settings-header">
      <span>Settings</span>
      <button class="close-settings" id="closeSettingsBtn">×</button>
    </div>
    <div class="settings-content">
      <form id="settingsForm">
        <div class="settings-group">
          <label for="temperature">Temperature:</label>
          <input type="range" id="temperature" name="temperature" min="0" max="1" step="0.01">
          <span id="temperatureValue"></span>
        </div>
        <div class="settings-group">
          <label for="maxTokens">Max Tokens:</label>
          <input type="number" id="maxTokens" name="maxTokens" min="10" max="2048">
        </div>
        <div class="settings-section">
          <h3>Model Selection</h3>
          <div class="settings-group">
            <label for="modelSelect">Model:</label>
            <select id="modelSelect" name="modelSelect">
              <optgroup label="OpenAI">
                <option value="gpt-4o">gpt-4o</option>
                <option value="gpt-4o-mini">gpt-4o-mini</option>
              </optgroup>
              <optgroup label="Anthropic">
                <option value="claude-3.5-sonnet">claude-3.5-sonnet</option>
                <option value="claude-3.7-sonnet">claude-3.7-sonnet</option>
              </optgroup>
              <optgroup label="Google">
                <option value="gemini-2.0-flash">Gemini 2.0 Flash</option>
                <option value="gemini-2.0-flash-lite">Gemini 2.0 Flash Lite</option>
              </optgroup>
            </select>
          </div>
        </div>
        <div class="settings-group">
          <label for="persona">Persona:</label>
          <select id="persona" name="persona">
            <option value="professional">Professional</option>
            <option value="friendly">Friendly</option>
          </select>
        </div>
        <button type="button" id="saveSettings" class="save-settings">Save Settings</button>
      </form>
    </div>
  </div>

  <script src="script.js"></script>
</body>
</html>
standalone/new-indicator.svg
ADDED
standalone/script.js
ADDED
@@ -0,0 +1,644 @@
// ----------------- ChatGPT API (Stream Mode) -----------------
// Define a constant for the model name so it is used in both API call and UI.
const MODEL_NAME = "gpt-4o-mini";

// Modified to scroll the current card to the bottom after updating the message.
function updateLastMessage(content) {
  const session = sessions[currentSessionIndex];
  session.messages[session.messages.length - 1].aiResponse = content;
  renderCurrentSession();
  // Auto-scroll the current card to the bottom:
  const cards = document.querySelectorAll('.card');
  if (cards[currentCardIndex]) {
    cards[currentCardIndex].scrollTop = cards[currentCardIndex].scrollHeight;
  }
}

// ----------------- Layout and Navigation -----------------
let isTraditionalLayout = false;
const toggleLayoutBtn = document.getElementById('toggleLayoutBtn');
const carouselWrapper = document.getElementById('carouselWrapper');
function updateLayout() {
  if (isTraditionalLayout) {
    carousel.classList.add('traditional');
    carouselWrapper.classList.add('traditional-mode');
    prevBtn.style.display = 'none';
    nextBtn.style.display = 'none';
    toggleLayoutBtn.innerHTML = `<img src="vertical.svg" alt="Icon" class="svg-icon">`;
  } else {
    carousel.classList.remove('traditional');
    carouselWrapper.classList.remove('traditional-mode');
    prevBtn.style.display = '';
    nextBtn.style.display = '';
    toggleLayoutBtn.innerHTML = `<img src="horizontal.svg" alt="Icon" class="svg-icon">`;
  }
  updateTurnLabel(sessionMessagesCount());
}
toggleLayoutBtn.addEventListener('click', function() {
  isTraditionalLayout = !isTraditionalLayout;
  updateLayout();
});

// This function will move hamburger + new chat button between nav-bar and the chat header
function updateHamburgerPosition() {
  const hamburgerBtn = document.getElementById('hamburgerBtn');
  const newSessionBtn = document.getElementById('newSessionBtn');
  const toggleLayoutBtn = document.getElementById('toggleLayoutBtn');
  const navBar = document.getElementById('navBar');
  const navHeader = navBar.querySelector('.nav-header');
  const headerLeft = document.getElementById('headerLeft');
  if (navBar.classList.contains('hidden')) {
    // Move both hamburger and new chat to header-left
    headerLeft.appendChild(hamburgerBtn);
    headerLeft.appendChild(newSessionBtn);
    headerLeft.appendChild(toggleLayoutBtn);
  } else {
    // Move both back to navHeader
    navHeader.appendChild(hamburgerBtn);
    navHeader.appendChild(newSessionBtn);
    navHeader.appendChild(toggleLayoutBtn);
  }
}

// ----------------- Session Management -----------------
let sessions = [];
let currentSessionIndex = 0;
let currentCardIndex = 0;
function initSessions() {
  sessions.push({
    id: Date.now(),
    name: "Chat Session 1",
    title: "Chat Session 1",
    messages: [],
    summary: "# Chat Summary\n\nThis is the default summary for Chat Session 1.",
    settings: {
      temperature: 0.7,
      maxTokens: 256,
      persona: "professional",
      model: "gpt-4o-mini" // <-- new property
    }
  });
  currentSessionIndex = 0;
  currentCardIndex = 0;
  renderSessionList();
  renderCurrentSession();
}
function renderSessionList() {
  const sessionList = document.getElementById('sessionList');
  sessionList.innerHTML = "";
  sessions.forEach((session, index) => {
    const li = document.createElement('li');
    const nameSpan = document.createElement('span');
    nameSpan.textContent = session.name;
    li.appendChild(nameSpan);
    const removeBtn = document.createElement('button');
    removeBtn.textContent = "𝘅";
    removeBtn.className = "remove-session";
    removeBtn.addEventListener('click', (e) => {
      e.stopPropagation();
      removeSession(index);
    });
    li.appendChild(removeBtn);
    li.addEventListener('click', () => {
      currentSessionIndex = index;
      currentCardIndex = 0;
      renderSessionList();
      renderCurrentSession();
    });
    if (index === currentSessionIndex) li.classList.add('active');
    sessionList.appendChild(li);
  });
}
document.getElementById('newSessionBtn').addEventListener('click', () => {
  const newSession = {
    id: Date.now(),
    name: "Chat Session " + (sessions.length + 1),
    title: "Chat Session " + (sessions.length + 1),
    messages: [],
    summary: "# Chat Summary\n\nThis is the default summary for Chat Session " + (sessions.length + 1) + ".",
    settings: {
      temperature: 0.7,
      maxTokens: 256,
      persona: "professional",
      model: "gpt-4o-mini" // <-- default model
    }
  };

  sessions.push(newSession);
  currentSessionIndex = sessions.length - 1;
  currentCardIndex = 0;
  renderSessionList();
  renderCurrentSession();
});
function removeSession(index) {
  sessions.splice(index, 1);
  if (sessions.length === 0) {
    initSessions();
  } else {
    if (currentSessionIndex >= sessions.length) {
      currentSessionIndex = sessions.length - 1;
    }
    currentCardIndex = 0;
  }
  renderSessionList();
  renderCurrentSession();
}
// ----------------- Carousel Rendering / Traditional Layout -----------------
const carousel = document.getElementById('carousel');
function renderCurrentSession() {
  const session = sessions[currentSessionIndex];
  carousel.innerHTML = "";
  session.messages.forEach(message => {
    const card = document.createElement('div');
    card.className = 'card';
    let attachmentHTML = "";
    if (message.attachments && message.attachments.length > 0) {
      attachmentHTML = `
        <div class="vertical-file-list">
          ${message.attachments.map(name => `<div class="file-item-vertical">${name}</div>`).join("")}
        </div>
      `;
    }
    // Order: User message then AI message, rendered in Markdown
    card.innerHTML = `
      <div class="conversation">
        <div class="message user">
          ${attachmentHTML}
          <div class="message-text markdown-body">${marked.parse(message.userText)}</div>
        </div>
        <div class="message ai">
          <div class="message-text markdown-body">${marked.parse(message.aiResponse)}</div>
          <div class="ai-status">${message.model || session.settings.model}</div>
        </div>
      </div>
    `;
    carousel.appendChild(card);
    processMessagesInContainer(card);
  });
  currentCardIndex = session.messages.length > 0 ? session.messages.length - 1 : 0;
  updateCarousel();
  updateLayout();
  document.getElementById('chatTitle').textContent = session.title;
}
function updateCarousel() {
  if (!isTraditionalLayout) {
    const cards = document.querySelectorAll('.card');
    carousel.style.transform = `translateX(-${currentCardIndex * 100}%)`;
  }
  updateTurnLabel(sessionMessagesCount());
}
function sessionMessagesCount() {
  return sessions[currentSessionIndex].messages.length;
}
function updateTurnLabel(totalCards) {
  const turnLabel = document.getElementById('turnLabel');
  if (isTraditionalLayout) {
    turnLabel.textContent = `Turn: ${totalCards} / ${totalCards}`;
  } else {
    turnLabel.textContent = totalCards ? `Turn: ${currentCardIndex + 1} / ${totalCards}` : "Turn: 0 / 0";
  }
}
// ----------------- Message Processing -----------------
function processMessage(messageEl) {
  if (messageEl.dataset.processed) return;
  const textEl = messageEl.querySelector('.message-text');
  // Only add "Read more" toggle for user messages if text is long.
  if (messageEl.classList.contains('user') && textEl.scrollHeight > 80) {
    const toggleBtn = document.createElement('button');
    toggleBtn.className = 'toggle-btn';
    toggleBtn.textContent = 'Read more';
    toggleBtn.addEventListener('click', function() {
      if (messageEl.classList.contains('expanded')) {
        messageEl.classList.remove('expanded');
        toggleBtn.textContent = 'Read more';
      } else {
        messageEl.classList.add('expanded');
        toggleBtn.textContent = 'Read less';
      }
    });
    messageEl.appendChild(toggleBtn);
  }
  messageEl.dataset.processed = 'true';
}
function processMessagesInContainer(container) {
  container.querySelectorAll('.message').forEach(processMessage);
}
// ----------------- Adding Conversation & Stream API Call -----------------
const attachedFiles = [];
async function addConversation(userText) {
  if (userText.trim() === '' && attachedFiles.length === 0) return;

  // Store message with attachments
  sessions[currentSessionIndex].messages.push({
    userText,
    aiResponse: "",
    attachments: attachedFiles.map(file => file.name), // Store file names
    model: sessions[currentSessionIndex].settings.model
  });

  // Clear attachments after sending
  clearFileAttachments();

  renderCurrentSession();

  const conversation = [];
  sessions[currentSessionIndex].messages.forEach(msg => {
    conversation.push({ role: "user", content: msg.userText });
    if (msg.aiResponse) {
      conversation.push({ role: "assistant", content: msg.aiResponse });
    }
  });

  try {
    const aiResponse = await callLLMStream(conversation);
    sessions[currentSessionIndex].messages[sessions[currentSessionIndex].messages.length - 1].aiResponse = aiResponse;
    renderCurrentSession();
  } catch (err) {
    console.error(err);
    sessions[currentSessionIndex].messages[sessions[currentSessionIndex].messages.length - 1].aiResponse = "Error: " + err.message;
    renderCurrentSession();
  }
}

function clearFileAttachments() {
  attachedFiles.length = 0;
  updateFileAttachments();
}
// ----------------- Auto-resize Textarea -----------------
const chatInput = document.getElementById('chatInput');
chatInput.addEventListener('input', function() {
  this.style.height = 'auto';
  this.style.height = this.scrollHeight + 'px';
});
function resetTextarea() {
  chatInput.style.height = '36px';
}
// ----------------- Send Message -----------------
const sendBtn = document.getElementById('sendBtn');
sendBtn.addEventListener('click', async () => {
  const text = chatInput.value;
  if (text.trim() !== '') {
    await addConversation(text);
    chatInput.value = '';
    resetTextarea();
  }
});
chatInput.addEventListener('keydown', function(e) {
  if (e.key === 'Enter' && (e.ctrlKey || e.metaKey)) {
    e.preventDefault();
    sendBtn.click();
  }
});
const prevBtn = document.getElementById('prevBtn');
const nextBtn = document.getElementById('nextBtn');
prevBtn.addEventListener('click', () => {
  if (currentCardIndex > 0) {
    currentCardIndex--;
    updateCarousel();
  }
});
nextBtn.addEventListener('click', () => {
  const cards = document.querySelectorAll('.card');
  if (currentCardIndex < cards.length - 1) {
    currentCardIndex++;
    updateCarousel();
  }
});
// ----------------- File Attachment Handling -----------------
const attachBtn = document.getElementById('attachBtn');
const fileInput = document.getElementById('fileInput');
const fileAttachments = document.getElementById('fileAttachments');
attachBtn.addEventListener('click', () => {
  fileInput.click();
});
fileInput.addEventListener('change', () => {
  for (const file of fileInput.files) {
    attachedFiles.push(file);
  }
  fileInput.value = "";
  updateFileAttachments();
});
function updateFileAttachments() {
  fileAttachments.innerHTML = "";
  attachedFiles.forEach((file, index) => {
    const fileDiv = document.createElement("div");
    fileDiv.className = "file-item";
    fileDiv.innerHTML = `<span>${file.name}</span> <button data-index="${index}">×</button>`;
    fileAttachments.appendChild(fileDiv);
  });
  document.querySelectorAll(".file-item button").forEach(btn => {
    btn.addEventListener("click", (e) => {
      const idx = e.target.getAttribute("data-index");
      attachedFiles.splice(idx, 1);
      updateFileAttachments();
    });
  });
}
// ----------------- Summary Overlay -----------------
const summaryOverlay = document.getElementById('summaryOverlay');
const closeSummaryBtn = document.getElementById('closeSummaryBtn');
const summaryContent = document.getElementById('summaryContent');
const downloadSummaryBtn = document.getElementById('downloadSummary');
closeSummaryBtn.addEventListener('click', () => {
  summaryOverlay.classList.remove('active');
});
downloadSummaryBtn.addEventListener('click', () => {
  const blob = new Blob([sessions[currentSessionIndex].summary], { type: "text/markdown" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = "summary.md";
  a.click();
  URL.revokeObjectURL(url);
});
// ----------------- Settings Overlay -----------------
const settingsOverlay = document.getElementById('settingsOverlay');
const closeSettingsBtn = document.getElementById('closeSettingsBtn');
const temperatureInput = document.getElementById('temperature');
const temperatureValue = document.getElementById('temperatureValue');
const maxTokensInput = document.getElementById('maxTokens');
const personaSelect = document.getElementById('persona');
const saveSettingsBtn = document.getElementById('saveSettings');
closeSettingsBtn.addEventListener('click', () => {
  settingsOverlay.classList.remove('active');
});
temperatureInput.addEventListener('input', () => {
  temperatureValue.textContent = temperatureInput.value;
});
saveSettingsBtn.addEventListener('click', () => {
  sessions[currentSessionIndex].settings = {
    temperature: parseFloat(temperatureInput.value),
    maxTokens: parseInt(maxTokensInput.value),
    persona: personaSelect.value
  };
  console.log('Session settings saved:', sessions[currentSessionIndex].settings);
  settingsOverlay.classList.remove('active');
});
// ----------------- Title Editing -----------------
const editTitleBtn = document.getElementById('editTitleBtn');
editTitleBtn.addEventListener('click', () => {
  const currentTitle = sessions[currentSessionIndex].title;
  const newTitle = prompt("Enter new chat title:", currentTitle);
  if (newTitle !== null && newTitle.trim() !== "") {
    sessions[currentSessionIndex].title = newTitle.trim();
    document.getElementById('chatTitle').textContent = newTitle.trim();
  }
});
// ----------------- Auto-Dismiss Overlays -----------------
document.addEventListener('click', (e) => {
  if (summaryOverlay.classList.contains('active') && !summaryOverlay.contains(e.target)) {
    summaryOverlay.classList.remove('active');
  }
  if (settingsOverlay.classList.contains('active') && !settingsOverlay.contains(e.target)) {
    settingsOverlay.classList.remove('active');
  }
});
// ----------------- Global Keyboard Navigation -----------------
document.addEventListener('keydown', (e) => {
  if (document.activeElement !== chatInput) {
    if (e.key === 'ArrowLeft' && currentCardIndex > 0) {
      currentCardIndex--;
      updateCarousel();
    } else if (e.key === 'ArrowRight') {
      const cards = document.querySelectorAll('.card');
      if (currentCardIndex < cards.length - 1) {
        currentCardIndex++;
        updateCarousel();
      }
    }
  }
});
// ----------------- Hamburger Toggle -----------------
const hamburgerBtn = document.getElementById('hamburgerBtn');
const navBar = document.getElementById('navBar');
hamburgerBtn.addEventListener('click', (e) => {
  e.stopPropagation();
  navBar.classList.toggle('hidden');
  updateHamburgerPosition();
});

// ----------------- Custom Button Event Listeners -----------------
const customBtn1 = document.getElementById('customBtn1');
const customBtn2 = document.getElementById('customBtn2');

customBtn1.addEventListener('click', (e) => {
  e.stopPropagation();
  // Open the summary overlay for the current session
  document.getElementById('summaryContent').innerHTML = marked.parse(sessions[currentSessionIndex].summary);
  summaryOverlay.classList.add('active');
  settingsOverlay.classList.remove('active');
});

customBtn2.addEventListener('click', (e) => {
  e.stopPropagation();
  // Open the settings overlay for the current session and fill in the fields
  const settings = sessions[currentSessionIndex].settings;
  temperatureInput.value = settings.temperature;
  temperatureValue.textContent = settings.temperature;
  maxTokensInput.value = settings.maxTokens;
  personaSelect.value = settings.persona;
  settingsOverlay.classList.add('active');
  summaryOverlay.classList.remove('active');
});

// Get reference to the new select element
const modelSelect = document.getElementById('modelSelect');

function openSettingsForCurrentSession() {
  const settings = sessions[currentSessionIndex].settings;
  // Existing lines for temperature, maxTokens, persona...
  modelSelect.value = settings.model; // Populate the dropdown with the current model
}

// When opening the settings overlay:
customBtn2.addEventListener('click', (e) => {
  e.stopPropagation();
  // ...
  openSettingsForCurrentSession(); // load session settings into the UI
  settingsOverlay.classList.add('active');
  summaryOverlay.classList.remove('active');
});

// Saving:
saveSettingsBtn.addEventListener('click', () => {
  const sessionSettings = sessions[currentSessionIndex].settings;
  // Existing lines for temperature, maxTokens, persona...
  sessionSettings.model = modelSelect.value; // Save the selected model

  console.log('Session settings saved:', sessions[currentSessionIndex].settings);
  settingsOverlay.classList.remove('active');
});

async function callLLMStream(conversation) {
  const session = sessions[currentSessionIndex];
  const { model, temperature, maxTokens } = session.settings;

  if (model.startsWith("gpt-4o")) {
    // Call OpenAI endpoint
    return callOpenAIStream(conversation, model, temperature, maxTokens);
  } else if (model.startsWith("claude")) {
    // Call Anthropic endpoint
    return callAnthropicStream(conversation, model, temperature, maxTokens);
  } else if (model.startsWith("gemini")) {
    // Call Google endpoint
    return callGoogleStream(conversation, model, temperature, maxTokens);
  } else {
    throw new Error("Unsupported model: " + model);
  }
}

async function callOpenAIStream(conversation) {
  const response = await fetch("http://127.0.0.1:8000/openai_stream", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
      // Remove the Authorization header since the Python backend handles the API key.
    },
    body: JSON.stringify({
      conversation: conversation,
      temperature: sessions[currentSessionIndex].settings.temperature,
      max_tokens: sessions[currentSessionIndex].settings.maxTokens,
      model: MODEL_NAME
    })
  });
  const reader = response.body.getReader();
  const decoder = new TextDecoder("utf-8");
  let done = false;
  let aiMessage = "";
  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    const chunk = decoder.decode(value);
    const lines = chunk.split("\n").filter(line => line.trim().startsWith("data:"));
    for (const line of lines) {
      const dataStr = line.replace(/^data:\s*/, "");
      if (dataStr === "[DONE]") {
        done = true;
        break;
      }
      try {
        // Parse the JSON returned by the Python backend.
        const parsed = JSON.parse(dataStr);
        // Assuming the payload structure is the same as OpenAI's response.
        const delta = parsed.choices[0].delta.content;
        if (delta) {
          aiMessage += delta;
          updateLastMessage(aiMessage);
        }
      } catch (err) {
        console.error("Stream parsing error:", err);
      }
    }
  }
  return aiMessage;
}

async function callAnthropicStream(conversation, model, temperature, maxTokens) {
  model = model.toLowerCase().replace(/\s+/g, '-').replace(/\./g, '-');
  console.log(`Calling Anthropic API with model: ${model}`);

  const response = await fetch("http://127.0.0.1:8000/anthropic_stream", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      messages: conversation,
      temperature: temperature,
      max_tokens: maxTokens,
      model: model + "-latest"
    })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder("utf-8");
  let done = false;
  let aiMessage = "";

  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    const chunk = decoder.decode(value);
    const lines = chunk.split("\n").filter(line => line.trim().startsWith("data:"));

    for (const line of lines) {
      const dataStr = line.replace(/^data:\s*/, "");
      if (dataStr === "[DONE]") {
        done = true;
        break;
      }

      try {
        const parsed = JSON.parse(dataStr);
        const delta = parsed.choices[0].delta.content;
        if (delta) {
          aiMessage += delta;
          updateLastMessage(aiMessage);
        }
      } catch (err) {
        console.error("Anthropic stream parsing error:", err);
      }
    }
  }

  return aiMessage;
}

async function callGoogleStream(conversation, model, temperature, maxTokens) {
  // Convert conversation messages to Gemini's "contents" format.
  model = model.toLowerCase().replace(/\s+/g, '-');
  console.log(model);
  const response = await fetch("http://127.0.0.1:8000/gemini_stream", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      messages: conversation,
      temperature: temperature,
      max_tokens: maxTokens,
      model: model
    })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder("utf-8");
  let done = false;
  let aiMessage = "";

  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    const chunk = decoder.decode(value);
    const lines = chunk.split("\n").filter(line => line.trim().startsWith("data:"));

    for (const line of lines) {
      const dataStr = line.replace(/^data:\s*/, "");
      if (dataStr === "[DONE]") {
        done = true;
        break;
      }

      try {
        const parsed = JSON.parse(dataStr);
        const delta = parsed.choices[0].delta.content;
        if (delta) {
          aiMessage += delta;
          updateLastMessage(aiMessage);
        }
      } catch (err) {
        console.error("Gemini stream parsing error:", err);
      }
    }
  }

  return aiMessage;
}

// ----------------- Initialization -----------------
initSessions();
updateHamburgerPosition();
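The client above consumes each backend stream by reading `data: ...` lines and stopping at `data: [DONE]`. The following sketch (not part of this commit) exercises the same contract from Python against `/openai_stream`, assuming the FastAPI server from `standalone/server/main.py` is running locally on port 8000; the payload values are placeholders.

```python
# Hypothetical smoke test: read the SSE stream the same way script.js does.
import json
import httpx

def stream_reply(conversation):
    payload = {"conversation": conversation, "temperature": 0.7,
               "max_tokens": 256, "model": "gpt-4o-mini"}
    text = ""
    with httpx.stream("POST", "http://127.0.0.1:8000/openai_stream",
                      json=payload, timeout=None) as resp:
        for line in resp.iter_lines():
            if not line.startswith("data:"):
                continue  # skip keep-alives and blank separator lines
            data = line[len("data:"):].strip()
            if data == "[DONE]":
                break  # end-of-stream marker emitted by the server
            delta = json.loads(data)["choices"][0]["delta"].get("content", "")
            text += delta
    return text

if __name__ == "__main__":
    print(stream_reply([{"role": "user", "content": "Say hello in one word."}]))
```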
standalone/send.svg
ADDED
standalone/server/main.py
ADDED
@@ -0,0 +1,405 @@
import json
from fastapi import FastAPI, Request, HTTPException
from fastapi.responses import StreamingResponse, JSONResponse
from fastapi.middleware.cors import CORSMiddleware
import httpx

app = FastAPI()

# Allow all origins for testing (adjust for production)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Replace these with secure methods in production
import os

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
ANTHROPIC_API_KEY = os.environ.get("ANTHROPIC_API_KEY")
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")
MODEL_NAME = "gpt-4o-mini"

@app.post("/openai_stream")
async def openai_stream(request: Request):
    try:
        body = await request.json()
    except Exception as e:
        raise HTTPException(status_code=400, detail="Invalid JSON payload") from e

    conversation = body.get("conversation")
    if not conversation:
        raise HTTPException(status_code=400, detail="Missing 'conversation' in payload")

    temperature = body.get("temperature", 0.7)
    max_tokens = body.get("max_tokens", 256)
    model = body.get("model", MODEL_NAME)

    # Using OpenAI's SDK instead of direct API calls
    from openai import AsyncOpenAI

    # Initialize the client with the API key
    client = AsyncOpenAI(api_key=OPENAI_API_KEY)

    async def event_generator():
        try:
            print(f"Starting stream for model: {model}, temperature: {temperature}, max_tokens: {max_tokens}")
            line_count = 0

            # Use the SDK to create a streaming completion
            stream = await client.chat.completions.create(
                model=model,
                messages=conversation,
                temperature=temperature,
                max_tokens=max_tokens,
                stream=True
            )

            async for chunk in stream:
                if chunk.choices and chunk.choices[0].delta.content is not None:
                    content = chunk.choices[0].delta.content
                    line_count += 1
                    if line_count % 10 == 0:
                        print(f"Processed {line_count} stream chunks")

                    # Format the response in the same way as before
                    response_json = json.dumps({
                        "choices": [{"delta": {"content": content}}]
                    })
                    yield f"data: {response_json}\n\n"

            # Send the [DONE] marker
            print("Stream completed successfully")
            yield "data: [DONE]\n\n"

        except Exception as e:
            print(f"Error during streaming: {str(e)}")
            yield f"data: {{\"error\": \"{str(e)}\"}}\n\n"
        finally:
            print(f"Stream ended after processing {line_count if 'line_count' in locals() else 0} chunks")

    print("Returning StreamingResponse to client")
    return StreamingResponse(event_generator(), media_type="text/event-stream")

@app.post("/gemini_stream")
async def gemini_stream(request: Request):
    """
    Stream responses from Google's Gemini model using the Gemini SDK.
    """
    body = await request.json()
    conversation = body.get("messages", [])
    temperature = body.get("temperature", 0.7)
    max_tokens = body.get("max_tokens", 256)
    model = body.get("model", "gemini-pro")  # Default to gemini-pro model

    # Using Google's Generative AI SDK
    import google.generativeai as genai
    from google.generativeai.types import HarmCategory, HarmBlockThreshold

    # Initialize the client with the API key
    genai.configure(api_key=GOOGLE_API_KEY)

    # Convert OpenAI message format to Gemini format
    gemini_messages = []
    for msg in conversation:
        role = "user" if msg["role"] == "user" else "model"
        gemini_messages.append({"role": role, "parts": [msg["content"]]})

    async def event_generator():
        try:
            print(f"Starting Gemini stream for model: {model}, temperature: {temperature}, max_tokens: {max_tokens}")
            line_count = 0

            # Create a Gemini model instance
            gemini_model = genai.GenerativeModel(
                model_name=model,
                generation_config={
                    "temperature": temperature,
                    "max_output_tokens": max_tokens,
                    "top_p": 0.95,
                },
                safety_settings={
                    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
                    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
                    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
                    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
                }
            )

            # Start the streaming response
            response = gemini_model.generate_content(
                gemini_messages,
                stream=True
            )

            # Fix: Use synchronous iteration instead of async for
            for chunk in response:
                if hasattr(chunk, 'text') and chunk.text:
                    content = chunk.text
                    line_count += 1
                    if line_count % 10 == 0:
                        print(f"Processed {line_count} Gemini stream chunks")

                    # Format the response to match OpenAI format for client compatibility
                    response_json = json.dumps({
                        "choices": [{"delta": {"content": content}}]
                    })
                    yield f"data: {response_json}\n\n"

            # Send the [DONE] marker
            print("Gemini stream completed successfully")
            yield "data: [DONE]\n\n"

        except Exception as e:
            print(f"Error during Gemini streaming: {str(e)}")
            yield f"data: {{\"error\": \"{str(e)}\"}}\n\n"
        finally:
            print(f"Gemini stream ended after processing {line_count if 'line_count' in locals() else 0} chunks")

    print("Returning StreamingResponse from Gemini to client")
    return StreamingResponse(event_generator(), media_type="text/event-stream")

@app.post("/anthropic_stream")
async def anthropic_stream(request: Request):
    """
    Stream responses from Anthropic's Claude models.
    """
    print("Received request for Anthropic streaming")

    # Parse the request body
    body = await request.json()
    messages = body.get("messages", [])
    temperature = body.get("temperature", 0.7)
    max_tokens = body.get("max_tokens", 1024)
    model = body.get("model", "claude-3-opus-20240229")

    # Load Anthropic API key from environment
    anthropic_api_key = ANTHROPIC_API_KEY  # os.environ.get("ANTHROPIC_API_KEY")
    if not anthropic_api_key:
        return JSONResponse(
            status_code=500,
            content={"error": "ANTHROPIC_API_KEY not found in environment variables"}
        )

    # Convert messages to Anthropic format
    anthropic_messages = []
    for msg in messages:
        role = "assistant" if msg.get("role") == "assistant" else "user"
        content = msg.get("content", "")
        anthropic_messages.append({"role": role, "content": content})

    line_count = 0

    async def event_generator():
        try:
            import anthropic

            # Initialize Anthropic client
            client = anthropic.Anthropic(api_key=anthropic_api_key)

            # Start the streaming response
            with client.messages.stream(
                model=model,
                messages=anthropic_messages,
                max_tokens=max_tokens,
                temperature=temperature
            ) as stream:
                for chunk in stream:
                    if hasattr(chunk, 'delta') and hasattr(chunk.delta, 'text') and chunk.delta.text:
                        content = chunk.delta.text
                        nonlocal line_count
                        line_count += 1
                        if line_count % 10 == 0:
                            print(f"Processed {line_count} Anthropic stream chunks")

                        # Format the response to match OpenAI format for client compatibility
                        response_json = json.dumps({
                            "choices": [{"delta": {"content": content}}]
                        })
                        yield f"data: {response_json}\n\n"

            # Send the [DONE] marker
            print("Anthropic stream completed successfully")
            yield "data: [DONE]\n\n"

        except Exception as e:
            print(f"Error during Anthropic streaming: {str(e)}")
            yield f"data: {{\"error\": \"{str(e)}\"}}\n\n"
        finally:
            print(f"Anthropic stream ended after processing {line_count if 'line_count' in locals() else 0} chunks")

    print("Returning StreamingResponse from Anthropic to client")
    return StreamingResponse(event_generator(), media_type="text/event-stream")

@app.post("/summarize_openai")
async def summarize_openai(request: Request):
    try:
        body = await request.json()
    except Exception as e:
        raise HTTPException(status_code=400, detail="Invalid JSON payload") from e

    previous_summary = body.get("previous_summary", "")
    latest_conversation = body.get("latest_conversation", "")
    persona = body.get("persona", "helpful assistant")
    temperature = body.get("temperature", 0.7)
    max_tokens = body.get("max_tokens", 1024)
    model = body.get("model", MODEL_NAME)

    # Load the prompt from prompts.toml
    import tomli
    with open("configs/prompts.toml", "rb") as f:
        prompts_config = tomli.load(f)

    # Get the prompt and system prompt
    prompt_template = prompts_config["summarization"]["prompt"]
    system_prompt = prompts_config["summarization"]["system_prompt"]

    # Replace variables in the prompt
    prompt = prompt_template.replace("$previous_summary", previous_summary).replace("$latest_conversation", latest_conversation)
    system_prompt = system_prompt.replace("$persona", persona)

    # Using OpenAI's SDK
    from openai import AsyncOpenAI

    # Initialize the client with the API key
    client = AsyncOpenAI(api_key=OPENAI_API_KEY)

    try:
        print(f"Starting OpenAI summarization for model: {model}")

        # Use the SDK to create a completion
        response = await client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": prompt}
            ],
            temperature=temperature,
            max_tokens=max_tokens
        )

        summary = response.choices[0].message.content
        print("OpenAI summarization completed successfully")

        return {"summary": summary}

    except Exception as e:
        print(f"Error during OpenAI summarization: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Error during summarization: {str(e)}")

@app.post("/summarize_anthropic")
async def summarize_anthropic(request: Request):
    try:
        body = await request.json()
    except Exception as e:
        raise HTTPException(status_code=400, detail="Invalid JSON payload") from e

    previous_summary = body.get("previous_summary", "")
    latest_conversation = body.get("latest_conversation", "")
    persona = body.get("persona", "helpful assistant")
    temperature = body.get("temperature", 0.7)
    max_tokens = body.get("max_tokens", 1024)
    model = body.get("model", "claude-3-opus-20240229")

    # Load the prompt from prompts.toml
    import tomli
    with open("configs/prompts.toml", "rb") as f:
        prompts_config = tomli.load(f)

    # Get the prompt and system prompt
    prompt_template = prompts_config["summarization"]["prompt"]
    system_prompt = prompts_config["summarization"]["system_prompt"]

    # Replace variables in the prompt
    prompt = prompt_template.replace("$previous_summary", previous_summary).replace("$latest_conversation", latest_conversation)
    system_prompt = system_prompt.replace("$persona", persona)

    try:
        import anthropic

        # Initialize Anthropic client
        client = anthropic.Anthropic(api_key=ANTHROPIC_API_KEY)

        print(f"Starting Anthropic summarization for model: {model}")

        # Create the response
        response = client.messages.create(
            model=model,
            messages=[
                {"role": "user", "content": prompt}
            ],
            system=system_prompt,
            max_tokens=max_tokens,
            temperature=temperature
        )

        summary = response.content[0].text
        print("Anthropic summarization completed successfully")

        return {"summary": summary}

    except Exception as e:
        print(f"Error during Anthropic summarization: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Error during summarization: {str(e)}")

@app.post("/summarize_google")
async def summarize_google(request: Request):
    try:
        body = await request.json()
    except Exception as e:
        raise HTTPException(status_code=400, detail="Invalid JSON payload") from e

    previous_summary = body.get("previous_summary", "")
    latest_conversation = body.get("latest_conversation", "")
    persona = body.get("persona", "helpful assistant")
    temperature = body.get("temperature", 0.7)
    max_tokens = body.get("max_tokens", 1024)
    model = body.get("model", "gemini-1.5-pro")

    # Load the prompt from prompts.toml
    import tomli
    with open("configs/prompts.toml", "rb") as f:
        prompts_config = tomli.load(f)

    # Get the prompt and system prompt
    prompt_template = prompts_config["summarization"]["prompt"]
    system_prompt = prompts_config["summarization"]["system_prompt"]

    # Replace variables in the prompt
    prompt = prompt_template.replace("$previous_summary", previous_summary).replace("$latest_conversation", latest_conversation)
    system_prompt = system_prompt.replace("$persona", persona)

    try:
        import google.generativeai as genai

        # Configure the Google API
        genai.configure(api_key=GOOGLE_API_KEY)

        # Initialize the model
        model_obj = genai.GenerativeModel(model_name=model)

        print(f"Starting Google summarization for model: {model}")

        # Combine system prompt and user prompt for Google's API
        combined_prompt = f"{system_prompt}\n\n{prompt}"

        # Generate the response
        response = model_obj.generate_content(
            contents=combined_prompt,
            generation_config=genai.types.GenerationConfig(
                temperature=temperature,
                max_output_tokens=max_tokens
            )
        )

        summary = response.text
        print("Google summarization completed successfully")

        return {"summary": summary}

    except Exception as e:
        print(f"Error during Google summarization: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Error during summarization: {str(e)}")
standalone/settings.svg
ADDED
standalone/style.css
ADDED
@@ -0,0 +1,709 @@
+:root {
+  --primary-color: #4a90e2;
+  --primary-dark: #4178c0;
+  --accent-color: #50e3c2;
+  --background-color: #f9f9f9;
+  --body-bg: #f0f2f5;
+  --text-color: #333;
+  --light-shadow: rgba(0, 0, 0, 0.1);
+  --nav-width: 320px;
+  --border-radius: 12px;
+}
+/* Global Styles */
+* {
+  box-sizing: border-box;
+  margin: 0;
+  padding: 0;
+}
+body {
+  font-family: 'Poppins', sans-serif;
+  background: var(--body-bg);
+  overflow: hidden;
+  color: var(--text-color);
+}
+/* Hamburger Button */
+#hamburgerBtn {
+  background: var(--primary-color);
+  color: #fff;
+  border: none;
+  padding: 5px 10px;
+  font-size: 1.5em;
+  border-radius: 6px;
+  cursor: pointer;
+  transition: background 0.3s;
+}
+#hamburgerBtn:hover {
+  background: var(--primary-dark);
+}
+.chat-header .chat-hamburger {
+  margin-top: 0;
+  margin-right: 10px;
+}
+/* Navigation Header (New placement for New Chat button) */
+.nav-header {
+  display: flex;
+  align-items: center;
+  gap: 10px;
+  margin-bottom: 20px;
+}
+/* App Container */
+.app-container {
+  display: flex;
+  height: 100vh;
+  width: 100vw;
+  transition: all 0.3s ease;
+}
+/* Left Navigation Bar */
+.nav-bar {
+  width: var(--nav-width);
+  background: #fff;
+  padding: 24px;
+  box-shadow: 2px 0 16px var(--light-shadow);
+  overflow-y: auto;
+  transition: transform 0.3s ease, width 0.3s ease, padding 0.3s ease;
+  transform: translateX(0);
+  display: flex;
+  flex-direction: column;
+}
+.nav-bar.hidden {
+  transform: translateX(-100%);
+  width: 0;
+  padding: 0;
+}
+.nav-bar h3 {
+  font-size: 1.5em;
+  margin-bottom: 20px;
+  color: var(--primary-color);
+}
+.nav-bar ul {
+  list-style: none;
+  flex: 1;
+}
+.nav-bar li {
+  padding: 12px 16px;
+  margin-bottom: 12px;
+  background: var(--background-color);
+  border-radius: var(--border-radius);
+  cursor: pointer;
+  display: flex;
+  align-items: center;
+  justify-content: space-between;
+  transition: background 0.3s;
+  font-size: 1em;
+}
+.nav-bar li.active,
+.nav-bar li:hover {
+  background: var(--primary-color);
+  color: #fff;
+}
+.session-actions {
+  display: flex;
+}
+.session-summary-btn,
+.session-settings-btn {
+  background: rgba(74, 144, 226, 0.15);
+  border: none;
+  border-radius: 50%;
+  cursor: pointer;
+  display: flex;
+  align-items: center;
+  justify-content: center;
+  transition: transform 0.2s, background 0.3s;
+  color: inherit;
+}
+.session-summary-btn:hover,
+.session-settings-btn:hover {
+  transform: scale(1.1);
+  background: rgba(74, 144, 226, 0.25);
+}
+.nav-bar li button.remove-session {
+  background: transparent;
+  border: none;
+  color: inherit;
+  font-size: 1.2em;
+  cursor: pointer;
+}
+.new-session-btn {
+  padding: 2px 5px;
+  background: var(--primary-color);
+  color: #fff;
+  border: none;
+  border-radius: var(--border-radius);
+  cursor: pointer;
+  font-size: 1.1em;
+  transition: background 0.3s;
+  border-radius: 6px;
+}
+.new-session-btn:hover {
+  background: var(--primary-dark);
+}
+/* Toggle Layout Button at bottom of nav-bar */
+#toggleLayoutBtn {
+  background: var(--primary-color);
+  color: #fff;
+  border: none;
+  padding: 2px 5px;
+  font-size: 1.1em;
+  border-radius: 6px;
+  cursor: pointer;
+  transition: background 0.3s;
+  margin-top: auto;
+}
+#toggleLayoutBtn:hover {
+  background: var(--primary-dark);
+}
+/* Chat Wrapper */
+.chat-wrapper {
+  flex: 1;
+  background: #fff;
+  margin: 20px;
+  border-radius: var(--border-radius);
+  box-shadow: 0 12px 40px var(--light-shadow);
+  display: flex;
+  flex-direction: column;
+  overflow: hidden;
+  position: relative;
+  transition: all 0.3s ease;
+}
+/* Chat Header */
+.chat-header {
+  display: flex;
+  align-items: center;
+  gap: 16px;
+  padding: 16px 40px 16px 20px;
+  border-bottom: 1px solid #eee;
+  background: #fff;
+  transition: margin-left 0.3s ease;
+}
+/* Left portion of the chat header (hamburger + new chat if collapsed) */
+.header-left {
+  display: flex;
+  align-items: center;
+  gap: 10px;
+}
+.chat-title-controls {
+  display: flex;
+  align-items: center;
+  gap: 10px;
+  flex: 1;
+  justify-content: space-between;
+}
+.chat-header h2 {
+  font-size: 1.5em;
+  color: var(--primary-color);
+  margin: 0;
+  flex-grow: 1;
+  white-space: nowrap;
+}
+.chat-header button {
+  background: none;
+  border: none;
+  color: white;
+  background-color: var(--primary-color);
+  font-size: 1em;
+  cursor: pointer;
+  transition: color 0.3s;
+}
+.chat-header button:hover {
+  color: var(--primary-dark);
+}
+#editTitleBtn {
+  color: var(--primary-color);
+  background: none;
+}
+/* Turn Label (now part of the header's flex layout) */
+#turnLabel {
+  background: var(--accent-color);
+  color: #fff;
+  padding: 6px 12px;
+  border-radius: 20px;
+  font-size: 1em;
+  white-space: nowrap;
+}
+/* Carousel Wrapper */
+.carousel-wrapper {
+  position: relative;
+  flex: 1;
+  background: var(--background-color);
+  transition: all 0.3s ease;
+  overflow: hidden;
+}
+.carousel-wrapper.traditional-mode {
+  overflow-y: auto;
+}
+/* Carousel */
+.carousel {
+  display: flex;
+  height: 100%;
+  transition: transform 0.5s ease;
+}
+.carousel.traditional {
+  flex-direction: column;
+  height: auto;
+  transform: none !important;
+}
+/* Card */
+.card {
+  min-width: 100%;
+  height: 100%;
+  padding: 40px 70px 40px 70px;
+  display: flex;
+  flex-direction: column;
+  overflow-y: auto;
+  transition: all 0.3s ease;
+}
+.carousel.traditional .card {
+  margin-bottom: 20px;
+  padding: 20px 40px;
+  height: auto;
+}
+.conversation {
+  display: flex;
+  flex-direction: column;
+  gap: 20px;
+}
+/* Message Bubbles (Order: User then AI; rendered as Markdown) */
+.message {
+  padding: 14px 20px;
+  border-radius: 24px;
+  font-size: 1em;
+  line-height: 1.6;
+  max-width: 90%;
+  position: relative;
+  box-shadow: 0 3px 10px var(--light-shadow);
+  transition: background 0.3s, transform 0.3s;
+}
+.user {
+  background: #e9f1ff;
+  border: 1px solid #cbdffb;
+  align-self: flex-end;
+}
+.ai {
+  background: #fff4e6;
+  border: 1px solid #ffe0b2;
+  align-self: flex-start;
+}
+.message-text {
+  display: block;
+  max-height: 80px;
+  overflow: hidden;
+  transition: max-height 0.3s ease;
+}
+.message.expanded .message-text {
+  max-height: none;
+}
+/* Ensure full AI message text is visible */
+.ai .message-text {
+  max-height: none !important;
+  overflow: visible !important;
+}
+.ai .ai-status {
+  margin-top: 8px;
+  padding-top: 4px;
+  border-top: 1px solid rgba(0, 0, 0, 0.1);
+  font-size: 0.7em;
+  color: #666;
+  text-align: right;
+}
+.toggle-btn {
+  background: none;
+  border: none;
+  color: var(--primary-color);
+  cursor: pointer;
+  font-size: 0.85em;
+  margin-top: 6px;
+  padding: 0;
+}
+.vertical-file-list {
+  margin-bottom: 10px;
+  display: flex;
+  flex-direction: column;
+  gap: 5px;
+}
+.file-item-vertical {
+  background: #f0f0f0;
+  padding: 6px 10px;
+  border-radius: var(--border-radius);
+  font-size: 0.9em;
+}
+/* Navigation Buttons */
+.nav {
+  position: absolute;
+  top: 50%;
+  transform: translateY(-50%);
+  width: 50px;
+  height: 50px;
+  background: rgba(255, 255, 255, 0);
+  /* backdrop-filter: blur(4px); */
+  border: none;
+  border-radius: 50%;
+  display: flex;
+  align-items: center;
+  justify-content: center;
+  cursor: pointer;
+  z-index: 10;
+  transition: transform 0.3s, background 0.3s;
+}
+.nav:hover {
+  transform: translateY(-50%) scale(1.1);
+  background: rgba(255, 255, 255, 0.8);
+}
+.nav:disabled {
+  opacity: 0.5;
+  cursor: default;
+}
+#prevBtn {
+  left: 1px;
+}
+#nextBtn {
+  right: 1px;
+}
+.carousel.traditional ~ .nav {
+  display: none;
+}
+/* Input Section */
+.input-container {
+  padding: 20px;
+  background: #fafafa;
+  border-top: 1px solid #eee;
+  display: flex;
+  flex-direction: column;
+}
+.file-attachments {
+  display: flex;
+  flex-direction: row;
+  gap: 12px;
+  overflow-x: auto;
+  padding-bottom: 12px;
+  scrollbar-width: thin;
+}
+.file-attachments::-webkit-scrollbar {
+  height: 6px;
+}
+.file-attachments::-webkit-scrollbar-track {
+  background: #f1f1f1;
+}
+.file-attachments::-webkit-scrollbar-thumb {
+  background: #ccc;
+  border-radius: 3px;
+}
+.file-attachments::-webkit-scrollbar-thumb:hover {
+  background: #999;
+}
+.file-item {
+  background: #f0f0f0;
+  padding: 8px 12px;
+  border-radius: 6px;
+  display: flex;
+  align-items: center;
+  gap: 8px;
+  font-size: 0.9em;
+}
+.file-item button {
+  background: none;
+  border: none;
+  color: var(--primary-color);
+  cursor: pointer;
+  font-size: 1em;
+}
+.input-row {
+  display: flex;
+  align-items: center;
+  gap: 12px;
+  flex-wrap: wrap;
+  background: #fff;
+  border: 1px solid #ddd;
+  border-radius: 24px;
+  padding: 6px 12px;
+}
+.attach-button {
+  background: none;
+  border: none;
+  font-size: 1.4em;
+  cursor: pointer;
+  color: #999;
+  display: flex;
+  align-items: center;
+  justify-content: center;
+  width: 36px;
+  height: 36px;
+  border-radius: 50%;
+  transition: background 0.3s;
+}
+.attach-button:hover {
+  background: #f0f0f0;
+}
+#fileInput {
+  display: none;
+}
+#chatInput {
+  flex: 1;
+  border: none;
+  outline: none;
+  resize: none;
+  overflow: hidden;
+  font-size: 1em;
+  height: 36px;
+  line-height: 36px;
+  margin: 0;
+  padding: 0 8px;
+}
+#chatInput::placeholder {
+  color: #999;
+}
+#sendBtn svg {
+  width: 16px !important;
+  height: 16px !important;
+}
+#sendBtn {
+  background: none;
+  border: none;
+  cursor: pointer;
+  width: 36px;
+  height: 36px;
+  border-radius: 50%;
+  display: flex;
+  align-items: center;
+  justify-content: center;
+  color: #999;
+  transition: background 0.3s;
+}
+#sendBtn:hover {
+  background: #f0f0f0;
+}
+/* Summary Overlay */
+#summaryOverlay {
+  position: fixed;
+  left: 0;
+  right: 0;
+  bottom: 0;
+  height: 60%;
+  background: #fff;
+  box-shadow: 0 -4px 16px var(--light-shadow);
+  transform: translateY(100%);
+  transition: transform 0.3s ease;
+  z-index: 20;
+  display: flex;
+  flex-direction: column;
+}
+#summaryOverlay.active {
+  transform: translateY(0);
+}
+.summary-header {
+  padding-left: 20px;
+  padding-right: 20px;
+  padding-top: 10px;
+  padding-bottom: 10px;
+  background: var(--primary-color);
+  color: #fff;
+  font-size: 1.4em;
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+}
+.summary-header-buttons {
+  display: flex;
+  gap: 12px;
+}
+.download-summary {
+  background: #fff;
+  color: var(--primary-color);
+  border: 1px solid var(--primary-color);
+  border-radius: 6px;
+  padding: 6px 12px;
+  cursor: pointer;
+  transition: background 0.3s, color 0.3s;
+}
+.download-summary:hover {
+  background: var(--primary-color);
+  color: #fff;
+}
+.close-summary {
+  background: none;
+  border: none;
+  color: #fff;
+  font-size: 1.4em;
+  cursor: pointer;
+}
+.summary-content {
+  padding: 20px;
+  overflow-y: auto;
+  flex: 1;
+}
+/* Settings Overlay */
+#settingsOverlay {
+  position: fixed;
+  left: 0;
+  right: 0;
+  bottom: 0;
+  height: 40%;
+  background: #fff;
+  box-shadow: 0 -4px 16px var(--light-shadow);
+  transform: translateY(100%);
+  transition: transform 0.3s ease;
+  z-index: 20;
+  display: flex;
+  flex-direction: column;
+}
+#settingsOverlay.active {
+  transform: translateY(0);
+}
+.settings-header {
+  padding-left: 20px;
+  padding-right: 20px;
+  padding-top: 10px;
+  padding-bottom: 10px;
+  background: var(--primary-color);
+  color: #fff;
+  font-size: 1.4em;
+  display: flex;
+  justify-content: space-between;
+  align-items: center;
+}
+.close-settings {
+  background: none;
+  border: none;
+  color: #fff;
+  font-size: 1.6em;
+  cursor: pointer;
+}
+.settings-content {
+  padding: 20px;
+  overflow-y: auto;
+  flex: 1;
+  font-size: 1.1em;
+  line-height: 1.5;
+}
+.settings-group {
+  margin-bottom: 20px;
+  display: flex;
+  align-items: center;
+  gap: 12px;
+}
+.settings-group label {
+  min-width: 120px;
+}
+.save-settings {
+  background: var(--primary-color);
+  color: #fff;
+  border: none;
+  border-radius: 8px;
+  padding: 12px 24px;
+  cursor: pointer;
+  transition: background 0.3s;
+}
+.save-settings:hover {
+  background: var(--primary-dark);
+}
+@media (max-width: 600px) {
+  .nav-bar {
+    display: none;
+  }
+  .chat-wrapper {
+    margin: 0;
+  }
+  .message {
+    font-size: 0.95em;
+    max-width: 80%;
+  }
+  .nav {
+    width: 40px;
+    height: 40px;
+    font-size: 1.5em;
+  }
+  .card {
+    padding: 80px 20px 20px 20px;
+  }
+  .input-container {
+    padding: 15px;
+  }
+  .input-row {
+    flex-direction: row;
+  }
+  .input-container button {
+    width: auto;
+  }
+  #summaryOverlay,
+  #settingsOverlay {
+    width: 80%;
+  }
+}
+
+.svg-icon {
+  width: 32px; /* Adjust as needed */
+  height: 32px; /* Adjust as needed */
+  filter: invert(100%) brightness(300%);
+}
+
+.svg-icon-non-white {
+  width: 24px; /* Adjust as needed */
+  height: 24px; /* Adjust as needed */
+  /* filter: invert(100%) brightness(300%); */
+}
+
+.button-row {
+  display: flex;
+  justify-content: center;
+  gap: 10px; /* Optional: adjust spacing between buttons */
+  margin-bottom: 10px; /* Space between buttons and the rest of the input area */
+}
+
+.button-row button {
+  padding: 8px 16px;
+  background: var(--primary-color);
+  color: #fff;
+  border: none;
+  border-radius: var(--border-radius);
+  cursor: pointer;
+  transition: background 0.3s;
+}
+
+.button-row button:hover {
+  background: var(--primary-dark);
+}
+
+#customBtn2 {
+  padding-top: 4px;
+  padding-bottom: 4px;
+  padding-left: 12px;
+  padding-right: 12px;
+  font-size: 0.98em;
+}
+
+#customBtn2 img {
+  width: 24px;
+  height: 24px;
+}
+
+/* Markdown */
+.markdown-body {
+  font-size: 1em;
+  line-height: 1.5;
+  white-space: normal; /* Ensure paragraphs and lists break onto new lines */
+}
+
+.markdown-body p {
+  margin: 0.75em 0; /* Add vertical space between paragraphs */
+}
+
+.markdown-body ul,
+.markdown-body ol {
+  margin: 0.75em 0;
+  padding-left: 1.5em; /* Indent bullets/numbers */
+}
+
+.markdown-body li {
+  margin: 0.3em 0;
+}
+
+.markdown-body h1,
+.markdown-body h2,
+.markdown-body h3,
+.markdown-body h4,
+.markdown-body h5,
+.markdown-body h6 {
+  margin-top: 1em;
+  margin-bottom: 0.5em;
+  font-weight: bold;
+}
+
standalone/vertical.svg
ADDED