---
title: DeployPythonicRAG
emoji: π
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: apache-2.0
---
# Deploying Pythonic Chat With Your Text File Application
In today's breakout rooms, we will be following the process that you saw during the challenge.
Today, we will repeat the same process - but powered by our Pythonic RAG implementation we created last week.
You'll notice a few differences in the `app.py` logic - as well as a few changes to the `aimakerspace` package to get things working smoothly with Chainlit.
> NOTE: If you want to run this locally - be sure to use `uv sync`, and then `uv run chainlit run app.py` to start the application outside of Docker.
## Reference Diagram (It's Busy, but it works)

### Anatomy of a Chainlit Application
[Chainlit](https://docs.chainlit.io/get-started/overview) is a Python package similar to Streamlit that lets users write a backend and a front end in a single (or multiple) Python file(s). It is mainly used for prototyping LLM-based Chat Style Applications - though it is used in production in some settings with 1,000,000s of MAUs (Monthly Active Users).
The primary method of customizing and interacting with the Chainlit UI is through a few critical [decorators](https://blog.hubspot.com/website/decorators-in-python).
> NOTE: Simply put, the decorators (in Chainlit) are just ways we can "plug-in" to the functionality in Chainlit.
We'll be concerning ourselves with three main scopes:
1. On application start - when we start the Chainlit application with a command like `chainlit run app.py`
2. On chat start - when a chat session starts (a user opens the web browser to the address hosting the application)
3. On message - when the user sends a message through the input text box in the Chainlit UI
Let's dig into each scope and see what we're doing!
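Before we do, here is a minimal sketch (not our actual `app.py`) of how those three scopes map onto Chainlit's decorators; the handler names are placeholders, but `@cl.on_chat_start` and `@cl.on_message` are the real Chainlit hooks:
```python
import chainlit as cl

# 1. On application start: plain module-level code that runs once when
#    `chainlit run app.py` boots the server (prompts, splitters, etc. live here).
WELCOME = "Ask me anything about your document!"

# 2. On chat start: runs once per browser session (new tab or refresh).
@cl.on_chat_start
async def start_chat():
    # per-user state belongs in the user session, not in globals
    cl.user_session.set("started", True)
    await cl.Message(content=WELCOME).send()

# 3. On message: runs every time the user submits text in the chat box.
@cl.on_message
async def handle_message(message: cl.Message):
    await cl.Message(content=f"You said: {message.content}").send()
```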
### On Application Start:
The first thing you'll notice is that we have the traditional "wall of imports" - this is to ensure we have everything we need to run our application.
```python
import os
from typing import List
from chainlit.types import AskFileResponse
from aimakerspace.text_utils import CharacterTextSplitter, TextFileLoader, PDFLoader  # PDFLoader is used in process_file below
from aimakerspace.openai_utils.prompts import (
    UserRolePrompt,
    SystemRolePrompt,
    AssistantRolePrompt,
)
from aimakerspace.openai_utils.embedding import EmbeddingModel
from aimakerspace.vectordatabase import VectorDatabase
from aimakerspace.openai_utils.chatmodel import ChatOpenAI
import chainlit as cl
```
Next up, we have some prompt templates. Since every session will use the same prompt templates without modification, and we don't need them to be specific to any one session - we can set them up here, at the application scope.
```python
system_template = """\
Use the following context to answer a user's question. If you cannot find the answer in the context, say you don't know the answer."""
system_role_prompt = SystemRolePrompt(system_template)
user_prompt_template = """\
Context:
{context}
Question:
{question}
"""
user_role_prompt = UserRolePrompt(user_prompt_template)
```
> NOTE: You'll notice that these are the exact same prompt templates we used from the Pythonic RAG Notebook in Week 1 Day 2!
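As a quick illustration - using only the `create_message` calls we rely on later - the templates get filled in like this (the context and question values are made up for the example):
```python
# hypothetical values, just to show how the templates get rendered
example_context = "Chainlit apps are driven by a handful of decorators."
example_question = "How do I customize the Chainlit UI?"

formatted_system_prompt = system_role_prompt.create_message()
formatted_user_prompt = user_role_prompt.create_message(
    question=example_question,
    context=example_context,
)

print(formatted_system_prompt)
print(formatted_user_prompt)
```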
Following that - we can create the Python Class definition for our RAG pipeline - or _chain_, as we'll refer to it in the rest of this walkthrough.
Let's look at the definition first:
```python
class RetrievalAugmentedQAPipeline:
    def __init__(self, llm: ChatOpenAI, vector_db_retriever: VectorDatabase) -> None:
        self.llm = llm
        self.vector_db_retriever = vector_db_retriever

    async def arun_pipeline(self, user_query: str):
        ### RETRIEVAL
        context_list = self.vector_db_retriever.search_by_text(user_query, k=4)

        context_prompt = ""
        for context in context_list:
            context_prompt += context[0] + "\n"

        ### AUGMENTED
        formatted_system_prompt = system_role_prompt.create_message()
        formatted_user_prompt = user_role_prompt.create_message(question=user_query, context=context_prompt)

        ### GENERATION
        async def generate_response():
            async for chunk in self.llm.astream([formatted_system_prompt, formatted_user_prompt]):
                yield chunk

        return {"response": generate_response(), "context": context_list}
```
Notice a few things:
1. We have modified this `RetrievalAugmentedQAPipeline` from the initial notebook to support streaming.
2. In essence, our pipeline is _chaining_ a few events together:
1. We take our user query, and chain it into our Vector Database to collect related chunks
2. We take those contexts and our user's question and chain them into the prompt templates
3. We take that prompt template and chain it into our LLM call
4. We chain the response of the LLM call to the user
3. We are using a lot of `async` again!
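To see that chain end-to-end, here is a hedged usage sketch - it assumes a `vector_db` and a `chat_openai` instance have already been built (as they are in the "On Chat Start" section below):
```python
import asyncio

async def demo():
    # assumes vector_db and chat_openai already exist (see "On Chat Start" below)
    pipeline = RetrievalAugmentedQAPipeline(
        llm=chat_openai,
        vector_db_retriever=vector_db,
    )

    result = await pipeline.arun_pipeline("What is this document about?")

    # result["context"] holds the raw search results from the vector database;
    # result["response"] is an async generator that yields tokens as they stream in
    async for token in result["response"]:
        print(token, end="", flush=True)

# asyncio.run(demo())
```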
Now, we're going to create a helper function for processing uploaded text files.
First, we'll instantiate a shared `CharacterTextSplitter`.
```python
text_splitter = CharacterTextSplitter()
```
Now we can define our helper.
```python
def process_file(file: AskFileResponse):
    import tempfile
    import shutil

    print(f"Processing file: {file.name}")

    # Create a temporary file with the correct extension
    suffix = f".{file.name.split('.')[-1]}"
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as temp_file:
        # Copy the uploaded file content to the temporary file
        shutil.copyfile(file.path, temp_file.name)
        print(f"Created temporary file at: {temp_file.name}")

        # Create appropriate loader
        if file.name.lower().endswith('.pdf'):
            loader = PDFLoader(temp_file.name)
        else:
            loader = TextFileLoader(temp_file.name)

        try:
            # Load and process the documents
            documents = loader.load_documents()
            texts = text_splitter.split_texts(documents)
            return texts
        finally:
            # Clean up the temporary file
            try:
                os.unlink(temp_file.name)
            except Exception as e:
                print(f"Error cleaning up temporary file: {e}")
```
Simply put, this saves the upload to a temporary file, loads it with the appropriate loader (`TextFileLoader`, or `PDFLoader` for PDFs), splits it with our `CharacterTextSplitter`, and returns the resulting list of strings!
#### ❓ QUESTION #1:
Why do we want to support streaming? What about streaming is important, or useful?
A: Perceived performance. If we didn't have streaming, the user would have to wait for the LLM to finish processing the request before they could see the response. This would be a very slow experience and there is a chance the user might leave before the response is returned.
### On Chat Start:
The next scope is where "the magic happens". On Chat Start is when a user begins a chat session. This will happen whenever a user opens a new chat window, or refreshes an existing chat window.
You'll see that our code is set up to immediately show the user a chat box asking them to upload a file.
```python
files = None

# Wait for the user to upload a file before continuing
while files is None:
    files = await cl.AskFileMessage(
        content="Please upload a Text or PDF file to begin!",
        accept=["text/plain", "application/pdf"],
        max_size_mb=2,
        timeout=180,
    ).send()
```
Once we've obtained the text file - we'll use our processing helper function to process our text!
After we have processed our text file - we'll need to create a `VectorDatabase` and populate it with our processed chunks and their related embeddings!
```python
vector_db = VectorDatabase()
vector_db = await vector_db.abuild_from_list(texts)
```
Once we have that piece completed - we can create the chain we'll be using to respond to user queries!
```python
retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
    vector_db_retriever=vector_db,
    llm=chat_openai
)
```
Now, we'll save that into our user session!
> NOTE: Chainlit has some great documentation about [User Session](https://docs.chainlit.io/concepts/user-session).
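Saving and restoring the chain is just a `set`/`get` pair on the user session, keyed by a string of our choosing - a minimal sketch of both sides:
```python
# In on_chat_start: stash this user's chain under a key of our choosing
cl.user_session.set("chain", retrieval_augmented_qa_pipeline)

# Later, in on_message: pull that same user's chain back out
chain = cl.user_session.get("chain")
```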
#### ❓ QUESTION #2:
Why are we using User Session here? What about Python makes us need to use this? Why not just store everything in a global variable?
A: User Session lets us store each user's conversation state independently of the others. Because the Chainlit server is a single Python process, a global variable would be shared by every connected user, so one user's chain (and document) would overwrite another's. Keeping the chain in the User Session means each user can chat about their own document, and it prevents confidential information from leaking to other users.
### On Message
First, we load our chain from the user session:
```python
chain = cl.user_session.get("chain")
```
Then, we run the chain on the content of the message - and stream it to the front end - that's it!
```python
msg = cl.Message(content="")
result = await chain.arun_pipeline(message.content)
async for stream_resp in result["response"]:
await msg.stream_token(stream_resp)
```
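For completeness, here is a hedged sketch of how the whole `@cl.on_message` handler can fit together; the handler name is ours, and the final `await msg.send()` is the usual Chainlit way to finalize a streamed message:
```python
@cl.on_message
async def main(message: cl.Message):
    # restore this user's chain from the session
    chain = cl.user_session.get("chain")

    msg = cl.Message(content="")
    result = await chain.arun_pipeline(message.content)

    # stream each token to the UI as it arrives
    async for stream_resp in result["response"]:
        await msg.stream_token(stream_resp)

    # finalize the streamed message
    await msg.send()
```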
### 🎉
With that - you've turned our Pythonic RAG notebook into a working Chainlit application!
## Deploying the Application to Hugging Face Space
Due to the way the repository is created - it should be straightforward to deploy this to a Hugging Face Space!
> NOTE: If you wish to go through the local deployments using `uv run chainlit run app.py` and Docker - please feel free to do so!
<details>
<summary>Creating a Hugging Face Space</summary>
1. Navigate to the `Spaces` tab.

2. Click on `Create new Space`

3. Create the Space by providing values in the form. Make sure you've selected "Docker" as your Space SDK.

</details>
<details>
<summary>Adding this Repository to the Newly Created Space</summary>
1. Collect the SSH address from the newly created Space.

> NOTE: The address is the component that starts with `git@hf.co:spaces/`.
2. Use the command:
```bash
git remote add hf HF_SPACE_SSH_ADDRESS_HERE
```
3. Use the following command to merge the Space's auto-generated initial commit (its history is unrelated to your local repository, and `-X ours` keeps your local files on any conflict):
```bash
git pull hf main --no-rebase --allow-unrelated-histories -X ours
```
4. Use the command:
```bash
git add .
```
5. Use the command:
```bash
git commit -m "Deploying Pythonic RAG"
```
6. Use the command:
```bash
git push hf main
```
7. The Space should automatically build as soon as the push is completed!
> NOTE: The build will fail until you complete the following steps!
</details>
<details>
<summary>Adding OpenAI Secrets to the Space</summary>
1. Navigate to your Space settings.

2. Navigate to `Variables and secrets` on the Settings page and click `New secret`:

3. In the `Name` field, input `OPENAI_API_KEY`; in the `Value (private)` field, put your OpenAI API Key.

4. The Space will begin rebuilding!
</details>
## 🎉
You just deployed Pythonic RAG!
Try uploading a text file and asking some questions!
#### ❓ Discussion Question #1:
Upload a PDF file of the recent DeepSeek-R1 paper and ask the following questions:
1. What is RL and how does it help reasoning?
A: Reinforcement Learning (RL) is a type of machine learning technique where an agent learns to make decisions by interacting with an environment. The agent takes actions in the environment and receives feedback in the form of rewards or penalties based on the actions it takes. The goal of the agent is to learn the optimal strategy or policy that maximizes the cumulative reward over time.
In the context of reasoning tasks, RL can help improve reasoning capabilities in language models by allowing the model to learn through trial and error without the need for large amounts of supervised data. By using RL, the model can explore different reasoning behaviors and strategies, gradually improving its performance on reasoning tasks. RL enables the model to adapt and learn from its interactions with the environment, leading to the development of more sophisticated reasoning capabilities over time.
In the research mentioned, RL is used to train language models like DeepSeek-R1-Zero and DeepSeek-R1 to enhance their reasoning abilities. By applying RL directly to the base model or starting from a fine-tuned checkpoint, the models are able to exhibit powerful reasoning behaviors and achieve performance comparable to state-of-the-art models like OpenAI-o1 series. RL helps the models to evolve and improve their reasoning skills through self-learning and exploration, ultimately enhancing their performance on reasoning tasks.
2. What is the difference between DeepSeek-R1 and DeepSeek-R1-Zero?
A: The main difference between DeepSeek-R1 and DeepSeek-R1-Zero lies in their approach to reasoning tasks and the use of data.
##### DeepSeek-R1-Zero:
- DeepSeek-R1-Zero is a model that autonomously develops reasoning behaviors using reinforcement learning.
- It struggles with challenges like poor readability and language mixing.
- It undergoes a self-evolution process driven by RL, improving its reasoning capabilities autonomously.
- It achieves competitive performance on reasoning tasks, surpassing OpenAI-o1-0912.
- It demonstrates expert-level performance in code competition tasks and excels in benchmarks like MMLU, MMLU-Pro, and GPQA Diamond.
##### DeepSeek-R1:
- DeepSeek-R1 is a method that utilizes reinforcement learning with human-friendly cold-start data.
- It aims to make reasoning processes more readable and share them with the open community.
- It explores the incorporation of a small amount of high-quality data as a cold start to potentially improve reasoning performance and accelerate convergence.
- It outperforms DeepSeek-V3 in engineering-related tasks and achieves outstanding results on benchmarks like MMLU, MMLU-Pro, and GPQA Diamond.
In summary, DeepSeek-R1-Zero focuses on autonomous reasoning behavior development through RL, while DeepSeek-R1 emphasizes the use of human-friendly cold-start data to enhance reasoning processes.
3. What is this paper about?
A: The paper is about a specific topic or research area related to language models, mathematical reasoning, reinforcement learning, or other related fields. To determine the exact topic of the paper, we need to carefully read the title, abstract, introduction, and key sections of the paper. By analyzing the content, methodology, results, and conclusions presented in the paper, we can identify the main focus and contribution of the research.
Does this application pass your vibe check?
A: Yes. It does very well at answering specific questions about the paper.
Are there any immediate pitfalls you're noticing?
A: It does not do well at answering general or summary type questions about the paper. I believe this is because it is answering with only the context provided through the vector store retriever. So unless the paper contains a summary section and those embeddings are returned and provided to the LLM, it will not be able to answer these kinds of questions without a different approach.
One approach would be to identify these kinds of general or summary-type questions and skip vector-store retrieval for them: either feed the LLM the entire document (if there is enough context room), or split the document into several large chunks, summarize each chunk, and then summarize the summaries - a rough sketch of that second option follows below.
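As a hedged sketch of that map-reduce style idea, reusing only the `astream` interface we already depend on (the `chat_openai` instance, group size, and summary prompt here are illustrative assumptions, not part of the app):
```python
async def summarize(text: str) -> str:
    # ask the LLM for a short summary of one piece of text,
    # reusing the same prompt objects and streaming interface as the pipeline
    messages = [
        system_role_prompt.create_message(),
        user_role_prompt.create_message(
            question="Summarize the context above in a few sentences.",
            context=text,
        ),
    ]
    summary = ""
    async for chunk in chat_openai.astream(messages):
        summary += chunk
    return summary

async def summarize_document(texts: list[str], group_size: int = 20) -> str:
    # map: summarize groups of chunks (group_size is an illustrative guess)
    partial_summaries = []
    for i in range(0, len(texts), group_size):
        partial_summaries.append(await summarize("\n".join(texts[i:i + group_size])))
    # reduce: summarize the summaries into one final answer
    return await summarize("\n".join(partial_summaries))
```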
## 🚧 CHALLENGE MODE 🚧
For the challenge mode, please instead create a simple FastAPI backend with a simple React (or any other JS framework) frontend.
You can use the same prompt templates and RAG pipeline as we did here - but you'll need to modify the code to work with FastAPI and React.
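If it helps to get started, here is a hedged sketch of what the FastAPI side could look like, wrapping the same `RetrievalAugmentedQAPipeline` in a streaming endpoint (the `pipeline` instance, route name, and request model are our assumptions, not a prescribed design):
```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    question: str

# assumes `pipeline` is a RetrievalAugmentedQAPipeline built at startup
@app.post("/chat")
async def chat(request: ChatRequest):
    result = await pipeline.arun_pipeline(request.question)

    async def token_stream():
        # forward tokens from the RAG pipeline straight into the HTTP response
        async for token in result["response"]:
            yield token

    return StreamingResponse(token_stream(), media_type="text/plain")
```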
Deploy this application to Hugging Face Spaces!