add descriptions
- app.py +8 -0
- pages/chat.py +7 -1
- pages/rag.py +4 -1
- pages/sql.py +7 -0
app.py
CHANGED
@@ -6,6 +6,14 @@ st.set_page_config(
 
 st.sidebar.success("Select a demo above.")
 
+st.title("Exploring LLM Agent Use")
+
+'''
+Select any of the demos on the sidebar. Each illustrates a different way we can incorporate an LLM tool to perform reliable data retrieval (sometimes called retrieval augmented generation, RAG) from specified data resources.
+
+In this module, you will adapt one or more of these agents into an interactive application exploring the redlining data we encountered in Module 3 (as seen below).
+
+'''
 
 import streamlit as st
 import leafmap.maplibregl as leafmap
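For reference, a minimal sketch of how app.py might render the redlining map referenced above, using the leafmap import it already carries. The GeoJSON URL and map extent are placeholders, not the module's actual data source.

import streamlit as st
import leafmap.maplibregl as leafmap

# Hypothetical data source; substitute the Module 3 redlining layer.
redlining_url = "https://example.org/redlining.geojson"

m = leafmap.Map(center=[-98, 39], zoom=3)  # maplibre centers are [lon, lat]
m.add_geojson(redlining_url, name="redlining")
m.to_streamlit()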
pages/chat.py
CHANGED
@@ -3,8 +3,14 @@ from openai import OpenAI
 
 st.title("Chat Demo")
 
+'''
+This application presents a traditional chat interface to a range of open-source or open-weights models running on the National Research Platform (<https://nrp.ai>). Unlike the other two demos, this pattern does not use specified data resources.
+
+'''
+
+
 with st.sidebar:
-    model = st.radio("Select an LLM:", ['
+    model = st.radio("Select an LLM:", ['olmo', 'gemma2', 'phi3', 'llama3', 'embed-mistral', 'mixtral', 'gorilla', 'groq-tools', 'llava'])
     st.session_state["model"] = model
 
 ## dockerized streamlit app wants to read from os.getenv(), otherwise use st.secrets
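The rest of this page follows the standard Streamlit chat pattern. A minimal sketch, assuming an OpenAI-compatible LiteLLM endpoint (the base_url below is a placeholder, not the actual NRP address):

import os
import streamlit as st
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("LITELLM_KEY"),
    base_url="https://llm.example.org/v1",  # placeholder; use the NRP LiteLLM endpoint
)

if "messages" not in st.session_state:
    st.session_state["messages"] = []

# Replay earlier turns so the conversation survives Streamlit reruns.
for msg in st.session_state["messages"]:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask anything"):
    st.session_state["messages"].append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    response = client.chat.completions.create(
        model=st.session_state["model"],  # set by the sidebar radio above
        messages=st.session_state["messages"],
    )
    reply = response.choices[0].message.content
    st.session_state["messages"].append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)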
pages/rag.py
CHANGED
@@ -12,8 +12,11 @@ st.title("RAG Demo")
 
 
 '''
+This demonstration combines an LLM trained specifically for text embedding (`embed-mistral` in our case) with a traditional "instruct" LLM (`llama3`) to create a retrieval augmented generation (RAG) interface to a provided PDF document. We can query the model to return relatively precise citations to the matched text in the PDF document to verify the responses.
+
+
 Provide a URL to a PDF document you want to ask questions about.
-Once the document has been uploaded and parsed, ask your questions in the chat dialog that will appear below.
+Once the document has been uploaded and parsed, ask your questions in the chat dialog that will appear below. The default example comes from a recent report on California's initiative for biodiversity conservation.
 '''
 
 # Create a file uploader?
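A minimal sketch of the two-model RAG pattern this description names: embed the PDF's text chunks with `embed-mistral`, retrieve the nearest chunks for each question, and hand them to `llama3` as context. The endpoint, chunking, and prompt wording are assumptions, not the page's actual implementation.

import os
import numpy as np
from openai import OpenAI

client = OpenAI(api_key=os.getenv("LITELLM_KEY"),
                base_url="https://llm.example.org/v1")  # placeholder endpoint

def embed(texts):
    resp = client.embeddings.create(model="embed-mistral", input=texts)
    return np.array([d.embedding for d in resp.data])

# Chunks would come from parsing the uploaded PDF; dummy chunks shown here.
chunks = ["First passage of the PDF...", "Second passage of the PDF..."]
chunk_vecs = embed(chunks)

def answer(question, k=2):
    q = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="llama3",
        messages=[
            {"role": "system",
             "content": "Answer from the context below and quote the passages you used.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content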
pages/sql.py
CHANGED
@@ -2,6 +2,13 @@ import streamlit as st
 
 st.title("SQL demo")
 
+
+'''
+This demonstration illustrates building an LLM-based agent that performs tasks by generating and executing code based on plain-text queries. In this example, we use a custom system prompt to instruct the LLM to generate SQL code, which is then executed against the parquet data we generated in Module 3. Note that the SQL query itself is shown, as well as the table produced by the query.
+
+
+'''
+
 ## dockerized streamlit app wants to read from os.getenv(), otherwise use st.secrets
 import os
 api_key = os.getenv("LITELLM_KEY")
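A minimal sketch of the text-to-SQL pattern described above, using DuckDB to run the generated query against a parquet file. The system prompt, table name, and file path are illustrative assumptions, not the page's actual implementation.

import os
import duckdb
import streamlit as st
from openai import OpenAI

client = OpenAI(api_key=os.getenv("LITELLM_KEY"),
                base_url="https://llm.example.org/v1")  # placeholder endpoint

SYSTEM_PROMPT = (
    "You translate plain-text questions into DuckDB SQL. "
    "A table named `redlining` is available. "
    "Reply with a single SQL query and nothing else."
)

con = duckdb.connect()
con.execute("CREATE VIEW redlining AS SELECT * FROM 'redlining.parquet'")  # assumed path

if question := st.chat_input("Ask a question about the data"):
    resp = client.chat.completions.create(
        model="llama3",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": question}],
    )
    sql = resp.choices[0].message.content
    st.code(sql, language="sql")         # show the generated SQL...
    st.dataframe(con.execute(sql).df())  # ...and the table it produces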