Update app.py
app.py
CHANGED
@@ -117,22 +117,110 @@ tool_loader = ToolLoader(tool_names)
117 |   st.title("Hugging Face Agent and tools")
118 |
119 |   ## LB https://huggingface.co/spaces/qiantong-xu/toolbench-leaderboard
120 | - # Add a dropdown for selecting the inference URL
121 | - url_endpoint = st.selectbox("Select Inference URL", [
122 | -     "https://api-inference.huggingface.co/models/bigcode/starcoder",
123 | -     "https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
124 | -     "https://api-inference.huggingface.co/models/gpt2"
125 | - ])
126 |
127 |   st.markdown("Welcome to the Hugging Face Agent and Tools app! This app allows you to interact with various tools using the Hugging Face API.")
128 |
129 |
130 |
131 | -
132 | -
133 | -
134 | -
135 | -
136 |   # Hugging Face Agent and Tools App
137 |
138 |   ## Description
@@ -186,139 +274,60 @@ with st.expander("Expand to see description"):
186 |
187 |   ''')
188 |
189 | - #
190 | -
191 | -
192 | -
193 | -
194 | - # Hugging Face Agent and Tools Code Overview
195 | -
196 | - ## Overview
197 | - The provided Python code implements an interactive Streamlit web application that allows users to interact with various tools through the Hugging Face API. The app integrates Hugging Face models and tools, enabling users to perform tasks such as text generation, sentiment analysis, and more.
198 | -
199 | - ## Imports
200 | - The code imports several external libraries and modules, including:
201 | - - `streamlit`: For building the web application.
202 | - - `os`: For interacting with the operating system.
203 | - - `base64`, `io`, `Image` (from `PIL`), `AudioSegment` (from `pydub`), `IPython`, `sf`: For handling images and audio.
204 | - - `requests`: For making HTTP requests.
205 | - - `pandas`: For working with DataFrames.
206 | - - `matplotlib.figure`, `numpy`: For visualization.
207 | - - `altair`, `Plot` (from `bokeh.models`), `px` (from `plotly.express`), `pdk` (from `pydeck`): For different charting libraries.
208 | - - `time`: For handling time-related operations.
209 | - - `transformers`: For loading tools and agents.
210 | -
211 | - ## ToolLoader Class
212 | - The `ToolLoader` class is responsible for loading tools based on their names. It has methods to load tools from a list of tool names and handles potential errors during loading.
213 | -
214 | - ## CustomHfAgent Class
215 | - The `CustomHfAgent` class extends the base `Agent` class from the `transformers` module. It is designed to interact with a remote inference API and includes methods for generating text based on a given prompt.
216 | -
217 | - ## Tool Loading and Customization
218 | - - Tool names are defined in the `tool_names` list.
219 | - - The `ToolLoader` instance (`tool_loader`) loads tools based on the provided names.
220 | - - The `CustomHfAgent` instance (`agent`) is created with a specified URL endpoint, token, and additional tools.
221 | - - New tools can be added by appending their names to the `tool_names` list.
222 | -
223 | - ## Streamlit App
224 | - The Streamlit app is structured as follows:
225 | - 1. Tool selection dropdown for choosing the inference URL.
226 | - 2. An expander for displaying tool descriptions.
227 | - 3. An expander for selecting tools.
228 | - 4. Examples and instructions for the user.
229 | - 5. A chat interface for user interactions.
230 | - 6. Handling of user inputs, tool selection, and agent responses.
231 | -
232 | - ## Handling of Responses
233 | - The code handles various types of responses from the agent, including images, audio, text, DataFrames, and charts. The responses are displayed in the Streamlit app based on their types.
234 | -
235 | - ## How to Run
236 | - 1. Install required dependencies with `pip install -r requirements.txt`.
237 | - 2. Run the app with `streamlit run <filename.py>`.
238 | -
239 | - ## Notes
240 | - - The code emphasizes customization and extensibility, allowing developers to easily add new tools and interact with the Hugging Face API.
241 | - - Ensure proper configuration, such as setting the Hugging Face token as an environment variable.
242 | -
243 | - ''')
244 | -
245 | - # Add an expandable element for tools
246 | - with st.expander("Expand to select tools"):
247 | -
248 | -     # Examples for the user perspective
249 | -     st.markdown("### Examples:")
250 | -     st.markdown("1. **Generate a Random Character**:")
251 | -     st.markdown(" - Choose the desired URL and the 'Random Character Tool'.")
252 | -
253 | -     st.markdown("2. **Sentiment Analysis**:")
254 | -     st.markdown(" - Choose the desired URL and the 'Sentiment Analysis Tool'.")
255 | -     st.markdown(" - Sample: What is the sentiment for \"Hello, I am happy\"?")
256 |
257 | -
258 | -
259 | -
260 | -
261 | -
262 | -
263 | -
264 | -
265 | -
266 | -
267 | -
268 | -
269 | -
270 | -
271 | -
272 | -
273 | -
274 | - if user_message := st.chat_input("Enter message"):
275 | -     st.chat_message("user").markdown(user_message)
276 | -     st.session_state.messages.append({"role": "user", "content": user_message})
277 | -
278 | -     selected_tools = [tool_loader.tools[idx] for idx, checkbox in enumerate(tool_checkboxes) if checkbox]
279 | -     # Handle submission with the selected inference URL
280 | -     response = handle_submission(user_message, selected_tools, url_endpoint)
281 | -
282 | -
283 | -     with st.chat_message("assistant"):
284 | -         if response is None:
285 | -             st.warning("The agent's response is None. Please try again. Generate an image of a flying horse.")
286 | -         elif isinstance(response, Image.Image):
287 | -             st.image(response)
288 | -         elif isinstance(response, AudioSegment):
289 | -             st.audio(response)
290 | -         elif isinstance(response, int):
291 | -             st.markdown(response)
292 | -         elif isinstance(response, str):
293 | -             if "emojified_text" in response:
294 | -                 st.markdown(f"{response['emojified_text']}")
295 | -             else:
296 | -                 st.markdown(response)
297 | -         elif isinstance(response, list):
298 | -             for item in response:
299 | -                 st.markdown(item) # Assuming the list contains strings
300 | -         elif isinstance(response, pd.DataFrame):
301 | -             st.dataframe(response)
302 | -         elif isinstance(response, pd.Series):
303 | -             st.table(response.iloc[0:10])
304 | -         elif isinstance(response, dict):
305 | -             st.json(response)
306 | -         elif isinstance(response, st.graphics_altair.AltairChart):
307 | -             st.altair_chart(response)
308 | -         elif isinstance(response, st.graphics_bokeh.BokehChart):
309 | -             st.bokeh_chart(response)
310 | -         elif isinstance(response, st.graphics_graphviz.GraphvizChart):
311 | -             st.graphviz_chart(response)
312 | -         elif isinstance(response, st.graphics_plotly.PlotlyChart):
313 | -             st.plotly_chart(response)
314 | -         elif isinstance(response, st.graphics_pydeck.PydeckChart):
315 | -             st.pydeck_chart(response)
316 | -         elif isinstance(response, matplotlib.figure.Figure):
317 | -             st.pyplot(response)
318 | -         elif isinstance(response, streamlit.graphics_vega_lite.VegaLiteChart):
319 | -             st.vega_lite_chart(response)
320 | -         else:
321 | -             st.warning("Unrecognized response type. Please try again. e.g. Generate an image of a flying horse.")
322 | -
323 |
324 | -

117 |   st.title("Hugging Face Agent and tools")
118 |
119 |   ## LB https://huggingface.co/spaces/qiantong-xu/toolbench-leaderboard
120 |
121 |   st.markdown("Welcome to the Hugging Face Agent and Tools app! This app allows you to interact with various tools using the Hugging Face API.")
122 |
123 | + # Create a page with tabs
124 | + tabs = st.tabs(["Chat", "URL and Tools", "User Description", "Developers"])
125 | +
126 | + # Tab 1: Chat
127 | + if tabs[0]:
128 | +     with st.beta_expander("Chat"):
129 | +         # Chat code (user input, agent responses, etc.)
130 | +         if "messages" not in st.session_state:
131 | +             st.session_state.messages = []
132 | +
133 | +         for message in st.session_state.messages:
134 | +             with st.chat_message(message["role"]):
135 | +                 st.markdown(message["content"])
136 | +
137 | +         with st.chat_message("assistant"):
138 | +             st.markdown("Hello there! How can I assist you today?")
139 | +
140 | +         if user_message := st.chat_input("Enter message"):
141 | +             st.chat_message("user").markdown(user_message)
142 | +             st.session_state.messages.append({"role": "user", "content": user_message})
143 | +
144 | +             selected_tools = [tool_loader.tools[idx] for idx, checkbox in enumerate(tool_checkboxes) if checkbox]
145 | +             # Handle submission with the selected inference URL
146 | +             response = handle_submission(user_message, selected_tools, url_endpoint)
147 | +
148 | +             with st.chat_message("assistant"):
149 | +                 if response is None:
150 | +                     st.warning("The agent's response is None. Please try again. Generate an image of a flying horse.")
151 | +                 elif isinstance(response, Image.Image):
152 | +                     st.image(response)
153 | +                 elif isinstance(response, AudioSegment):
154 | +                     st.audio(response)
155 | +                 elif isinstance(response, int):
156 | +                     st.markdown(response)
157 | +                 elif isinstance(response, str):
158 | +                     if "emojified_text" in response:
159 | +                         st.markdown(f"{response['emojified_text']}")
160 | +                     else:
161 | +                         st.markdown(response)
162 | +                 elif isinstance(response, list):
163 | +                     for item in response:
164 | +                         st.markdown(item) # Assuming the list contains strings
165 | +                 elif isinstance(response, pd.DataFrame):
166 | +                     st.dataframe(response)
167 | +                 elif isinstance(response, pd.Series):
168 | +                     st.table(response.iloc[0:10])
169 | +                 elif isinstance(response, dict):
170 | +                     st.json(response)
171 | +                 elif isinstance(response, st.graphics_altair.AltairChart):
172 | +                     st.altair_chart(response)
173 | +                 elif isinstance(response, st.graphics_bokeh.BokehChart):
174 | +                     st.bokeh_chart(response)
175 | +                 elif isinstance(response, st.graphics_graphviz.GraphvizChart):
176 | +                     st.graphviz_chart(response)
177 | +                 elif isinstance(response, st.graphics_plotly.PlotlyChart):
178 | +                     st.plotly_chart(response)
179 | +                 elif isinstance(response, st.graphics_pydeck.PydeckChart):
180 | +                     st.pydeck_chart(response)
181 | +                 elif isinstance(response, matplotlib.figure.Figure):
182 | +                     st.pyplot(response)
183 | +                 elif isinstance(response, streamlit.graphics_vega_lite.VegaLiteChart):
184 | +                     st.vega_lite_chart(response)
185 | +                 else:
186 | +                     st.warning("Unrecognized response type. Please try again. e.g. Generate an image of a flying horse.")
187 | +
188 | +             st.session_state.messages.append({"role": "assistant", "content": response})
189 | +
190 | + # Tab 2: URL and Tools
191 | + elif tabs[1]:
192 | +     with st.beta_expander("URL and Tools"):
193 | +         # Code for URL and Tools checkboxes
194 | +
195 | +         # Examples for the user perspective
196 | +         st.markdown("### Examples:")
197 | +         st.markdown("1. **Generate a Random Character**:")
198 | +         st.markdown(" - Choose the desired URL and the 'Random Character Tool'.")
199 | +
200 | +         st.markdown("2. **Sentiment Analysis**:")
201 | +         st.markdown(" - Choose the desired URL and the 'Sentiment Analysis Tool'.")
202 | +         st.markdown(" - Sample: What is the sentiment for \"Hello, I am happy\"?")
203 | +
204 | +         st.markdown("3. **Word Count**:")
205 | +         st.markdown(" - Choose the desired URL and the 'Word Counter Tool'.")
206 | +         st.markdown(" - Sample: Count the words in \"Hello, I am Christof\".")
207 |
208 | +         # Add a dropdown for selecting the inference URL
209 | +         url_endpoint = st.selectbox("Select Inference URL", [
210 | +             "https://api-inference.huggingface.co/models/bigcode/starcoder",
211 | +             "https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
212 | +             "https://api-inference.huggingface.co/models/gpt2"
213 | +         ])
214 |
215 | +
216 | +         tool_checkboxes = [st.checkbox(f"{tool.name} --- {tool.description} ") for tool in tool_loader.tools]
217 | +
218 | + # Tab 3: User Description
219 | + elif tabs[2]:
220 | +     with st.beta_expander("App Description"):
221 | +         # User description content and tool descriptions
222 | +         # Add a section for the app's description
223 | +         st.markdown('''
224 |   # Hugging Face Agent and Tools App
225 |
226 |   ## Description
274 |
275 |   ''')
276 |
277 | + # Tab 4: Developers
278 | + elif tabs[3]:
279 | +     with st.beta_expander("Developers"):
280 | +         # Developer-related content
281 | +         st.markdown('''
282 |
283 | + # Hugging Face Agent and Tools Code Overview
284 | +
285 | + ## Overview
286 | + The provided Python code implements an interactive Streamlit web application that allows users to interact with various tools through the Hugging Face API. The app integrates Hugging Face models and tools, enabling users to perform tasks such as text generation, sentiment analysis, and more.
287 | +
288 | + ## Imports
289 | + The code imports several external libraries and modules, including:
290 | + - `streamlit`: For building the web application.
291 | + - `os`: For interacting with the operating system.
292 | + - `base64`, `io`, `Image` (from `PIL`), `AudioSegment` (from `pydub`), `IPython`, `sf`: For handling images and audio.
293 | + - `requests`: For making HTTP requests.
294 | + - `pandas`: For working with DataFrames.
295 | + - `matplotlib.figure`, `numpy`: For visualization.
296 | + - `altair`, `Plot` (from `bokeh.models`), `px` (from `plotly.express`), `pdk` (from `pydeck`): For different charting libraries.
297 | + - `time`: For handling time-related operations.
298 | + - `transformers`: For loading tools and agents.
299 |
300 | + ## ToolLoader Class
301 | + The `ToolLoader` class is responsible for loading tools based on their names. It has methods to load tools from a list of tool names and handles potential errors during loading.
302 | +
303 | + ## CustomHfAgent Class
304 | + The `CustomHfAgent` class extends the base `Agent` class from the `transformers` module. It is designed to interact with a remote inference API and includes methods for generating text based on a given prompt.
305 | +
306 | + ## Tool Loading and Customization
307 | + - Tool names are defined in the `tool_names` list.
308 | + - The `ToolLoader` instance (`tool_loader`) loads tools based on the provided names.
309 | + - The `CustomHfAgent` instance (`agent`) is created with a specified URL endpoint, token, and additional tools.
310 | + - New tools can be added by appending their names to the `tool_names` list.
311 | +
312 | + ## Streamlit App
313 | + The Streamlit app is structured as follows:
314 | + 1. Tool selection dropdown for choosing the inference URL.
315 | + 2. An expander for displaying tool descriptions.
316 | + 3. An expander for selecting tools.
317 | + 4. Examples and instructions for the user.
318 | + 5. A chat interface for user interactions.
319 | + 6. Handling of user inputs, tool selection, and agent responses.
320 | +
321 | + ## Handling of Responses
322 | + The code handles various types of responses from the agent, including images, audio, text, DataFrames, and charts. The responses are displayed in the Streamlit app based on their types.
323 | +
324 | + ## How to Run
325 | + 1. Install required dependencies with `pip install -r requirements.txt`.
326 | + 2. Run the app with `streamlit run <filename.py>`.
327 | +
328 | + ## Notes
329 | + - The code emphasizes customization and extensibility, allowing developers to easily add new tools and interact with the Hugging Face API.
330 | + - Ensure proper configuration, such as setting the Hugging Face token as an environment variable.
331 | +
332 | + ''')
333 | +
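A note on the tab layout added in this commit: `st.tabs` returns a list of containers that are normally used as context managers (`with tabs[0]: ...`) rather than as conditions in an `if`/`elif` chain, and `st.beta_expander` has been replaced by `st.expander` in current Streamlit releases. The following is only a minimal sketch of the same four-tab structure written with those idioms; it assumes the `tool_loader` object and the `handle_submission` helper defined elsewhere in app.py, and it collapses the long response-type dispatch into a single `st.markdown` call for brevity.

# Minimal sketch (not the committed code): four tabs using current Streamlit idioms.
# Assumes `tool_loader` and `handle_submission` exist elsewhere in app.py.
import streamlit as st

chat_tab, tools_tab, desc_tab, dev_tab = st.tabs(
    ["Chat", "URL and Tools", "User Description", "Developers"]
)

with tools_tab:
    # Endpoint and tool selection, stored in session_state so the Chat tab can read them.
    st.session_state.url_endpoint = st.selectbox("Select Inference URL", [
        "https://api-inference.huggingface.co/models/bigcode/starcoder",
        "https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
        "https://api-inference.huggingface.co/models/gpt2",
    ])
    st.session_state.selected_tools = [
        tool for tool in tool_loader.tools
        if st.checkbox(f"{tool.name} --- {tool.description}")
    ]

with chat_tab:
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Replay the conversation so far.
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    if user_message := st.chat_input("Enter message"):
        st.chat_message("user").markdown(user_message)
        st.session_state.messages.append({"role": "user", "content": user_message})

        response = handle_submission(
            user_message,
            st.session_state.selected_tools,
            st.session_state.url_endpoint,
        )
        with st.chat_message("assistant"):
            # app.py dispatches on the response type (image, audio, DataFrame, ...);
            # shortened here to keep the sketch small.
            st.markdown(str(response))
        st.session_state.messages.append({"role": "assistant", "content": response})

with desc_tab:
    with st.expander("App Description"):
        st.markdown("...")  # the user-facing description text from app.py

with dev_tab:
    with st.expander("Developers"):
        st.markdown("...")  # the code-overview text from app.py

Because Streamlit reruns the whole script on every interaction, all four `with` blocks execute on each run; keeping the endpoint and tool selection in `st.session_state` simply makes the dependency between the tabs explicit.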