[initial_prompt]:
Here is a website: "How to Create a Chatbot with Gradio" -> "https://www.gradio.app/guides/creating-a-chatbot-fast". I believe you can scrape it and learn on the go. So could you tell me which tutorial is the <next page>? (Just checking.) If you answer correctly, we will proceed. I have a great proposal for you.
[follow_up]:
Great. That's correct!!
Please take a deep breath.
Now, let's proceed with building a real-world Gradio app.
I will guide you through the process. Let's start with the first step.
# UI/UX for Bagoodex Search API
## A few things to note:
0. The UI is just a simple chatbot interface by default. We can divide it into two parts: the left side for the chat (e.g., ChatGPT UI) and the right side for the Advanced Search options. (It's like the Perplexity UI; in any case refer to "https://www.perplexity.ai/".)
1. We will be using Gradio to create a simple UI for the Bagoodex Search API.
2. The API delivers real-time AI-powered web search with NLP capabilities.
3. The API can be used to search for links, images, videos, local maps, and knowledge about a topic.
4. Our Gradio app should be configurable from the user's side (refer to [advanced search syntax](#advanced-search-syntax)).
## Requirements:
0. As you will see below, the output format is already known. We need to create classes for each type of search; this makes the code more readable and maintainable.
1. When the user enters a query, the default chat API endpoint should return results based on the query.
2. On the right side list the advanced search options (e.g., images, videos).
For example (in Next.js):
```jsx
<div className="flex flex-col gap-2">
  <div className="border border-gray-300 px-3 py-2 rounded-md flex items-center justify-between">
    <p>Search Images</p>
    <button className="bg-blue-500 text-white px-2 py-1 rounded-md">[plus icon that sends request]</button>
  </div>
  <div className="border border-gray-300 px-3 py-2 rounded-md flex items-center justify-between">
    <p>Search Videos</p>
    <button className="bg-blue-500 text-white px-2 py-1 rounded-md">[plus icon that sends request]</button>
  </div>
</div>
```
(It's like Perplexity UI/UX.).
3. On the input field we should add several buttons for the rest of the advanced search options.
For example:
1) The user can click on "local maps" to activate it, so in addition to the results we should display and render the map using Gradio-specific components.
2) The user can click on "knowledge about a topic", which will return a structured knowledge base about the topic, for when the user wants fast and structured information.
## Addition:
1. Create several files and helper functions as needed.
2. Use the provided code snippets to build the app.
3. I am not pushing you to generate all the files and the entire codebase in one shot. You may ask follow-up questions and generate the rest of the codebase/files/functions for the Gradio app as we go. (A rough layout sketch follows below.)
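For orientation, here is a rough sketch of the two-column layout described above, using Gradio Blocks. The handler names and the echo behaviour are placeholders, not the final implementation:

```py
# Rough two-column layout sketch (not the final app).
import gradio as gr

def chat_fn(message, history):
    # Placeholder: the real app would call the Bagoodex chat endpoint here.
    return f"Echo: {message}"

with gr.Blocks(fill_height=True) as demo:
    with gr.Row():
        with gr.Column(scale=3):
            # Left side: the chat (ChatGPT-style UI)
            gr.ChatInterface(fn=chat_fn, type="messages")
        with gr.Column(scale=1):
            # Right side: advanced search options (Perplexity-style)
            gr.Markdown("### Advanced Search")
            btn_images = gr.Button("Search Images")
            btn_videos = gr.Button("Search Videos")
            gallery = gr.Gallery(label="Images")
            # btn_images.click(...) / btn_videos.click(...) would trigger
            # the follow-up API calls and fill the components above.

if __name__ == "__main__":
    demo.launch()
```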
# API request examples:
> model=`bagoodex/bagoodex-search-v1`.
## 1. As a regular chat completion model (but searching on the internet):
Get API Key from `.env` file:
<code_snippet>
```py
import os
from dotenv import load_dotenv
load_dotenv()
AIML_API_KEY = os.getenv('AIML_API_KEY')
```
</code_snippet>
-----
<code_snippet>
```py
import requests
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_API_KEY>:
API_KEY = '<YOUR_API_KEY>'
API_URL = 'https://api.aimlapi.com'

# Call the standard chat completion endpoint
def complete_chat():
    client = OpenAI(
        base_url=API_URL,
        api_key=API_KEY,
    )

    response = client.chat.completions.create(
        model="bagoodex/bagoodex-search-v1",
        messages=[
            {
                "role": "user",
                # Enter your query here
                "content": 'how to make a slingshot',
            },
        ],
    )

    print(response.choices[0].message.content)

# Run the function
complete_chat()
```
</code_snippet>
### Model Response:
<response>
```
To make a slingshot, you can follow the instructions provided in the two sources:
**Option 1: Make a Giant Slingshot**
* Start by cutting two 2x4's to a length of 40 inches each, which will be the main arms of the slingshot.
* Attach the arms to a base made of plywood using screws, and then add side braces to support the arms.
* Install an exercise band as the launching mechanism, making sure to tighten it to achieve the desired distance.
* Add a cross brace to keep the arms rigid and prevent them from spreading or caving in.
**Option 2: Make a Stick Slingshot**
* Find a sturdy, Y-shaped stick and break it down to the desired shape.
* Cut notches on the ends of the stick to hold the rubber bands in place.
* Create a pouch by folding a piece of fabric in half and then half again, and then cutting small holes for the rubber bands.
* Thread the rubber bands through the holes and tie them securely to the stick using thread.
* Decorate the slingshot with coloured yarn or twine if desired.
You can choose to make either a giant slingshot or a stick slingshot, depending on your preference and the materials available.
```
</response>
----
## 2. Using six specialized API endpoints, each designed to search for only one specific type of information:
<use_cases>
[1]. Links -> refer to [Find Links](#1-find-links)
[2]. Images -> refer to [Find Images](#2-find-images)
[3]. Videos -> refer to [Find Videos](#3-find-videos)
[4]. Locations -> refer to [Find a Local Map](#4-find-a-local-map)
[5]. Knowledge about a topic, structured as a small knowledge base -> refer to [Knowledge about a topic](#5-knowledge-about-a-topic-structured-as-a-small-knowledge-base)
</use_cases>
#### Advanced search syntax
Note that queries can include advanced search syntax:
<note>
1. Search for an exact match: Enter a word or phrase using \" before and after it.
For example, \"tallest building\".
2. Search for a specific site: Enter site: in front of a site or domain.
For example, site:youtube.com cat videos.
3. Exclude words from your search: Enter - in front of a word that you want to leave out.
For example, jaguar speed -car.
</note>
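For illustration, these are example query strings using the syntax above; they would be passed unchanged as the user message content:

```py
# Hypothetical query strings demonstrating the advanced search syntax
queries = [
    '"tallest building"',            # exact match
    'site:youtube.com cat videos',   # restrict results to one site
    'jaguar speed -car',             # exclude a word
]
```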
----
## 1. Find Links
<important>
First, you must call the standard chat completion endpoint with your query.
The chat completion endpoint returns an ID, which must then be passed as the sole input parameter followup_id to the bagoodex/links endpoint below.
</important>
### Example:
<code_snippet>
```py
import requests
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_API_KEY>:
API_KEY = '<YOUR_API_KEY>'
API_URL = 'https://api.aimlapi.com'

# Call the standard chat completion endpoint to get an ID
def complete_chat():
    client = OpenAI(
        base_url=API_URL,
        api_key=API_KEY,
    )

    response = client.chat.completions.create(
        model="bagoodex/bagoodex-search-v1",
        messages=[
            {
                "role": "user",
                "content": "site:www.reddit.com AI",
            },
        ],
    )

    # Extract the ID from the response
    gen_id = response.id
    print(f"Generated ID: {gen_id}")

    # Call the Bagoodex endpoint with the generated ID
    get_links(gen_id)

def get_links(gen_id):
    params = {'followup_id': gen_id}
    headers = {'Authorization': f'Bearer {API_KEY}'}
    response = requests.get(f'{API_URL}/v1/bagoodex/links', headers=headers, params=params)
    print(response.json())

# Run the function
complete_chat()
```
</code_snippet>
### Model Response:
<response>
```
[
"https://www.reddit.com/r/artificial/",
"https://www.reddit.com/r/ArtificialInteligence/",
"https://www.reddit.com/r/artificial/wiki/getting-started/",
"https://www.reddit.com/r/ChatGPT/comments/1fwt2zf/it_is_officially_over_these_are_all_ai/",
"https://www.reddit.com/r/ArtificialInteligence/comments/1f8wxe7/whats_the_most_surprising_way_ai_has_become_part/",
"https://gist.github.com/nndda/a985daed53283a2c7fd399e11a185b11",
"https://www.reddit.com/r/aivideo/",
"https://www.reddit.com/r/singularity/",
"https://www.abc.net.au/",
"https://www.reddit.com/r/PromptEngineering/"
]
```
</response>
## 2. Find Images
<important>
First, you must call the standard chat completion endpoint with your query.
The chat completion endpoint returns an ID, which must then be passed as the sole input parameter followup_id to the bagoodex/images endpoint below.
</important>
### Example:
<code_snippet>
```py
import requests
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_API_KEY>:
API_KEY = '<YOUR_API_KEY>'
API_URL = 'https://api.aimlapi.com'

# Call the standard chat completion endpoint to get an ID
def complete_chat():
    client = OpenAI(
        base_url=API_URL,
        api_key=API_KEY,
    )

    response = client.chat.completions.create(
        model="bagoodex/bagoodex-search-v1",
        messages=[
            {
                "role": "user",
                "content": "giant dragonflies",
            },
        ],
    )

    # Extract the ID from the response
    gen_id = response.id
    print(f"Generated ID: {gen_id}")

    # Call the Bagoodex endpoint with the generated ID
    get_images(gen_id)

def get_images(gen_id):
    params = {'followup_id': gen_id}
    headers = {'Authorization': f'Bearer {API_KEY}'}
    response = requests.get(f'{API_URL}/v1/bagoodex/images', headers=headers, params=params)
    print(response.json())

# Run the function
complete_chat()
```
</code_snippet>
### Model Response:
<response>
```
[
{
"source": "",
"original": "https://images.theconversation.com/files/234118/original/file-20180829-195319-1d4y13t.jpg?ixlib=rb-4.1.0&rect=0%2C7%2C1200%2C790&q=45&auto=format&w=926&fit=clip",
"title": "Paleozoic era's giant dragonflies ...",
"source_name": "The Conversation"
},
{
"source": "",
"original": "https://s3-us-west-1.amazonaws.com/scifindr/articles/image3s/000/002/727/large/meganeuropsis-eating-roach_lucas-lima_3x4.jpg?1470033295",
"title": "huge dragonfly ...",
"source_name": "Earth Archives"
},
{
"source": "",
"original": "https://s3-us-west-1.amazonaws.com/scifindr/articles/image2s/000/002/727/large/meganeuropsis_lucas-lima_4x3.jpg?1470033293",
"title": "huge dragonfly ...",
"source_name": "Earth Archives"
},
{
"source": "",
"original": "https://static.wikia.nocookie.net/prehistoricparkip/images/3/37/Meganeurid_bbc_prehistoric_.jpg/revision/latest?cb=20120906182204",
"title": "Giant Dragonfly | Prehistoric Park Wiki ...",
"source_name": "Prehistoric Park Wiki - Fandom"
},
{
"source": "",
"original": "https://i.redd.it/rig989kttmc71.jpg",
"title": "This pretty large dragonfly we found ...",
"source_name": "Reddit"
},
{
"source": "",
"original": "https://upload.wikimedia.org/wikipedia/commons/f/fc/Meganeurites_gracilipes_restoration.webp",
"title": "Meganisoptera - Wikipedia",
"source_name": "Wikipedia"
},
{
"source": "",
"original": "https://sites.wustl.edu/monh/files/2019/12/woman-and-meganeura-350x263.jpeg",
"title": "Dragonflies and Damselflies of Missouri ...",
"source_name": "Washington University"
},
{
"source": "",
"original": "http://www.stancsmith.com/uploads/4/8/9/6/48964465/meganeuropsis-giantdragonfly_orig.jpg",
"title": "Ginormous Dragonfly - Stan C ...",
"source_name": "Stan C. Smith"
},
{
"source": "",
"original": "https://static.sciencelearn.org.nz/images/images/000/004/172/original/INSECTS_ITV_Image_map_Aquatic_insects_Dragonfly.jpg?1674173331",
"title": "Bush giant dragonfly — Science ...",
"source_name": "Science Learning Hub"
},
{
"source": "",
"original": "https://i.ytimg.com/vi/ixlQX7lV8dc/sddefault.jpg",
"title": "Meganeura' - The Prehistoric Dragonfly ...",
"source_name": "YouTube"
}
]
```
</response>
## 3. Find Videos
<important>
First, you must call the standard chat completion endpoint with your query.
The chat completion endpoint returns an ID, which must then be passed as the sole input parameter followup_id to the bagoodex/videos endpoint below.
</important>
### Example:
<code_snippet>
```py
import requests
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_API_KEY>:
API_KEY = '<YOUR_API_KEY>'
API_URL = 'https://api.aimlapi.com'

# Call the standard chat completion endpoint to get an ID
def complete_chat():
    client = OpenAI(
        base_url=API_URL,
        api_key=API_KEY,
    )

    response = client.chat.completions.create(
        model="bagoodex/bagoodex-search-v1",
        messages=[
            {
                "role": "user",
                "content": "how to work with github",
            },
        ],
    )

    # Extract the ID from the response
    gen_id = response.id
    print(f"Generated ID: {gen_id}")

    # Call the Bagoodex endpoint with the generated ID
    get_videos(gen_id)

def get_videos(gen_id):
    params = {'followup_id': gen_id}
    headers = {'Authorization': f'Bearer {API_KEY}'}
    response = requests.get(f'{API_URL}/v1/bagoodex/videos', headers=headers, params=params)
    print(response.json())

# Run the function
complete_chat()
```
</code_snippet>
### Model Response:
<response>
```
[
{
"link": "https://www.youtube.com/watch?v=iv8rSLsi1xo",
"thumbnail": "https://dmwtgq8yidg0m.cloudfront.net/medium/_cYAcql_-g0w-video-thumb.jpeg",
"title": "GitHub Tutorial - Beginner's Training Guide"
},
{
"link": "https://www.youtube.com/watch?v=tRZGeaHPoaw",
"thumbnail": "https://dmwtgq8yidg0m.cloudfront.net/medium/-bforsTVDxRQ-video-thumb.jpeg",
"title": "Git and GitHub Tutorial for Beginners"
}
]
```
</response>
## 4. Find a Local Map
<important>
First, you must call the standard chat completion endpoint with your query.
The chat completion endpoint returns an ID, which must then be passed as the sole input parameter followup_id to the bagoodex/local-map endpoint below:
</important>
### Example:
<code_snippet>
```py
import requests
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_API_KEY>:
API_KEY = '<YOUR_API_KEY>'
API_URL = 'https://api.aimlapi.com'

# Call the standard chat completion endpoint to get an ID
def complete_chat():
    client = OpenAI(
        base_url=API_URL,
        api_key=API_KEY,
    )

    response = client.chat.completions.create(
        model="bagoodex/bagoodex-search-v1",
        messages=[
            {
                "role": "user",
                "content": "where is san francisco",
            },
        ],
    )

    # Extract the ID from the response
    gen_id = response.id
    print(f"Generated ID: {gen_id}")

    # Call the Bagoodex endpoint with the generated ID
    get_local_map(gen_id)

def get_local_map(gen_id):
    params = {'followup_id': gen_id}
    headers = {'Authorization': f'Bearer {API_KEY}'}
    response = requests.get(f'{API_URL}/v1/bagoodex/local-map', headers=headers, params=params)
    print(response.json())

# Run the function
complete_chat()
```
</code_snippet>
### Model Response:
<response>
```
{
"link": "https://www.google.com/maps/place/San+Francisco,+CA/data=!4m2!3m1!1s0x80859a6d00690021:0x4a501367f076adff?sa=X&ved=2ahUKEwjqg7eNz9KLAxVCFFkFHWSPEeIQ8gF6BAgqEAA&hl=en",
"image": "https://dmwtgq8yidg0m.cloudfront.net/images/TdNFUpcEvvHL-local-map.webp"
}
```
</response>
## 5. Knowledge about a topic, structured as a small knowledge base
<important>
First, you must call the standard chat completion endpoint with your query.
The chat completion endpoint returns an ID, which must then be passed as the sole input parameter followup_id to the bagoodex/knowledge endpoint below.
</important>
### Example:
<code_snippet>
```py
import requests
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_API_KEY>:
API_KEY = '<YOUR_API_KEY>'
API_URL = 'https://api.aimlapi.com'

# Call the standard chat completion endpoint to get an ID
def complete_chat():
    client = OpenAI(
        base_url=API_URL,
        api_key=API_KEY,
    )

    response = client.chat.completions.create(
        model="bagoodex/bagoodex-search-v1",
        messages=[
            {
                "role": "user",
                "content": "Who is Nicola Tesla",
            },
        ],
    )

    # Extract the ID from the response
    gen_id = response.id
    print(f"Generated ID: {gen_id}")

    # Call the Bagoodex endpoint with the generated ID
    get_knowledge(gen_id)

def get_knowledge(gen_id):
    params = {'followup_id': gen_id}
    headers = {'Authorization': f'Bearer {API_KEY}'}
    response = requests.get(f'{API_URL}/v1/bagoodex/knowledge', headers=headers, params=params)
    print(response.json())

# Run the function
complete_chat()
```
</code_snippet>
### Model Response:
<response>
```
{
'title': 'Nikola Tesla',
'type': 'Engineer and futurist',
'description': None,
'born': 'July 10, 1856, Smiljan, Croatia',
'died': 'January 7, 1943 (age 86 years), The New Yorker A Wyndham Hotel, New York, NY'
}
```
</response>
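Since all five follow-up endpoints share the same followup_id pattern, a small client class (as suggested in the requirements) could wrap them. This is only a sketch based on the snippets above; the class and method names are my own, not part of the API:

```py
# Sketch of a client wrapping the endpoints above; names are suggestions.
import os
import requests
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
API_KEY = os.getenv('AIML_API_KEY')
API_URL = 'https://api.aimlapi.com'

class BagoodexClient:
    def __init__(self, api_key=API_KEY, api_url=API_URL):
        self.api_key = api_key
        self.api_url = api_url
        self.client = OpenAI(base_url=api_url, api_key=api_key)

    def complete_chat(self, query):
        """Run the chat completion and return (followup_id, answer)."""
        response = self.client.chat.completions.create(
            model="bagoodex/bagoodex-search-v1",
            messages=[{"role": "user", "content": query}],
        )
        return response.id, response.choices[0].message.content

    def _followup(self, endpoint, followup_id):
        """Call one of the specialized endpoints with a followup_id."""
        response = requests.get(
            f'{self.api_url}/v1/bagoodex/{endpoint}',
            headers={'Authorization': f'Bearer {self.api_key}'},
            params={'followup_id': followup_id},
        )
        return response.json()

    def get_links(self, followup_id):
        return self._followup('links', followup_id)

    def get_images(self, followup_id):
        return self._followup('images', followup_id)

    def get_videos(self, followup_id):
        return self._followup('videos', followup_id)

    def get_local_map(self, followup_id):
        return self._followup('local-map', followup_id)

    def get_knowledge(self, followup_id):
        return self._followup('knowledge', followup_id)
```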
[follow_up]:
Great!
Of course. This initial UI layout meets my expectations for the first step.
Please proceed with the other steps as mentioned in the step-by-step guide above.
[follow_up]:
Great! But we need a few changes.
0. You forgot the submit button. Stretch the UI (chat interface) to full height. [Always refer to "https://www.gradio.app/guides/creating-a-chatbot-fast" and its follow-up tutorials.]
1. Place Local Map Search and Knowledge Base above the input as small buttons. They serve as additional functionality for the user query. If the user selects one or both of them, we should send additional API calls (maybe asynchronous) and return the results. Note that Local Map Search returns a Google Maps URL; we should render it instantly in place in the Gradio app. It would be great if we could render it inside a large chat message field.
2. It seems you forgot the helper functions and classes for the responses, since we already know their shapes. On clicking Search Images, display all the images as a Gallery, and on clicking an image we should expand it. (Refer to the earlier message for more information, requirements, and guidance.)
3. The same applies to Search Videos: we should render them and, on click, play them instantly in place (NO redirect to YouTube). [(Refer to the earlier message for more information, requirements, and guidance.)]
4. And for Search Links, we should render them accordingly: title, then citation. For example: how_to_build_a_sling_at_home_thats_not_shit [place the link here to redirect the user]. [(Refer to the earlier message for more information, requirements, and guidance.)]
[Search Images]:
<response>
[{'source': '', 'original': 'https://i.ytimg.com/vi/iYlJirFtYaA/sddefault.jpg', 'title': 'How to make a Slingshot using Pencils ...', 'source_name': 'YouTube'}, {'source': '', 'original': 'https://i.ytimg.com/vi/HWSkVaptzRA/maxresdefault.jpg', 'title': 'How to make a Slingshot at Home - YouTube', 'source_name': 'YouTube'}, {'source': '', 'original': 'https://content.instructables.com/FHB/VGF8/FHXUOJKJ/FHBVGF8FHXUOJKJ.jpg?auto=webp', 'title': 'Country Boy" Style Slingshot ...', 'source_name': 'Instructables'}, {'source': '', 'original': 'https://i.ytimg.com/vi/6wXqlJVw03U/maxresdefault.jpg', 'title': 'Make slingshot using popsicle stick ...', 'source_name': 'YouTube'}, {'source': '', 'original': 'https://ds-tc.prod.pbskids.org/designsquad/diy/DESIGN-SQUAD-42.jpg', 'title': 'Build | Indoor Slingshot . DESIGN SQUAD ...', 'source_name': 'PBS KIDS'}, {'source': '', 'original': 'https://i.ytimg.com/vi/wCxFkPLuNyA/maxresdefault.jpg', 'title': 'Paper Ninja Weapons ...', 'source_name': 'YouTube'}, {'source': '', 'original': 'https://i0.wp.com/makezine.com/wp-content/uploads/2015/01/slingshot1.jpg?fit=800%2C600&ssl=1', 'title': 'Rotating Bearings ...', 'source_name': 'Make Magazine'}, {'source': '', 'original': 'https://makeandtakes.com/wp-content/uploads/IMG_1144-1.jpg', 'title': 'Make a DIY Stick Slingshot Kids Craft', 'source_name': 'Make and Takes'}, {'source': '', 'original': 'https://i.ytimg.com/vi/X9oWGuKypuY/maxresdefault.jpg', 'title': 'Easy Home Made Slingshot - YouTube', 'source_name': 'YouTube'}, {'source': '', 'original': 'https://www.wikihow.com/images/thumb/4/41/Make-a-Sling-Shot-Step-7-Version-5.jpg/550px-nowatermark-Make-a-Sling-Shot-Step-7-Version-5.jpg', 'title': 'How to Make a Sling Shot: 15 Steps ...', 'source_name': 'wikiHow'}]
</response>
[Search Videos]:
<response>
Videos:
[{'link': 'https://www.youtube.com/watch?v=X9oWGuKypuY', 'thumbnail': 'https://dmwtgq8yidg0m.cloudfront.net/medium/d3G6HeC5BO93-video-thumb.jpeg', 'title': 'Easy Home Made Slingshot'}, {'link': 'https://www.youtube.com/watch?v=V2iZF8oAXHo&pp=ygUMI2d1bGVsaGFuZGxl', 'thumbnail': 'https://dmwtgq8yidg0m.cloudfront.net/medium/sb2Iw9Ug-Pne-video-thumb.jpeg', 'title': 'Making an Apple Wood Slingshot | Woodcraft'}]
</response>
[Links]:
<response>
['https://www.reddit.com/r/slingshots/comments/1d50p3e/how_to_build_a_sling_at_home_thats_not_shit/', 'https://www.instructables.com/Make-a-Giant-Slingshot/', 'https://www.mudandbloom.com/blog/stick-slingshot', 'https://pbskids.org/designsquad/build/indoor-slingshot/', 'https://www.instructables.com/How-to-Make-a-Slingshot-2/']
</response>
### Local Map Response:
<response>
{
"link": "https://www.google.com/maps/place/San+Francisco,+CA/data=!4m2!3m1!1s0x80859a6d00690021:0x4a501367f076adff?sa=X&ved=2ahUKEwjqg7eNz9KLAxVCFFkFHWSPEeIQ8gF6BAgqEAA&hl=en",
"image": "https://dmwtgq8yidg0m.cloudfront.net/images/TdNFUpcEvvHL-local-map.webp"
}
</response>
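Based on the response shapes shown above, the display helpers could look roughly like this. Function names are suggestions for a helpers module, and the rendering choices (Gallery tuples, embedded iframes) are assumptions rather than requirements:

```py
# Sketches of display helpers based on the response shapes shown above.
import gradio as gr

def embed_image(images):
    """Return (url, caption) pairs for a gr.Gallery from the images response."""
    return [(item.get('original', ''), item.get('title', '')) for item in images]

def embed_video(videos):
    """Render each video as an inline embedded player (no redirect)."""
    html = ""
    for vid in videos:
        url = vid.get('link', '')
        if 'youtube.com/watch?v=' in url:
            video_id = url.split('watch?v=')[1].split('&')[0]
            html += (f"<iframe width='100%' height='240' "
                     f"src='https://www.youtube.com/embed/{video_id}' "
                     f"frameborder='0' allowfullscreen></iframe>")
    return gr.HTML(html)

def format_links(links):
    """Render links as 'title (last URL segment)' followed by the citation link."""
    md = "**Links:**\n"
    for url in links:
        title = url.rstrip('/').split('/')[-1]
        md += f"- [{title}]({url})\n"
    return gr.Markdown(md)

def embed_google_map(result):
    """Show the static map image, linked to Google Maps."""
    return gr.HTML(
        f"<a href='{result.get('link', '')}' target='_blank'>"
        f"<img src='{result.get('image', '')}' style='width:100%;'/></a>"
    )
```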
[follow_up]:
Great! Everything is working really well!!
1. Now let's reproduce it using Gradio's own components for chatbots and AI applications.
2. We can also simply replace all the helper functions that used HTML and CSS with Gradio components.
Here's a guide I just scraped from their website:
<start_of_guide>
How to Create a Chatbot with Gradio
Introduction
Chatbots are a popular application of large language models (LLMs). Using Gradio, you can easily build a chat application and share that with your users, or try it yourself using an intuitive UI.
This tutorial uses gr.ChatInterface(), which is a high-level abstraction that allows you to create your chatbot UI fast, often with a few lines of Python. It can be easily adapted to support multimodal chatbots, or chatbots that require further customization.
Prerequisites: please make sure you are using the latest version of Gradio:
$ pip install --upgrade gradio
Note for OpenAI-API compatible endpoints
If you have a chat server serving an OpenAI-API compatible endpoint (such as Ollama), you can spin up a ChatInterface in a single line of Python. First, also run pip install openai. Then, with your own URL, model, and optional token:
```py
import gradio as gr

gr.load_chat("http://localhost:11434/v1/", model="llama3.2", token="***").launch()
```
Read about gr.load_chat in the docs. If you have your own model, keep reading to see how to create an application around any chat model in Python!
Defining a chat function
To create a chat application with gr.ChatInterface(), the first thing you should do is define your chat function. In the simplest case, your chat function should accept two arguments: message and history (the arguments can be named anything, but must be in this order).
message: a str representing the user's most recent message.
history: a list of openai-style dictionaries with role and content keys, representing the previous conversation history. May also include additional keys representing message metadata.
For example, the history could look like this:
```py
[
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris"}
]
```
while the next message would be:
"And what is its largest city?"
Your chat function simply needs to return:
a str value, which is the chatbot's response based on the chat history and most recent message, for example, in this case:
Paris is also the largest city.
Let's take a look at a few example chat functions:
Example: a chatbot that randomly responds with yes or no
Let's write a chat function that responds Yes or No randomly.
Here's our chat function:
```py
import random

def random_response(message, history):
    return random.choice(["Yes", "No"])
```
Now, we can plug this into gr.ChatInterface() and call the .launch() method to create the web interface:
```py
import gradio as gr

gr.ChatInterface(
    fn=random_response,
    type="messages"
).launch()
```
Tip:
Always set type="messages" in gr.ChatInterface. The default value (type="tuples") is deprecated and will be removed in a future version of Gradio.
That's it! Here's our running demo, try it out:
Example: a chatbot that alternates between agreeing and disagreeing
Of course, the previous example was very simplistic, it didn't take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.
```py
import gradio as gr

def alternatingly_agree(message, history):
    if len([h for h in history if h['role'] == "assistant"]) % 2 == 0:
        return f"Yes, I do think that: {message}"
    else:
        return "I don't think so"

gr.ChatInterface(
    fn=alternatingly_agree,
    type="messages"
).launch()
```
We'll look at more realistic examples of chat functions in our next Guide, which shows examples of using gr.ChatInterface with popular LLMs.
Streaming chatbots
In your chat function, you can use yield to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. It's that simple!
```py
import time
import gradio as gr

def slow_echo(message, history):
    for i in range(len(message)):
        time.sleep(0.3)
        yield "You typed: " + message[: i+1]

gr.ChatInterface(
    fn=slow_echo,
    type="messages"
).launch()
```
While the response is streaming, the "Submit" button turns into a "Stop" button that can be used to stop the generator function.
Tip:
Even though you are yielding the latest message at each iteration, Gradio only sends the "diff" of each message from the server to the frontend, which reduces latency and data consumption over your network.
Customizing the Chat UI
If you're familiar with Gradio's gr.Interface class, the gr.ChatInterface includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:
add a title and description above your chatbot using title and description arguments.
add a theme or custom css using theme and css arguments respectively.
add examples and even enable cache_examples, which make it easier for users to try out your Chatbot.
customize the chatbot (e.g. to change the height or add a placeholder) or textbox (e.g. to add a max number of characters or add a placeholder).
Adding examples
You can add preset examples to your gr.ChatInterface with the examples parameter, which takes a list of string examples. Any examples will appear as "buttons" within the Chatbot before any messages are sent. If you'd like to include images or other files as part of your examples, you can do so by using this dictionary format for each example instead of a string: {"text": "What's in this image?", "files": ["cheetah.jpg"]}. Each file will be a separate message that is added to your Chatbot history.
You can change the displayed text for each example by using the example_labels argument. You can add icons to each example as well using the example_icons argument. Both of these arguments take a list of strings, which should be the same length as the examples list.
If you'd like to cache the examples so that they are pre-computed and the results appear instantly, set cache_examples=True.
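A short, hedged illustration of the example-related arguments described above; the icon paths are placeholders and assume local files exist:

```py
# Illustration only: example_labels/example_icons paired with examples.
import gradio as gr

def yes_man(message, history):
    return "Yes" if message.endswith("?") else "Ask me anything!"

gr.ChatInterface(
    yes_man,
    type="messages",
    examples=["Hello", "Am I cool?", "Are tomatoes vegetables?"],
    example_labels=["Greeting", "Self-esteem check", "Botany question"],
    example_icons=["icons/wave.png", "icons/star.png", "icons/tomato.png"],  # placeholder paths
    cache_examples=True,
).launch()
```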
Customizing the chatbot or textbox component
If you want to customize the gr.Chatbot or gr.Textbox that compose the ChatInterface, then you can pass in your own chatbot or textbox components. Here's an example of how to apply the parameters we've discussed in this section:
```py
import gradio as gr

def yes_man(message, history):
    if message.endswith("?"):
        return "Yes"
    else:
        return "Ask me anything!"

gr.ChatInterface(
    yes_man,
    type="messages",
    chatbot=gr.Chatbot(height=300),
    textbox=gr.Textbox(placeholder="Ask me a yes or no question", container=False, scale=7),
    title="Yes Man",
    description="Ask Yes Man any question",
    theme="ocean",
    examples=["Hello", "Am I cool?", "Are tomatoes vegetables?"],
    cache_examples=True,
).launch()
```
Here's another example that adds a "placeholder" for your chat interface, which appears before the user has started chatting. The placeholder argument of gr.Chatbot accepts Markdown or HTML:
```py
gr.ChatInterface(
    yes_man,
    type="messages",
    chatbot=gr.Chatbot(placeholder="<strong>Your Personal Yes-Man</strong><br>Ask Me Anything"),
    ...
```
The placeholder appears vertically and horizontally centered in the chatbot.
Multimodal Chat Interface
You may want to add multimodal capabilities to your chat interface. For example, you may want users to be able to upload images or files to your chatbot and ask questions about them. You can make your chatbot "multimodal" by passing in a single parameter (multimodal=True) to the gr.ChatInterface class.
When multimodal=True, the signature of your chat function changes slightly: the first parameter of your function (what we referred to as message above) should accept a dictionary consisting of the submitted text and uploaded files that looks like this:
```py
{
    "text": "user input",
    "files": [
        "updated_file_1_path.ext",
        "updated_file_2_path.ext",
        ...
    ]
}
```
The second parameter of your chat function, history, will be in the same openai-style dictionary format as before. However, if the history contains uploaded files, the content key for a file will not be a string, but rather a single-element tuple consisting of the filepath. Each file will be a separate message in the history. So after uploading two files and asking a question, your history might look like this:
```py
[
    {"role": "user", "content": ("cat1.png",)},
    {"role": "user", "content": ("cat2.png",)},
    {"role": "user", "content": "What's the difference between these two images?"},
]
```
The return type of your chat function does not change when setting multimodal=True (i.e. in the simplest case, you should still return a string value). We discuss more complex cases, e.g. returning files below.
If you are customizing a multimodal chat interface, you should pass in an instance of gr.MultimodalTextbox to the textbox parameter. You can customize the MultimodalTextbox further by passing in the sources parameter, which is a list of sources to enable. Here's an example that illustrates how to set up and customize a multimodal chat interface:
```py
import gradio as gr

def count_images(message, history):
    num_images = len(message["files"])
    total_images = 0
    for message in history:
        if isinstance(message["content"], tuple):
            total_images += 1
    return f"You just uploaded {num_images} images, total uploaded: {total_images+num_images}"

demo = gr.ChatInterface(
    fn=count_images,
    type="messages",
    examples=[
        {"text": "No files", "files": []}
    ],
    multimodal=True,
    textbox=gr.MultimodalTextbox(file_count="multiple", file_types=["image"], sources=["upload", "microphone"])
)

demo.launch()
```
Additional Inputs
You may want to add additional inputs to your chat function and expose them to your users through the chat UI. For example, you could add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. The gr.ChatInterface class supports an additional_inputs parameter which can be used to add additional input components.
The additional_inputs parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. "textbox" instead of gr.Textbox()). If you pass in component instances, and they have not already been rendered, then the components will appear underneath the chatbot within a gr.Accordion().
Here's a complete example:
```py
import gradio as gr
import time

def echo(message, history, system_prompt, tokens):
    response = f"System prompt: {system_prompt}\n Message: {message}."
    for i in range(min(len(response), int(tokens))):
        time.sleep(0.05)
        yield response[: i + 1]

demo = gr.ChatInterface(
    echo,
    type="messages",
    additional_inputs=[
        gr.Textbox("You are helpful AI.", label="System Prompt"),
        gr.Slider(10, 100),
    ],
)

demo.launch()
```
If the components you pass into the additional_inputs have already been rendered in a parent gr.Blocks(), then they will not be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the gr.Textbox() on top of the Chatbot UI, while keeping the slider underneath.
```py
import gradio as gr
import time

def echo(message, history, system_prompt, tokens):
    response = f"System prompt: {system_prompt}\n Message: {message}."
    for i in range(min(len(response), int(tokens))):
        time.sleep(0.05)
        yield response[: i+1]

with gr.Blocks() as demo:
    system_prompt = gr.Textbox("You are helpful AI.", label="System Prompt")
    slider = gr.Slider(10, 100, render=False)
    gr.ChatInterface(
        echo, additional_inputs=[system_prompt, slider], type="messages"
    )

demo.launch()
```
Examples with additional inputs
You can also add example values for your additional inputs. Pass in a list of lists to the examples parameter, where each inner list represents one sample, and each inner list should be 1 + len(additional_inputs) long. The first element in the inner list should be the example value for the chat message, and each subsequent element should be an example value for one of the additional inputs, in order. When additional inputs are provided, examples are rendered in a table underneath the chat interface.
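A minimal sketch extending the echo example above, assuming each inner list is [chat message, system prompt value, slider value]:

```py
import time
import gradio as gr

def echo(message, history, system_prompt, tokens):
    response = f"System prompt: {system_prompt}\n Message: {message}."
    for i in range(min(len(response), int(tokens))):
        time.sleep(0.05)
        yield response[: i + 1]

demo = gr.ChatInterface(
    echo,
    type="messages",
    additional_inputs=[
        gr.Textbox("You are helpful AI.", label="System Prompt"),
        gr.Slider(10, 100),
    ],
    # Each inner list: [chat message, system prompt value, slider value]
    examples=[
        ["Hello", "You are helpful AI.", 50],
        ["Explain recursion", "You are a patient teacher.", 100],
    ],
)

demo.launch()
```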
If you need to create something even more custom, then it's best to construct the chatbot UI using the low-level gr.Blocks() API. We have a dedicated guide for that here.
Additional Outputs
In the same way that you can accept additional inputs into your chat function, you can also return additional outputs. Simply pass in a list of components to the additional_outputs parameter in gr.ChatInterface and return additional values for each component from your chat function. Here's an example that extracts code and outputs it into a separate gr.Code component:
```py
import gradio as gr

python_code = """
def fib(n):
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)
"""

js_code = """
function fib(n) {
    if (n <= 0) return 0;
    if (n === 1) return 1;
    return fib(n - 1) + fib(n - 2);
}
"""

def chat(message, history):
    if "python" in message.lower():
        return "Type Python or JavaScript to see the code.", gr.Code(language="python", value=python_code)
    elif "javascript" in message.lower():
        return "Type Python or JavaScript to see the code.", gr.Code(language="javascript", value=js_code)
    else:
        return "Please ask about Python or JavaScript.", None

with gr.Blocks() as demo:
    code = gr.Code(render=False)
    with gr.Row():
        with gr.Column():
            gr.Markdown("<center><h1>Write Python or JavaScript</h1></center>")
            gr.ChatInterface(
                chat,
                examples=["Python", "JavaScript"],
                additional_outputs=[code],
                type="messages"
            )
        with gr.Column():
            gr.Markdown("<center><h1>Code Artifacts</h1></center>")
            code.render()

demo.launch()
```
Note: unlike the case of additional inputs, the components passed in additional_outputs must be already defined in your gr.Blocks context -- they are not rendered automatically. If you need to render them after your gr.ChatInterface, you can set render=False when they are first defined and then .render() them in the appropriate section of your gr.Blocks() as we do in the example above.
Returning Complex Responses
We mentioned earlier that in the simplest case, your chat function should return a str response, which will be rendered as Markdown in the chatbot. However, you can also return more complex responses as we discuss below:
Returning files or Gradio components
Currently, the following Gradio components can be displayed inside the chat interface:
gr.Image
gr.Plot
gr.Audio
gr.HTML
gr.Video
gr.Gallery
gr.File
Simply return one of these components from your function to use it with gr.ChatInterface. Here's an example that returns an audio file:
```py
import gradio as gr

def music(message, history):
    if message.strip():
        return gr.Audio("https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav")
    else:
        return "Please provide the name of an artist"

gr.ChatInterface(
    music,
    type="messages",
    textbox=gr.Textbox(placeholder="Which artist's music do you want to listen to?", scale=7),
).launch()
```
Similarly, you could return image files with gr.Image, video files with gr.Video, or arbitrary files with the gr.File component.
Returning Multiple Messages
You can return multiple assistant messages from your chat function simply by returning a list of messages, each of which is a valid chat type. This lets you, for example, send a message along with files, as in the following example:
```py
import gradio as gr

def echo_multimodal(message, history):
    response = []
    response.append("You wrote: '" + message["text"] + "' and uploaded:")
    if message.get("files"):
        for file in message["files"]:
            response.append(gr.File(value=file))
    return response

demo = gr.ChatInterface(
    echo_multimodal,
    type="messages",
    multimodal=True,
    textbox=gr.MultimodalTextbox(file_count="multiple"),
)

demo.launch()
```
Displaying intermediate thoughts or tool usage
The gr.ChatInterface class supports displaying intermediate thoughts or tool usage directly in the chatbot.
To do this, you will need to return a gr.ChatMessage object from your chat function. Here is the schema of the gr.ChatMessage data class as well as two internal typed dictionaries:
```py
@dataclass
class ChatMessage:
    content: str | Component
    metadata: MetadataDict = None
    options: list[OptionDict] = None

class MetadataDict(TypedDict):
    title: NotRequired[str]
    id: NotRequired[int | str]
    parent_id: NotRequired[int | str]
    log: NotRequired[str]
    duration: NotRequired[float]
    status: NotRequired[Literal["pending", "done"]]

class OptionDict(TypedDict):
    label: NotRequired[str]
    value: str
```
As you can see, the gr.ChatMessage dataclass is similar to the openai-style message format, e.g. it has a "content" key that refers to the chat message content. But it also includes a "metadata" key whose value is a dictionary. If this dictionary includes a "title" key, the resulting message is displayed as an intermediate thought with the title being displayed on top of the thought. Here's an example showing the usage:
```py
import gradio as gr
from gradio import ChatMessage
import time

sleep_time = 0.5

def simulate_thinking_chat(message, history):
    start_time = time.time()
    response = ChatMessage(
        content="",
        metadata={"title": "_Thinking_ step-by-step", "id": 0, "status": "pending"}
    )
    yield response

    thoughts = [
        "First, I need to understand the core aspects of the query...",
        "Now, considering the broader context and implications...",
        "Analyzing potential approaches to formulate a comprehensive answer...",
        "Finally, structuring the response for clarity and completeness..."
    ]

    accumulated_thoughts = ""
    for thought in thoughts:
        time.sleep(sleep_time)
        accumulated_thoughts += f"- {thought}\n\n"
        response.content = accumulated_thoughts.strip()
        yield response

    response.metadata["status"] = "done"
    response.metadata["duration"] = time.time() - start_time
    yield response

    response = [
        response,
        ChatMessage(
            content="Based on my thoughts and analysis above, my response is: This dummy repro shows how thoughts of a thinking LLM can be progressively shown before providing its final answer."
        )
    ]
    yield response

demo = gr.ChatInterface(
    simulate_thinking_chat,
    title="Thinking LLM Chat Interface 🤔",
    type="messages",
)

demo.launch()
```
You can even show nested thoughts, which is useful for agent demos in which one tool may call other tools. To display nested thoughts, include "id" and "parent_id" keys in the "metadata" dictionary. Read our dedicated guide on displaying intermediate thoughts and tool usage for more realistic examples.
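A minimal sketch of nested thoughts, assuming only the metadata schema shown above: the child thought references its parent via parent_id.

```py
import gradio as gr
from gradio import ChatMessage

def nested_tools_chat(message, history):
    # Return a list of messages: a parent thought, a nested child thought,
    # and the final answer.
    return [
        ChatMessage(
            content="Planning which tool to call...",
            metadata={"title": "Agent plan", "id": 1, "status": "done"},
        ),
        ChatMessage(
            content="Called the search tool with the user query.",
            metadata={"title": "Search tool", "id": 2, "parent_id": 1, "status": "done"},
        ),
        ChatMessage(content="Here is the final answer based on the tool results."),
    ]

gr.ChatInterface(nested_tools_chat, type="messages").launch()
```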
Providing preset responses
When returning an assistant message, you may want to provide preset options that a user can choose in response. To do this, you will again return a gr.ChatMessage instance from your chat function. This time, make sure to set the options key specifying the preset responses.
As shown in the schema for gr.ChatMessage above, the value corresponding to the options key should be a list of dictionaries, each with a value (a string that is the value that should be sent to the chat function when this response is clicked) and an optional label (if provided, is the text displayed as the preset response instead of the value).
This example illustrates how to use preset responses:
```py
import gradio as gr
import random

example_code = """
Here's an example Python lambda function:

lambda x: x + {}

Is this correct?
"""

def chat(message, history):
    if message == "Yes, that's correct.":
        return "Great!"
    else:
        return gr.ChatMessage(
            content=example_code.format(random.randint(1, 100)),
            options=[
                {"value": "Yes, that's correct.", "label": "Yes"},
                {"value": "No"}
            ]
        )

demo = gr.ChatInterface(
    chat,
    type="messages",
    examples=["Write an example Python lambda function."]
)

demo.launch()
```
Modifying the Chatbot Value Directly
You may wish to modify the value of the chatbot with your own events, other than those prebuilt in the gr.ChatInterface. For example, you could create a dropdown that prefills the chat history with certain conversations or add a separate button to clear the conversation history. The gr.ChatInterface supports these events, but you need to use the gr.ChatInterface.chatbot_value as the input or output component in such events. In this example, we use a gr.Radio component to prefill the chatbot with certain conversations:
```py
import gradio as gr
import random

def prefill_chatbot(choice):
    if choice == "Greeting":
        return [
            {"role": "user", "content": "Hi there!"},
            {"role": "assistant", "content": "Hello! How can I assist you today?"}
        ]
    elif choice == "Complaint":
        return [
            {"role": "user", "content": "I'm not happy with the service."},
            {"role": "assistant", "content": "I'm sorry to hear that. Can you please tell me more about the issue?"}
        ]
    else:
        return []

def random_response(message, history):
    return random.choice(["Yes", "No"])

with gr.Blocks() as demo:
    radio = gr.Radio(["Greeting", "Complaint", "Blank"])
    chat = gr.ChatInterface(random_response, type="messages")
    radio.change(prefill_chatbot, radio, chat.chatbot_value)

demo.launch()
```
Using Your Chatbot via API
Once you've built your Gradio chat interface and are hosting it on Hugging Face Spaces or somewhere else, then you can query it with a simple API at the /chat endpoint. The endpoint just expects the user's message and will return the response, internally keeping track of the message history.
To use the endpoint, you should use either the Gradio Python Client or the Gradio JS client (see the client example after this list). Or, you can deploy your Chat Interface to other platforms, such as a:
Discord bot [tutorial]
Slack bot [tutorial]
Website widget [tutorial]
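For example, with the Gradio Python Client (pip install gradio_client), querying the /chat endpoint might look like this; the Space name below is a placeholder:

```py
from gradio_client import Client

# Connect to a deployed chat interface (Space name or full URL is a placeholder)
client = Client("username/my-chat-space")

# Send a message to the built-in /chat endpoint and print the response
result = client.predict("Hello!", api_name="/chat")
print(result)
```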
Chat History
You can enable persistent chat history for your ChatInterface, allowing users to maintain multiple conversations and easily switch between them. When enabled, conversations are stored locally and privately in the user's browser using local storage. So if you deploy a ChatInterface e.g. on Hugging Face Spaces, each user will have their own separate chat history that won't interfere with other users' conversations. This means multiple users can interact with the same ChatInterface simultaneously while maintaining their own private conversation histories.
To enable this feature, simply set gr.ChatInterface(save_history=True) (as shown in the example in the next section). Users will then see their previous conversations in a side panel and can continue any previous chat or start a new one.
Collecting User Feedback
To gather feedback on your chat model, set gr.ChatInterface(flagging_mode="manual") and users will be able to thumbs-up or thumbs-down assistant responses. Each flagged response, along with the entire chat history, will get saved in a CSV file in the app working directory (this can be configured via the flagging_dir parameter).
You can also change the feedback options via flagging_options parameter. The default options are "Like" and "Dislike", which appear as the thumbs-up and thumbs-down icons. Any other options appear under a dedicated flag icon. This example shows a ChatInterface that has both chat history (mentioned in the previous section) and user feedback enabled:
```py
import time
import gradio as gr

def slow_echo(message, history):
    for i in range(len(message)):
        time.sleep(0.05)
        yield "You typed: " + message[: i + 1]

demo = gr.ChatInterface(
    slow_echo,
    type="messages",
    flagging_mode="manual",
    flagging_options=["Like", "Spam", "Inappropriate", "Other"],
    save_history=True,
)

demo.launch()
```
Note that in this example, we set several flagging options: "Like", "Spam", "Inappropriate", "Other". Because the case-sensitive string "Like" is one of the flagging options, the user will see a thumbs-up icon next to each assistant message. The three other flagging options will appear in a dropdown under the flag icon.
What's Next?
Now that you've learned about the gr.ChatInterface class and how it can be used to create chatbot UIs quickly, we recommend reading one of the following:
Our next Guide shows examples of how to use gr.ChatInterface with popular LLM libraries.
If you'd like to build very custom chat applications from scratch, you can build them using the low-level Blocks API, as discussed in this Guide.
Once you've deployed your Gradio Chat Interface, it's easy to use it in other applications because of the built-in API. Here's a tutorial on how to deploy a Gradio chat interface as a Discord bot.
<end_of_guide>
[follow_up]:
No, make sure to keep the same UI as before in two columns. Place Knowledge Base and Local Map above the Gradio ChatInterface input and make them checkboxes. When they are checked, we will make additional async requests to the corresponding APIs and display the results one by one as separate messages inside the chat interface. On the right side, leave Search Images, Videos, and Links.
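One possible arrangement of this layout, sketched with Gradio Blocks; the checkbox wiring and the echo function are placeholders, not the final app:

```py
# Sketch: checkboxes above the chat input on the left, media search on the right.
import gradio as gr

def chat_fn(message, history, use_map, use_knowledge):
    # The real app would also fire async follow-up calls when the
    # checkboxes are ticked and append their results as extra messages.
    return f"Echo: {message} (map={use_map}, knowledge={use_knowledge})"

with gr.Blocks(fill_height=True) as demo:
    with gr.Row():
        with gr.Column(scale=3):
            use_map = gr.Checkbox(label="Local Map Search")
            use_knowledge = gr.Checkbox(label="Knowledge Base")
            gr.ChatInterface(
                fn=chat_fn,
                type="messages",
                additional_inputs=[use_map, use_knowledge],
            )
        with gr.Column(scale=1):
            gr.Button("Search Images")
            gr.Button("Search Videos")
            gr.Button("Search Links")

demo.launch()
```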
[follow_up]:
<example_return_response>
If you're planning to visit Paris, there are several options to consider for a 2-week trip.
You can start by exploring the surrounding areas of Paris, such as Versailles and Giverny, Monet's Garden, which are easily accessible by public transportation.
Another option is to visit the Loire Valley, which is known for its beautiful chateaux, such as Chambord, Chenonceau, and Tours. However, renting a car from Paris might be challenging without a driver's license, and the cost of an automatic rental might be out of your budget.
Normandy is another region worth considering, with its D-day beaches, Bayeux tapestry, and Mont St Michel. However, this region is also best explored by car, and past reviews of local tours have been disappointing.
Alsace is a beautiful region with the city of Strasbourg, which is highly recommended. However, it might be a bit out of the way, and a 1-2 day trip might not be enough to fully experience the region.
Provence is another option, with its charming cities like Montpellier, Marseille, Nice, and St Tropex. However, this region is also best explored by car, and it might be more enjoyable if you're on a honeymoon or have more time to stay around the sea.
To get a better idea of the best combination of cities and regions to visit with Paris, you can check out the following resources:
https://www.fodors.com/community/europe/pls-suggest-combination-of-paris-and-which-other-cities-regions-for-2-week-trip-592935/
https://www.fodors.com/community/europe/pls-suggest-combination-of-paris-and-which-other-cities-regions-for-2-week-trip-592935/#post15592935
https://www.fodors.com/community/europe/pls-suggest-combination-of-paris-and-which-other-cities-regions-for-2-week-trip-592935/#post15592935
</example_return_response>
[follow_up]:
1. Update and refactor the codebase of the following Gradio app.
2. Use helpers from `helpers.py` (e.g., embed_video, embed_image, format_links, embed_google_map, format_knowledge).
3. Implement follow-up questions to be displayed after each conversation, below the input field. On clicking any of them, the question should be added to the ChatInterface conversation (see the sketch after this list). 3.1. Send a request to `chat_function`.
4. List the follow-up questions instantly after getting a response from `chat_function` (i.e., `client.complete_chat(message)`).
5. If there are any issues on clicking local map, we should send a request to `get_places`.
6. We are not pushing you to append the `local map` and `knowledge base` responses to the ChatInterface. You are free to display them separately, as we display `Search Images` and `Search Videos` in separate card interfaces.
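Before the refactor, here is a sketch of how the follow-up questions behaviour (item 3) could be wired, assuming the BagoodexClient and chat_function from app.py below; the Radio component and its empty choices are placeholders:

```py
import gradio as gr
from bagoodex_client import BagoodexClient  # same client module app.py imports

client = BagoodexClient()

def chat_function(message, history, followup_id):
    followup_id_new, answer = client.complete_chat(message)
    return answer, followup_id_new

def ask_followup(question, chatbot_value):
    # Re-run the chat request for the clicked follow-up question and
    # append both the question and the answer to the conversation.
    followup_id_new, answer = client.complete_chat(question)
    new_messages = [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]
    return chatbot_value + new_messages, followup_id_new

with gr.Blocks() as demo:
    followup_state = gr.State(None)
    chat = gr.ChatInterface(
        fn=chat_function,
        type="messages",
        additional_inputs=[followup_state],
        additional_outputs=[followup_state],
    )
    # Suggested questions below the input; in the real app the choices
    # would come from client.base_qna(..., SYSTEM_PROMPT_FOLLOWUP).
    followups = gr.Radio(label="Follow-up questions", choices=[])
    followups.change(
        ask_followup,
        inputs=[followups, chat.chatbot_value],
        outputs=[chat.chatbot_value, followup_state],
    )

demo.launch()
```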
####### Gradio App #######
[app.py]:
<code_snippet>
import os
import requests
import gradio as gr
from bagoodex_client import BagoodexClient
from r_types import ChatMessage
from prompts import SYSTEM_PROMPT_FOLLOWUP, SYSTEM_PROMPT_MAP, SYSTEM_PROMPT_BASE
from helpers import format_followup_questions
client = BagoodexClient()
def format_knowledge(result):
title = result.get('title', 'Unknown')
type_ = result.get('type', '')
born = result.get('born', '')
died = result.get('died', '')
content = f"""
**{title}**
Type: {type_}
Born: {born}
Died: {died}
"""
return gr.Markdown(content)
def format_images(result):
urls = [item.get("original", "") for item in result]
return urls
# Helper formatting functions
def format_videos(result):
return [vid.get('link', '') for vid in result]
# Advanced search functions
def perform_video_search(followup_id):
if not followup_id:
return []
result = client.get_videos(followup_id)
return format_videos(result)
def format_links(result):
links_md = "**Links:**\n"
for url in result:
title = url.rstrip('/').split('/')[-1]
links_md += f"- [{title}]({url})\n"
return gr.Markdown(links_md)
# Define the chat function
def chat_function(message, history, followup_id):
followup_id_new, answer = client.complete_chat(message)
return answer, followup_id_new
def format_local_map(result):
link = result.get('link', '')
image_url = result.get('image', '')
html = f"""
<div>
<strong>Local Map:</strong><br>
<a href='{link}' target='_blank'>View on Google Maps</a><br>
<img src='{image_url}' style='width:100%;'/>
</div>
"""
return gr.HTML(html)
def append_local_map(followup_id, chatbot_value):
if not followup_id:
return chatbot_value
result = client.get_local_map(followup_id)
formatted = format_local_map(result)
new_message = {"role": "assistant", "content": formatted}
return chatbot_value + [new_message]
def append_knowledge(followup_id, chatbot_value):
if not followup_id:
return chatbot_value
result = client.get_knowledge(followup_id)
formatted = format_knowledge(result)
new_message = {"role": "assistant", "content": formatted}
return chatbot_value + [new_message]
# Define advanced search functions
def perform_image_search(followup_id):
if not followup_id:
return []
result = client.get_images(followup_id)
urls = format_images(result)
return urls
def perform_links_search(followup_id):
if not followup_id:
return gr.Markdown("No followup ID available.")
result = client.get_links(followup_id)
return format_links(result)
# Custom CSS
css = """
#chatbot {
height: 100%;
}
"""
def list_followup_questions(followup_id):
if not followup_id:
return gr.Markdown("No followup ID available.")
result = client.base_qna(messages=chat, system_prompt=SYSTEM_PROMPT_FOLLOWUP)
return format_followup_questions(result)
def get_places(followup_id):
if not followup_id:
return gr.Markdown("No followup ID available.")
result = client.base_qna(messages=chat, system_prompt=SYSTEM_PROMPT_MAP)
return format_places(result)
# Build UI
with gr.Blocks(css=css, fill_height=True) as demo:
followup_state = gr.State(None)
with gr.Row():
with gr.Column(scale=3):
with gr.Row():
btn_local_map = gr.Button("Local Map Search", variant="secondary", size="sm")
btn_knowledge = gr.Button("Knowledge Base", variant="secondary", size="sm")
chat = gr.ChatInterface(
fn=chat_function,
type="messages",
additional_inputs=[followup_state],
additional_outputs=[followup_state],
)
# Wire up the buttons to append to chat history
btn_local_map.click(
append_local_map,
inputs=[followup_state, chat.chatbot],
outputs=chat.chatbot
)
btn_knowledge.click(
append_knowledge,
inputs=[followup_state, chat.chatbot],
outputs=chat.chatbot
)
with gr.Column(scale=1):
gr.Markdown("### Advanced Search Options")
with gr.Column(variant="panel"):
btn_images = gr.Button("Search Images")
btn_videos = gr.Button("Search Videos")
btn_links = gr.Button("Search Links")
gallery_output = gr.Gallery(label="Image Results", columns=2)
video_output = gr.Gallery(label="Video Results", columns=1, visible=True)
links_output = gr.Markdown(label="Links Results")
btn_images.click(
perform_image_search,
inputs=[followup_state],
outputs=[gallery_output]
)
btn_videos.click(
perform_video_search,
inputs=[followup_state],
outputs=[video_output]
)
btn_links.click(
perform_links_search,
inputs=[followup_state],
outputs=[links_output]
)
demo.launch()
</code_snippet>
[helpers.py]:
<code_snippet>
# old code helpers as it was earlier.
# embed_video,
# embed_image,
# format_links,
# embed_google_map,
# format_knowledge
# newly added. Note: fix it (as you did earlier with other helpers) if `format_followup_questions` has any issues.
def format_followup_questions(questions: List[str]) -> str:
"""
Given a list of follow-up questions, return a Markdown string
with each question as a bulleted list item.
"""
if not questions:
return "No follow-up questions provided."
questions_md = "### Follow-up Questions\n\n"
for question in questions:
questions_md += f"- {question}\n"
return questions_md
</code_snippet>
[follow_up]:
Implement Parsing:
Make sure to extract the data and parse it properly:
----------
def format_followup_questions(questions) -> str:
"""
questions are exactly same as this:
json
{
"followup_question": ["What materials are needed to make a slingshot?", "How to make a slingshot more powerful?"]
}
"""
if not questions:
return "No follow-up questions provided."
questions_md = "### Follow-up Questions\n\n"
for question in questions:
questions_md += f"- {question}\n"
return questions_md
[follow_up]:
I was too lazy to include them here, but the response is wrapped in a ```json fence.
Make sure to remove the "json" fence markers before parsing.
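A small sketch of that parsing step, stripping the fence before `json.loads`. The final `helpers.py` further below ends up doing essentially this via `cleanup_raw_json`:

```python
import json
import re

def format_followup_questions(raw: str) -> str:
    """Parse model output that may arrive wrapped in ```json ... ``` fences."""
    if not raw:
        return "No follow-up questions provided."
    clean = re.sub(r"```json|```", "", raw).strip()  # drop the fence markers
    try:
        questions = json.loads(clean).get("followup_question", [])
    except json.JSONDecodeError:
        return "Error: Failed to parse follow-up questions."
    if not questions:
        return "No follow-up questions provided."
    return "### Follow-up Questions\n\n" + "".join(f"- {q}\n" for q in questions)
```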
[follow_up]:
No need to display the follow-up questions twice. Remove the second one; the radio buttons are enough.
---------------------
# Below the chat input, display follow-up questions and let user select one.
followup_radio = gr.Radio(
choices=[], label="Follow-up Questions (select one and click Send Follow-up)"
)
btn_send_followup = gr.Button("Send Follow-up")
# When a follow-up question is sent, update the chat conversation, followup state, and follow-up list.
btn_send_followup.click(
fn=handle_followup_click,
inputs=[followup_radio, followup_state, chat_history_state],
outputs=[chat.chatbot, followup_state, followup_md_state]
)
# Also display the follow-up questions markdown (for reference) in a Markdown component.
followup_markdown = gr.Markdown(label="Follow-up Questions", value="", visible=True)
# When the followup_md_state updates, also update the radio choices.
def update_followup_radio(md_text):
# Assume the helper output is a Markdown string with list items.
# We split the text to extract the question lines.
lines = md_text.splitlines()
questions = []
for line in lines:
if line.startswith("- "):
questions.append(line[2:])
return gr.update(choices=questions, value=None), md_text
followup_md_state.change(
fn=update_followup_radio,
inputs=[followup_md_state],
outputs=[followup_radio, followup_markdown]
)
[follow_up]:
----
[Tutorial]:
Okay. I just built an app. It's "Bagoodex Web Search", an open-source implementation of a Perplexity-like app.
Next: I will provide all the files, implementations, and information that I used while building the app. You need to write a step-by-step tutorial. All the file names exactly match the text inside square brackets. For example: [app.py].
[bagoodex_client.py]
<|START|>
```
import os
import requests
from openai import OpenAI
from dotenv import load_dotenv
from r_types import ChatMessage
from prompts import SYSTEM_PROMPT_BASE, SYSTEM_PROMPT_MAP
from typing import List
load_dotenv()
API_KEY = os.getenv("AIML_API_KEY")
API_URL = "https://api.aimlapi.com"
class BagoodexClient:
def __init__(self, api_key=API_KEY, api_url=API_URL):
self.api_key = api_key
self.api_url = api_url
self.client = OpenAI(base_url=self.api_url, api_key=self.api_key)
def complete_chat(self, query):
"""
Calls the standard chat completion endpoint using the provided query.
Returns the generated followup ID and the text response.
"""
response = self.client.chat.completions.create(
model="bagoodex/bagoodex-search-v1",
messages=[
ChatMessage(role="user", content=SYSTEM_PROMPT_BASE),
ChatMessage(role="user", content=query)
],
)
followup_id = response.id # the unique ID for follow-up searches
answer = response.choices[0].message.content
return followup_id, answer
def base_qna(self, messages: List[ChatMessage], system_prompt=SYSTEM_PROMPT_BASE):
response = self.client.chat.completions.create(
model="gpt-4o",
messages=[
ChatMessage(role="user", content=system_prompt),
*messages
],
)
return response.choices[0].message.content
def get_links(self, followup_id):
headers = {"Authorization": f"Bearer {self.api_key}"}
params = {"followup_id": followup_id}
response = requests.get(
f"{self.api_url}/v1/bagoodex/links", headers=headers, params=params
)
return response.json()
def get_images(self, followup_id):
headers = {"Authorization": f"Bearer {self.api_key}"}
params = {"followup_id": followup_id}
response = requests.get(
f"{self.api_url}/v1/bagoodex/images", headers=headers, params=params
)
return response.json()
def get_videos(self, followup_id):
headers = {"Authorization": f"Bearer {self.api_key}"}
params = {"followup_id": followup_id}
response = requests.get(
f"{self.api_url}/v1/bagoodex/videos", headers=headers, params=params
)
return response.json()
def get_local_map(self, followup_id):
headers = {"Authorization": f"Bearer {self.api_key}"}
params = {"followup_id": followup_id}
response = requests.get(
f"{self.api_url}/v1/bagoodex/local-map", headers=headers, params=params
)
return response.json()
def get_knowledge(self, followup_id):
headers = {"Authorization": f"Bearer {self.api_key}"}
params = {"followup_id": followup_id}
response = requests.get(
f"{self.api_url}/v1/bagoodex/knowledge", headers=headers, params=params
)
return response.json()
```
<|END|>
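For reference, a quick usage sketch of this client. It assumes a valid AIML_API_KEY in `.env`; the query is one of the sample queries used elsewhere in this log, and the printed shapes match how the app code below consumes the responses:

```python
from bagoodex_client import BagoodexClient

client = BagoodexClient()

# One normal chat turn; the returned ID unlocks the follow-up search endpoints.
followup_id, answer = client.complete_chat("how to make a slingshot?")
print(answer)

print(client.get_links(followup_id))    # list of source URLs
print(client.get_images(followup_id))   # list of image objects with an "original" URL
print(client.get_videos(followup_id))   # list of video objects with "link"/"title"
```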
----
[app.py]
<|START|>
```
import os
import gradio as gr
from bagoodex_client import BagoodexClient
from r_types import ChatMessage
from prompts import (
SYSTEM_PROMPT_FOLLOWUP,
SYSTEM_PROMPT_MAP,
SYSTEM_PROMPT_BASE,
SYSTEM_PROMPT_KNOWLEDGE_BASE
)
from helpers import (
embed_video,
format_links,
embed_google_map,
format_knowledge,
format_followup_questions
)
client = BagoodexClient()
# ----------------------------
# Chat & Follow-up Functions
# ----------------------------
def chat_function(message, history, followup_state, chat_history_state):
"""
Process a new user message.
Appends the message and response to the conversation,
and retrieves follow-up questions.
"""
# complete_chat returns a new followup id and answer
followup_id_new, answer = client.complete_chat(message)
# Update conversation history (if history is None, use an empty list)
if history is None:
history = []
updated_history = history + [ChatMessage({"role": "user", "content": message}),
ChatMessage({"role": "assistant", "content": answer})]
# Retrieve follow-up questions using the updated conversation
followup_questions_raw = client.base_qna(
messages=updated_history, system_prompt=SYSTEM_PROMPT_FOLLOWUP
)
# Format them using the helper
followup_md = format_followup_questions(followup_questions_raw)
return answer, followup_id_new, updated_history, followup_md
def handle_followup_click(question, followup_state, chat_history_state):
"""
When a follow-up question is clicked, send it as a new message.
"""
if not question:
return chat_history_state, followup_state, ""
# Process the follow-up question via complete_chat
followup_id_new, answer = client.complete_chat(question)
updated_history = chat_history_state + [ChatMessage({"role": "user", "content": question}),
ChatMessage({"role": "assistant", "content": answer})]
# Get new follow-up questions
followup_questions_raw = client.base_qna(
messages=updated_history, system_prompt=SYSTEM_PROMPT_FOLLOWUP
)
followup_md = format_followup_questions(followup_questions_raw)
return updated_history, followup_id_new, followup_md
def handle_local_map_click(followup_state, chat_history_state):
"""
On local map click, try to get a local map.
If issues occur, fall back to using the SYSTEM_PROMPT_MAP.
"""
if not followup_state:
return chat_history_state
try:
result = client.get_local_map(followup_state)
        if result:
            map_url = result.get('link', '')
            # Use helper to produce an embedded map iframe
            html = embed_google_map(map_url)
        else:
            # Fall back: use the base_qna call with SYSTEM_PROMPT_MAP
            result = client.base_qna(
                messages=chat_history_state, system_prompt=SYSTEM_PROMPT_MAP
            )
            # Assume result contains a 'link' field
            html = embed_google_map(result.get('link', ''))
new_message = ChatMessage({"role": "assistant", "content": html})
return chat_history_state + [new_message]
except Exception:
return chat_history_state
def handle_knowledge_click(followup_state, chat_history_state):
"""
On knowledge base click, fetch and format knowledge content.
"""
if not followup_state:
return chat_history_state
try:
print('trying to get knowledge')
result = client.get_knowledge(followup_state)
knowledge_md = format_knowledge(result)
        if not knowledge_md:
print('falling back to base_qna')
# Fall back: use the base_qna call with SYSTEM_PROMPT_KNOWLEDGE_BASE
result = client.base_qna(
messages=chat_history_state, system_prompt=SYSTEM_PROMPT_KNOWLEDGE_BASE
)
knowledge_md = format_knowledge(result)
new_message = ChatMessage({"role": "assistant", "content": knowledge_md})
return chat_history_state + [new_message]
except Exception:
return chat_history_state
# ----------------------------
# Advanced Search Functions
# ----------------------------
def perform_image_search(followup_state):
if not followup_state:
return []
result = client.get_images(followup_state)
# For images we simply return a list of original URLs
return [item.get("original", "") for item in result]
def perform_video_search(followup_state):
if not followup_state:
return "<p>No followup ID available.</p>"
result = client.get_videos(followup_state)
# Use the helper to produce the embed iframes (supports multiple videos)
return embed_video(result)
def perform_links_search(followup_state):
if not followup_state:
return gr.Markdown("No followup ID available.")
result = client.get_links(followup_state)
return format_links(result)
# ----------------------------
# UI Build
# ----------------------------
css = """
#chatbot {
height: 100%;
}
h1, h2, h3, h4, h5, h6 {
text-align: center;
display: block;
}
"""
# like chatgpt, but with fewer features. built by @theo and @r_marked
# default query: how to make a slingshot?
# sample query: who created light (e.g., electricity), Tesla or Edison, in short?
with gr.Blocks(css=css, fill_height=True) as demo:
gr.Markdown("""
## like perplexity, but with less features.
#### built by [@abdibrokhim](https://yaps.gg).
""")
# State variables to hold followup ID and conversation history, plus follow-up questions text
followup_state = gr.State(None)
chat_history_state = gr.State([]) # holds conversation history as a list of messages
followup_md_state = gr.State("") # holds follow-up questions as Markdown text
with gr.Row():
with gr.Column(scale=3):
with gr.Row():
btn_local_map = gr.Button("Local Map Search (coming soon...)", variant="secondary", size="sm", interactive=False)
btn_knowledge = gr.Button("Knowledge Base (coming soon...)", variant="secondary", size="sm", interactive=False)
# The ChatInterface now uses additional outputs for both followup_state and conversation history,
# plus follow-up questions Markdown.
chat = gr.ChatInterface(
fn=chat_function,
type="messages",
additional_inputs=[followup_state, chat_history_state],
additional_outputs=[followup_state, chat_history_state, followup_md_state],
)
# Button callbacks to append local map and knowledge base results to chat
btn_local_map.click(
fn=handle_local_map_click,
inputs=[followup_state, chat_history_state],
outputs=chat.chatbot
)
btn_knowledge.click(
fn=handle_knowledge_click,
inputs=[followup_state, chat_history_state],
outputs=chat.chatbot
)
# Radio-based follow-up questions
followup_radio = gr.Radio(
choices=[],
label="Follow-up Questions (select one and click 'Send Follow-up')"
)
btn_send_followup = gr.Button("Send Follow-up")
# When the user clicks "Send Follow-up", the selected question is passed
# to handle_followup_click
btn_send_followup.click(
fn=handle_followup_click,
inputs=[followup_radio, followup_state, chat_history_state],
outputs=[chat.chatbot, followup_state, followup_md_state]
)
# Update the radio choices when followup_md_state changes
def update_followup_radio(md_text):
"""
Parse Markdown lines to extract questions starting with '- '.
"""
lines = md_text.splitlines()
questions = []
for line in lines:
if line.startswith("- "):
questions.append(line[2:])
return gr.update(choices=questions, value=None)
followup_md_state.change(
fn=update_followup_radio,
inputs=[followup_md_state],
outputs=[followup_radio]
)
with gr.Column(scale=1):
gr.Markdown("### Advanced Search Options")
with gr.Column(variant="panel"):
btn_images = gr.Button("Search Images")
btn_videos = gr.Button("Search Videos")
btn_links = gr.Button("Search Links")
gallery_output = gr.Gallery(label="Image Results", columns=2)
video_output = gr.HTML(label="Video Results") # HTML for embedded video iframes
links_output = gr.Markdown(label="Links Results")
btn_images.click(
fn=perform_image_search,
inputs=[followup_state],
outputs=[gallery_output]
)
btn_videos.click(
fn=perform_video_search,
inputs=[followup_state],
outputs=[video_output]
)
btn_links.click(
fn=perform_links_search,
inputs=[followup_state],
outputs=[links_output]
)
demo.launch()
```
<|END|>
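To run this locally, install the packages from [requirements.txt] below, put your AIML_API_KEY and GOOGLE_MAPS_API_KEY into a `.env` file (see the [.env] block at the end), and start `app.py`; `demo.launch()` serves the UI.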
----
[helpers.py]
<|START|>
```
from dotenv import load_dotenv
import os
import gradio as gr
import urllib.parse
import re
from pytube import YouTube
from typing import List, Optional, Dict
from r_types import (
SearchVideosResponse,
SearchImagesResponse,
SearchLinksResponse,
LocalMapResponse,
KnowledgeBaseResponse
)
import json
def get_video_id(url: str) -> Optional[str]:
"""
Safely retrieve the YouTube video_id from a given URL using pytube.
Returns None if the URL is invalid or an error occurs.
"""
if not url:
return None
try:
yt = YouTube(url)
return yt.video_id
except Exception:
# If the URL is invalid or pytube fails, return None
return None
def embed_video(videos: List[SearchVideosResponse]) -> str:
"""
Given a list of video data (with 'link' and 'title'),
returns an HTML string of embedded YouTube iframes.
"""
if not videos:
return "<p>No videos found.</p>"
# Collect each iframe snippet
iframes = []
for video in videos:
url = video.get("link", "")
video_id = get_video_id(url)
if not video_id:
# Skip invalid or non-parsable links
continue
title = video.get("title", "").replace('"', '\\"') # Escape quotes
iframe = f"""
<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/{video_id}"
title="{title}"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen>
</iframe>
"""
iframes.append(iframe)
# If no valid videos after processing, return a fallback message
if not iframes:
return "<p>No valid YouTube videos found.</p>"
# Join all iframes into one HTML string
return "\n".join(iframes)
def get_video_thumbnail(videos: List[SearchVideosResponse]) -> str:
pass
def format_links(links) -> str:
"""
    Convert a list of URL strings
    into a bulleted Markdown string with clickable links.
"""
if not links:
return "No links found."
links_md = "**Links:**\n"
for url in links:
title = url.rstrip('/').split('/')[-1]
links_md += f"- [{title}]({url})\n"
return links_md
def embed_google_map(map_url: str) -> str:
"""
Extracts a textual location from the given Google Maps URL
and returns an embedded Google Map iframe for that location.
    Requires a valid GOOGLE_MAPS_API_KEY in the environment (loaded from .env).
"""
load_dotenv()
GOOGLE_MAPS_API_KEY = os.getenv("GOOGLE_MAPS_API_KEY")
if not map_url:
return "<p>Invalid Google Maps URL.</p>"
# Attempt to extract "San+Francisco,+CA" from the URL
match = re.search(r"/maps/place/([^/]+)", map_url)
if not match:
return "Invalid Google Maps URL. Could not extract location."
location_text = match.group(1)
# Remove query params or additional slashes from the captured group
location_text = re.split(r"[/?]", location_text)[0]
# URL-encode location to avoid issues with special characters
encoded_location = urllib.parse.quote(location_text, safe="")
embed_html = f"""
<iframe
width="600"
height="450"
style="border:0"
loading="lazy"
allowfullscreen
src="https://www.google.com/maps/embed/v1/place?key={GOOGLE_MAPS_API_KEY}&q={encoded_location}">
</iframe>
"""
return embed_html
def format_knowledge(raw_result: str) -> str:
"""
    Given a raw JSON string of knowledge data (e.g., about a person),
    produce a Markdown string summarizing that info.
    """
    if not raw_result:
        return None
# Clean up the raw JSON string
clean_json_str = cleanup_raw_json(raw_result)
print('Knowledge Data: ', clean_json_str)
try:
# Parse the cleaned JSON string
result = json.loads(clean_json_str)
title = result.get("title", "...")
type_ = result.get("type", "...")
born = result.get("born", "...")
died = result.get("died", "...")
content = f"""
**{title}**
Type: {type_}
Born: {born}
Died: {died}
"""
return content
except json.JSONDecodeError:
return "Error: Failed to parse knowledge data."
def format_followup_questions(raw_questions: str) -> str:
"""
Extracts and formats follow-up questions from a raw JSON-like string.
The input string may contain triple backticks (```json ... ```) which need to be removed before parsing.
Expected input format:
```json
{
"followup_question": [
"What materials are needed to make a slingshot?",
"How to make a slingshot more powerful?"
]
}
```
Returns a Markdown-formatted string with the follow-up questions.
"""
if not raw_questions:
return "No follow-up questions available."
# Clean up the raw JSON string
clean_json_str = cleanup_raw_json(raw_questions)
try:
# Parse the cleaned JSON string
questions_dict = json.loads(clean_json_str)
# Ensure the expected key exists
followup_list = questions_dict.get("followup_question", [])
if not isinstance(followup_list, list) or not followup_list:
return "No follow-up questions available."
# Format the questions into Markdown
questions_md = "### Follow-up Questions\n\n"
for question in followup_list:
questions_md += f"- {question}\n"
return questions_md
except json.JSONDecodeError:
return "Error: Failed to parse follow-up questions."
def cleanup_raw_json(raw_json: str) -> str:
"""
Remove triple backticks and 'json' from the beginning and end of a raw JSON string.
"""
return re.sub(r"```json|```", "", raw_json).strip()
```
<|END|>
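A tiny usage sketch of the parsing helpers above; the input strings are illustrative:

```python
from helpers import cleanup_raw_json, format_followup_questions, format_links

raw = '```json\n{"followup_question": ["What materials are needed to make a slingshot?"]}\n```'
print(cleanup_raw_json(raw))           # plain JSON string, fence markers removed
print(format_followup_questions(raw))  # "### Follow-up Questions\n\n- What materials ..."

print(format_links([
    "https://www.fodors.com/community/europe/pls-suggest-combination-of-paris-and-which-other-cities-regions-for-2-week-trip-592935/",
]))                                    # bulleted Markdown list of clickable links
```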
----
[prompts.py]
<|START|>
```
SYSTEM_PROMPT_BASE = """<system_instructions for ai goes here. please skip this. i will complete it myself.>"""
SYSTEM_PROMPT_MAP = """<system_instructions for ai goes here. please skip this. i will complete it myself.>"""
SYSTEM_PROMPT_FOLLOWUP = """<system_instructions for ai goes here. please skip this. i will complete it myself.>"""
SYSTEM_PROMPT_KNOWLEDGE_BASE = """<system_instructions for ai goes here. please skip this. i will complete it myself.>"""
```
<|END|>
----
[r_types.py]
<|START|>
```
```
<|END|>
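The [r_types.py] block above is empty. Purely as a hypothetical reconstruction based on how these names are used in bagoodex_client.py, helpers.py, and app.py (the real file may differ), it could look roughly like this:

```python
# NOTE: hypothetical sketch -- the actual r_types.py was not provided above.
from typing import List, TypedDict

class ChatMessage(TypedDict):
    role: str
    content: str

class SearchVideosResponse(TypedDict, total=False):
    title: str
    link: str

class SearchImagesResponse(TypedDict, total=False):
    original: str

# Links appear to come back as plain URL strings.
SearchLinksResponse = List[str]

class LocalMapResponse(TypedDict, total=False):
    link: str
    image: str

class KnowledgeBaseResponse(TypedDict, total=False):
    title: str
    type: str
    born: str
    died: str
```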
----
[requirements.txt]
<|START|>
```
openai
gradio
python-dotenv
requests
pytube
```
<|END|>
----
[.gitignore]
<|START|>
```
.env
.venv
__pycache__
*.pyc
.DS_Store
```
<|END|>
----
[.env]
<|START|>
```
AIML_API_KEY=...
GOOGLE_MAPS_API_KEY=...
```
<|END|> |