Update app.py
app.py CHANGED

@@ -51,7 +51,7 @@ def invoke(openai_api_key, use_rag, prompt):
     return result["result"]
 
 description = """<strong>Overview:</strong> The app demonstrates how to use a Large Language Model (LLM) with Retrieval Augmented Generation (RAG) on external data
-                 (in this case YouTube
+                 (in this case a YouTube video, but it could be PDFs, URLs, or other structured/unstructured private/public
                  <a href='https://raw.githubusercontent.com/bstraehle/ai-ml-dl/c38b224c196fc984aab6b6cc6bdc666f8f4fbcff/langchain/document-loaders.png'>data sources</a>).\n\n
                  <strong>Instructions:</strong> Enter an OpenAI API key and perform LLM use cases on a <a href='https://www.youtube.com/watch?v=--khbXchTeE'>short video about GPT-4</a>
                  (semantic search, sentiment analysis, summarization, translation, etc.)
@@ -60,7 +60,7 @@ description = """<strong>Overview:</strong> The app demonstrates how to use a La
                  <li>Set "Use RAG" to "True" and submit prompt "what is gpt-4". The LLM <strong>with</strong> RAG knows the answer.</li>
                  <li>Experiment with different prompts, for example "what is gpt-4, answer in german" or "write a haiku about gpt-4".</li>
                  </ul>
-                 In a production system processing external data would be done in a batch process. An idea for a production system would be to perform LLM use cases on the
+                 In a production system, processing external data would be done in a batch process. An idea for a production system would be to perform LLM use cases on the
                  <a href='https://www.youtube.com/playlist?list=PL2yQDdvlhXf_hIzmfHCdbcXj2hS52oP9r'>AWS re:Invent playlist</a>.\n\n
                  <strong>Technology:</strong> <a href='https://www.gradio.app/'>Gradio</a> UI using <a href='https://platform.openai.com/'>OpenAI</a> API via AI-first
                  <a href='https://www.langchain.com/'>LangChain</a> toolkit with <a href='https://openai.com/research/whisper'>Whisper</a> (speech-to-text) and
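For context, the pipeline the description refers to (Whisper transcription of a YouTube video, a vector store, and a retrieval chain behind the `invoke(openai_api_key, use_rag, prompt)` function named in the hunk header) could be wired roughly as below. This is a minimal sketch against the classic LangChain 0.0.x API, not the actual contents of app.py: only the `invoke` signature and the `return result["result"]` line come from the diff, while the video URL, save directory, chunking parameters, and `build_retriever` helper are illustrative assumptions.

```python
# Minimal sketch (assumptions: classic LangChain 0.0.x API; URL, paths, and
# chunk sizes are illustrative, not taken from this commit).
import gradio as gr
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import OpenAIWhisperParser
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

YOUTUBE_URL = "https://www.youtube.com/watch?v=--khbXchTeE"  # the GPT-4 video from the description

def build_retriever(openai_api_key):
    # Download the audio track and transcribe it with Whisper (speech-to-text).
    loader = GenericLoader(YoutubeAudioLoader([YOUTUBE_URL], "docs/youtube"),
                           OpenAIWhisperParser(api_key=openai_api_key))
    docs = loader.load()
    # Split the transcript into overlapping chunks and index them in a vector store.
    splits = RecursiveCharacterTextSplitter(chunk_size=1500,
                                            chunk_overlap=150).split_documents(docs)
    vectordb = Chroma.from_documents(splits, OpenAIEmbeddings(openai_api_key=openai_api_key))
    return vectordb.as_retriever()

def invoke(openai_api_key, use_rag, prompt):
    llm = ChatOpenAI(model_name="gpt-4", openai_api_key=openai_api_key, temperature=0)
    if use_rag:
        # Ground the answer in the transcript; RetrievalQA returns a dict whose
        # "result" key matches the `return result["result"]` context line above.
        chain = RetrievalQA.from_chain_type(llm=llm,
                                            retriever=build_retriever(openai_api_key))
        result = chain({"query": prompt})
        return result["result"]
    # Without RAG, the bare LLM answers from its training data alone.
    return llm.predict(prompt)

demo = gr.Interface(fn=invoke,
                    inputs=[gr.Textbox(label="OpenAI API Key"),
                            gr.Radio([True, False], label="Use RAG"),
                            gr.Textbox(label="Prompt")],
                    outputs=gr.Textbox(label="Completion"))

if __name__ == "__main__":
    demo.launch()
```

In a sketch like this, the indexing step runs on every RAG request; the batch-processing idea in the description would move `build_retriever` into an offline job that populates a persistent vector store once, so `invoke` only queries it.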