Update app.py
app.py
CHANGED
@@ -67,11 +67,11 @@ description = """<strong>Overview:</strong> The app demonstrates how to use a La
 (in this case a YouTube video, but it could be PDFs, URLs, or other structured/unstructured private/public
 <a href='https://raw.githubusercontent.com/bstraehle/ai-ml-dl/c38b224c196fc984aab6b6cc6bdc666f8f4fbcff/langchain/document-loaders.png'>data sources</a>).\n\n
 <strong>Instructions:</strong> Enter an OpenAI API key and perform LLM use cases (semantic search, sentiment analysis, summarization, translation, etc.) on
-a <a href='https://www.youtube.com/watch?v=--khbXchTeE'>short video
+a <a href='https://www.youtube.com/watch?v=--khbXchTeE'>short video of GPT-4</a>.
 <ul style="list-style-type:square;">
 <li>Set "Retrieval Augmented Generation" to "False" and submit prompt "what is gpt-4". The LLM <strong>without</strong> RAG does not know the answer.</li>
 <li>Set "Retrieval Augmented Generation" to "True" and submit prompt "what is gpt-4". The LLM <strong>with</strong> RAG knows the answer.</li>
-<li>Experiment with different prompts, for example "what is gpt-4, answer in german" or "write a poem about gpt-4".</li>
+<li>Experiment with different prompts, for example "what is gpt-4, answer in german" (arabic, chinese, hindi) or "write a poem about gpt-4".</li>
 </ul>
 In a production system, processing external data would be done in a batch process. An idea for a production system would be to perform LLM use cases on the
 <a href='https://www.youtube.com/playlist?list=PL2yQDdvlhXf_hIzmfHCdbcXj2hS52oP9r'>AWS re:Invent playlist</a>.\n\n
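The description edited above refers to a retrieval-augmented generation flow over the linked YouTube video. The sketch below is a hypothetical reconstruction of that flow, assuming the classic LangChain APIs (YoutubeLoader, Chroma, RetrievalQA) and a caller-supplied OpenAI API key; the actual app.py may wire this up differently (e.g., behind a Gradio UI), and the function name `invoke` is only illustrative.

```python
# Minimal sketch of the RAG toggle the description explains; assumptions:
# classic LangChain imports and a user-supplied OpenAI API key.
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import YoutubeLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

def invoke(openai_api_key: str, prompt: str, rag: bool) -> str:
    llm = ChatOpenAI(model_name="gpt-3.5-turbo",
                     openai_api_key=openai_api_key,
                     temperature=0)
    if not rag:
        # "Retrieval Augmented Generation" = False: the bare LLM answers from
        # its training data and does not know what GPT-4 is.
        return llm.predict(prompt)
    # "Retrieval Augmented Generation" = True: load the video transcript,
    # chunk it, embed it, and answer the prompt from the retrieved chunks.
    loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=--khbXchTeE")
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    docs = splitter.split_documents(loader.load())
    db = Chroma.from_documents(docs, OpenAIEmbeddings(openai_api_key=openai_api_key))
    qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever())
    return qa.run(prompt)
```

In a production version along the lines suggested above, the load/split/embed steps would run once in a batch job over the whole playlist and only the retrieval and generation steps would run per request.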