bstraehle committed
Commit 58e1865 · 1 Parent(s): 4854d02

Update app.py

Files changed (1):
app.py +5 -5
app.py CHANGED
@@ -53,12 +53,12 @@ def invoke(openai_api_key, use_rag, prompt):
  description = """<strong>Overview:</strong> The app demonstrates how to use a Large Language Model (LLM) with Retrieval Augmented Generation (RAG) on external data
  (in this case YouTube videos, but it could be PDFs, URLs, or other structured/unstructured private/public
  <a href='https://raw.githubusercontent.com/bstraehle/ai-ml-dl/c38b224c196fc984aab6b6cc6bdc666f8f4fbcff/langchain/document-loaders.png'>data sources</a>).\n\n
- <strong>Instructions:</strong> Enter an OpenAI API key and perform LLM use cases on a YouTube video (semantic search, sentiment analysis, summarization,
- translation, etc.) The example is a <a href='https://www.youtube.com/watch?v=--khbXchTeE'>short video about GPT-4</a>.
+ <strong>Instructions:</strong> Enter an OpenAI API key and perform LLM use cases on a <a href='https://www.youtube.com/watch?v=--khbXchTeE'>short video about GPT-4</a>
+ (semantic search, sentiment analysis, summarization, translation, etc.)
  <ul style="list-style-type:square;">
- <li>Set "Process Video" to "False" and submit prompt "what is gpt-4". The LLM <strong>without</strong> RAG does not know the answer.</li>
- <li>Set "Process Video" to "True" and submit prompt "what is gpt-4". The LLM <strong>with</strong> RAG knows the answer.</li>
- <li>Set "Process Video" to "False" and experiment with different prompts, for example "what is gpt-4, answer in german" or "write a haiku about gpt-4".</li>
+ <li>Set "Use RAG" to "False" and submit prompt "what is gpt-4". The LLM <strong>without</strong> RAG does not know the answer.</li>
+ <li>Set "Use RAG" to "True" and submit prompt "what is gpt-4". The LLM <strong>with</strong> RAG knows the answer.</li>
+ <li>Experiment with different prompts, for example "what is gpt-4, answer in german" or "write a haiku about gpt-4".</li>
  </ul>
  In a production system processing external data would be done in a batch process. An idea for a production system would be to perform LLM use cases on the
  <a href='https://www.youtube.com/playlist?list=PL2yQDdvlhXf_hIzmfHCdbcXj2hS52oP9r'>AWS re:Invent playlist</a>.\n\n
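
The hunk header shows the edited description string lives inside `invoke(openai_api_key, use_rag, prompt)`. For reference, here is a minimal sketch of how such a "Use RAG" toggle is commonly wired up with classic LangChain; the loader, splitter, vector store, chain, and model choices below are illustrative assumptions, not the actual app.py implementation.

```python
# Hypothetical sketch of the invoke(openai_api_key, use_rag, prompt) flow the
# description refers to. Component choices (YoutubeLoader, Chroma, RetrievalQA,
# the gpt-4 model name) are assumptions for illustration only.
import os

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import YoutubeLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

YOUTUBE_URL = "https://www.youtube.com/watch?v=--khbXchTeE"  # short video about GPT-4


def invoke(openai_api_key, use_rag, prompt):
    os.environ["OPENAI_API_KEY"] = openai_api_key
    llm = ChatOpenAI(model_name="gpt-4", temperature=0)  # assumed model

    if not use_rag:
        # Plain LLM call: no external data, so "what is gpt-4" cannot be
        # answered from the video's content.
        return llm.predict(prompt)

    # RAG path: load the video transcript, split it into chunks, embed the
    # chunks into a vector store, and answer the prompt from retrieved chunks.
    docs = YoutubeLoader.from_youtube_url(YOUTUBE_URL, add_video_info=False).load()
    chunks = RecursiveCharacterTextSplitter(chunk_size=1000,
                                            chunk_overlap=100).split_documents(docs)
    vector_store = Chroma.from_documents(chunks, OpenAIEmbeddings())
    qa_chain = RetrievalQA.from_chain_type(llm=llm,
                                           retriever=vector_store.as_retriever())
    return qa_chain.run(prompt)
```

As the description itself notes, in a production system the load/split/embed step would run once in a batch process, leaving only retrieval and generation to happen at query time.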