bstraehle committed on
Commit 332f98a · 1 Parent(s): 04814f4

Update app.py

Files changed (1): app.py +2 -2
app.py CHANGED
@@ -51,8 +51,8 @@ description = """<strong>Overview:</strong> The app demonstrates how to use a <s
  (RAG) on external data (YouTube videos in this case, but could be PDFs, URLs, databases, etc.)\n\n
  <strong>Instructions:</strong> Enter an OpenAI API key, YouTube URL, and prompt to perform semantic search, sentiment analysis, summarization,
  translation, etc. "Process Video" specifies whether or not to perform speech-to-text processing. To ask multiple questions related to the same video,
- typically set it to "True" the first run and then to "False". The example is a 3:12 min. video about GPT-4 and takes about 20 sec. to process.
- Try different prompts, for example "what is gpt-4, answer in german" or "write a poem about gpt-4".\n\n
+ typically set it to "True" the first run and then to "False". The example is a 3:12 min. video about GPT-4 and takes less than 30 sec. to process.
+ Experiment with different prompts, for example "what is gpt-4, answer in german" or "write a haiku about gpt-4".\n\n
  <strong>Technology:</strong> <a href='https://www.gradio.app/'>Gradio</a> UI using <a href='https://platform.openai.com/'>OpenAI</a> API
  via AI-first <a href='https://www.langchain.com/'>LangChain</a> toolkit with <a href='https://openai.com/research/whisper'>Whisper</a> (speech-to-text)
  and <a href='https://openai.com/research/gpt-4'>GPT-4</a> (LLM) foundation models as well as AI-native
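The "Process Video" flag described in the instructions can be sketched as a simple process-or-reuse pattern: speech-to-text runs once on the first request, and later questions about the same video reuse the cached transcript. This is a minimal illustration, not the app's actual code; the names `transcribe`, `get_transcript`, and `CACHE` are hypothetical stand-ins (in the real app the expensive step is Whisper transcription via LangChain).

```python
# Minimal sketch of the "Process Video" flag: run the expensive
# speech-to-text step only when requested (or when no transcript is
# cached yet), then reuse the cached transcript for follow-up prompts.
CACHE = {}  # maps YouTube URL -> transcript text

def transcribe(url):
    """Hypothetical stand-in for the Whisper speech-to-text step."""
    return f"transcript of {url}"

def get_transcript(url, process_video):
    # First run: process_video=True forces transcription and caches it.
    if process_video or url not in CACHE:
        CACHE[url] = transcribe(url)
    # Later runs: process_video=False reuses the cached transcript.
    return CACHE[url]
```

Setting the flag to "True" on the first run and "False" afterward, as the description suggests, avoids repeating the transcription for each new prompt about the same video.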