---
title: Document AI Agent
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 4.36.1
app_file: app.py
pinned: false
license: apache-2.0
---
# NVIDIA AI Document Chatbot
This project is a document-based chatbot application. It lets users ask questions about specific documents and receive answers grounded in the content of those documents.
## Models and Components Used
**NVIDIAEmbeddings (`NV-Embed-QA`)**
- Produces vector representations (embeddings) of text so the application can compare document content semantically.
- The NV-Embed-QA model is used to find the document passages most relevant to a question.

**ChatDocument (`mistralai/mixtral-8x7b-instruct-v0.1`)**
- The Mixtral-8x7B Instruct model answers user questions about the documents. It extracts information from the retrieved passages and responds conversationally (a minimal setup sketch follows this list).
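How the two models are instantiated lives in `app.py`; the following is only a minimal sketch, assuming the standard `langchain_nvidia_ai_endpoints` classes (`NVIDIAEmbeddings` and `ChatNVIDIA`; the `ChatDocument` name above may be the app's own wrapper) and an `NVIDIA_API_KEY` environment variable:

```python
# Minimal sketch (assumed wiring, not the exact app.py code).
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings, ChatNVIDIA

# Embedding model used to vectorize document chunks and user queries
embedder = NVIDIAEmbeddings(model="NV-Embed-QA")

# Instruction-tuned chat model that generates the document-grounded answers
llm = ChatNVIDIA(model="mistralai/mixtral-8x7b-instruct-v0.1")
```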
## Application Workflow
### Loading Documents
- Specific academic papers are loaded with `ArxivLoader`, split into text chunks, and filtered according to predefined rules (see the sketch below).
- The resulting chunks are added to a FAISS vector store, which enables efficient and fast retrieval of relevant document chunks.
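A minimal sketch of this loading step, assuming LangChain's `ArxivLoader`, `RecursiveCharacterTextSplitter`, and FAISS integration (the actual paper IDs and filtering rules are defined in `app.py`):

```python
# Minimal sketch of the document-loading step (assumed structure, not the exact app.py code).
from langchain_community.document_loaders import ArxivLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

embedder = NVIDIAEmbeddings(model="NV-Embed-QA")

# Hypothetical arXiv IDs; the real paper list lives in app.py
docs = []
for paper_id in ["1706.03762", "2005.11401"]:
    docs.extend(ArxivLoader(query=paper_id).load())

# Split the papers into overlapping chunks and drop very short fragments
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = [c for c in splitter.split_documents(docs) if len(c.page_content) > 200]

# Index the chunks in a FAISS vector store for fast similarity search
vectorstore = FAISS.from_documents(chunks, embedder)
retriever = vectorstore.as_retriever()
```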
### Chat and Document Querying
- The user's question is inserted into a predefined chat prompt template, and the response is generated from both the conversation history and the information retrieved from the documents.
- The `chat_gen` function takes the user's input and generates a response with the NVIDIA models, pulling relevant chunks from the documents (a sketch follows below).
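A minimal sketch of how such a `chat_gen` function can be wired, assuming the `retriever` and `llm` from the sketches above and LangChain's `ChatPromptTemplate`; the real prompt text and Gradio wiring are in `app.py`:

```python
# Minimal sketch of a retrieval-augmented chat function (assumed structure, not the exact app.py code).
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Answer the user's question using only the context below.\n"
     "Conversation so far:\n{history}\n\nContext:\n{context}"),
    ("user", "{question}"),
])

chain = prompt | llm | StrOutputParser()

def chat_gen(message, history):
    # Retrieve the document chunks most relevant to the new question
    context = "\n\n".join(d.page_content for d in retriever.invoke(message))
    # Flatten the (user, assistant) history pairs into plain text for the prompt
    history_text = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in history)
    # Stream the model's answer incrementally, as a Gradio chat callback expects
    partial = ""
    for token in chain.stream({"question": message, "history": history_text, "context": context}):
        partial += token
        yield partial
```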
### Remembering Document Content and Conversation History
- Previous user messages and model responses are stored in a conversation memory and used to answer follow-up questions more effectively (a simple illustration follows below).
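How the memory is stored depends on `app.py`; one simple approach, shown here purely as an assumed sketch, is an in-process list of exchanges that is folded back into the prompt:

```python
# Minimal sketch of conversation memory (assumed approach, not the exact app.py code).
conversation_memory = []  # list of (user_message, assistant_reply) pairs

def remember(user_message, assistant_reply):
    # Append the latest exchange so later questions can refer back to it
    conversation_memory.append((user_message, assistant_reply))

def memory_as_text():
    # Serialize the stored exchanges for inclusion in the chat prompt
    return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in conversation_memory)
```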
## Conclusion
This application leverages NVIDIA's powerful language models and embedding tools to generate intelligent, document-driven conversational responses.
An example chatbot using Gradio, `huggingface_hub`, and the Hugging Face Inference API.