
A simple chat-with-PDF/DOCX script for a local computer (e.g., run in the CMD console on Windows 10) using Google Gemini Pro models, with your own API key from Google AI Studio: https://aistudio.google.com/app/apikey

To exit the script, press Ctrl+C.

Based on a Google API key (not Vertex AI on Google Cloud) and "import google.generativeai as genai".
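A minimal setup sketch, assuming the google-generativeai package is installed (pip install google-generativeai) and that the AI Studio key is stored in a GOOGLE_API_KEY environment variable; the model name is an example:

```python
# Sketch only: requires `pip install google-generativeai` and a valid
# GOOGLE_API_KEY from https://aistudio.google.com/app/apikey
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")  # example model name
```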

A Python script for chat-with-PDF (or .docx, .doc, .txt, .rtf, .py, etc.) in the CMD console on Windows 10 that will:

1. Let the user select an available Gemini AI model.
2. Present a number-indexed list of the document filenames in the input_documents_folder.
3. Ask the user to select one or more of the indexed document numbers (in sequence, separated by commas).
4. Read the text of each selected document into the script, in the order the user specified.
5. Report the total token count of the combined_text extracted from the selected documents.
6. Ask the user for a chosen number of iterations, i.e., instructions telling the selected Gemini model what to do with the combined_text (e.g., "summarize", "explain the key concepts", "tell me what happened on [this date]", "tell me why this paper is important", or "combine these into one coherent document/summary").
7. Print each response received from the Gemini model to the CMD console, and log-save all of the model's responses to a file in log_folder named [Date].log; it should also log-save to a selected-document-name.txt file [not fully working yet].
8. Log the user prompts and the AI responses to the log files in the daily log_folder. [The daily log-file system is currently working.]
9. It should also save to output_responses_folder a file named Output[the input document names].rtf [not fully working yet].
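The selection and daily-log steps above can be sketched with the standard library alone; folder and file-naming conventions follow the description (input_documents_folder, log_folder, [Date].log), and the helper names are illustrative:

```python
from datetime import date
from pathlib import Path

def list_documents(folder, exts=(".pdf", ".docx", ".doc", ".txt", ".rtf", ".py")):
    """Print and return a number-indexed list of matching files in folder."""
    files = sorted(p for p in Path(folder).iterdir()
                   if p.suffix.lower() in exts)
    for i, p in enumerate(files, start=1):
        print(f"{i}. {p.name}")
    return files

def parse_selection(reply, files):
    """Turn a comma-separated reply like '3,1' into files in that order."""
    order = [int(tok) for tok in reply.split(",") if tok.strip()]
    return [files[n - 1] for n in order]

def daily_log_path(log_folder):
    """Daily log file named [Date].log inside log_folder."""
    return Path(log_folder) / f"{date.today().isoformat()}.log"
```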

The operational idea is that the first, second, and later iterations can interrogate the selected document(s) to decide how best to summarize them, i.e., chat-with-PDF with one-off prompts (no history of any prior prompts). The full document token count is sent during each of these initial iterations.
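The one-off iteration flow can be sketched as follows; send_to_model is a hypothetical stand-in for the actual Gemini call (e.g., model.generate_content(prompt).text):

```python
def run_iterations(combined_text, instructions, send_to_model):
    """Send each instruction with the FULL combined_text, no prior history."""
    responses = []
    for instruction in instructions:
        # The whole document text is re-sent on every iteration,
        # so the full token count is consumed each time.
        prompt = f"{instruction}\n\n{combined_text}"
        responses.append(send_to_model(prompt))
    return responses
```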

The last iteration can carry a summary of the selected document(s) into an endless-loop chat, and that subsequent chat is then based on the summary of the document. The script can be reconfigured to send the entire combined_text of the document(s) into the chat instead: the line "combined_text = response.text" carries the last prior response forward (e.g., the summary of the document); comment it out to send the entire document(s) into the subsequent chat mode. Chat-with-PDF then continues. Note that the Google API "chat" mode is NOT currently implemented in this script: chat history is saved on the local computer and re-sent as input tokens on each turn. To implement a user option for the Google API "chat" mode, see https://ai.google.dev/tutorials/python_quickstart
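Because the Google API "chat" mode is not used, the loop re-sends the locally stored history on every turn. A stdlib-only sketch of that behavior (send_to_model is again a hypothetical stand-in for the Gemini call):

```python
def chat_loop(seed_context, user_turns, send_to_model):
    """Hand-rolled chat: local history is re-sent as input tokens each turn.

    seed_context is either the last summary (combined_text = response.text)
    or the full document text, depending on the configuration above.
    """
    history = [("context", seed_context)]
    replies = []
    for turn in user_turns:
        history.append(("user", turn))
        # The entire history is flattened into the prompt every turn.
        prompt = "\n".join(f"{role}: {text}" for role, text in history)
        reply = send_to_model(prompt)
        history.append(("model", reply))
        replies.append(reply)
    return replies, history
```

Switching to the official ChatSession API would let the service manage history instead of re-sending it from the local machine.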

License: Apache-2.0
