Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}),
Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}),
Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}),
Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})]
Extracting metadata#
Generally, we want to include the metadata available in the JSON file in the documents that we create from the content.
The following demonstrates how metadata can be extracted using the JSONLoader.
There are some key changes to note. In the previous example, where we didn’t collect the metadata, we could directly specify in the schema where the value for the page_content should be extracted from.
.messages[].content
In the current example, we have to tell the loader to iterate over the records in the messages field. The jq_schema then has to be:
.messages[]
This allows us to pass the records (dict) into the metadata_func that has to be implemented. The metadata_func is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final Document object.
Additionally, we now have to explicitly specify, via the loader's content_key argument, the key in the record from which the value for the page_content should be extracted.
# Define the metadata extraction function.
def metadata_func(record: dict, metadata: dict) -> dict:
metadata["sender_name"] = record.get("sender_name")
metadata["timestamp_ms"] = record.get("timestamp_ms")
return metadata
loader = JSONLoader(
file_path='./example_data/facebook_chat.json',
jq_schema='.messages[]',
content_key="content",
metadata_func=metadata_func
)
data = loader.load()
pprint(data)
[Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}),
Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}),
Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}),
Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}),
Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}),
Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}),
Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}),
Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}),
Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}),
Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}),
Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]
Now, you will see that the documents contain the metadata associated with the content we extracted.
The metadata_func#
As shown above, the metadata_func accepts the default metadata generated by the JSONLoader. This gives the user full control over how the metadata is formatted.
For example, the default metadata contains the source and the seq_num keys. However, the JSON data may contain these keys as well. The user can then use the metadata_func to rename the default keys and use the ones from the JSON data.
The example below shows how we can modify the source so that it only contains the file path relative to the langchain directory.
# Define the metadata extraction function.
def metadata_func(record: dict, metadata: dict) -> dict:
metadata["sender_name"] = record.get("sender_name")
metadata["timestamp_ms"] = record.get("timestamp_ms")
if "source" in metadata:
source = metadata["source"].split("/")
source = source[source.index("langchain"):]
metadata["source"] = "/".join(source)
return metadata
loader = JSONLoader(
file_path='./example_data/facebook_chat.json',
jq_schema='.messages[]',
content_key="content",
metadata_func=metadata_func
)
data = loader.load()
pprint(data)
[Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}),
Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}),
Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}),
Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}),
Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}),
Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}),
Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}),
Document(page_content='Online is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}),
Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}),
Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}),
Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})]
Common JSON structures with jq schema#
The list below provides a reference for the jq_schema values the user can use to extract content from JSON data, depending on its structure.
JSON -> [{"text": ...}, {"text": ...}, {"text": ...}]
jq_schema -> ".[].text"
JSON -> {"key": [{"text": ...}, {"text": ...}, {"text": ...}]}
jq_schema -> ".key[].text"
JSON -> ["...", "...", "..."]
jq_schema -> ".[]"
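For instance, a minimal sketch for the last structure above, assuming a hypothetical file sample.json that contains a plain list of strings:
from langchain.document_loaders import JSONLoader
loader = JSONLoader(file_path='./sample.json', jq_schema='.[]')
data = loader.load()
# Each string in the list becomes the page_content of its own Document,
# with the default source and seq_num metadata attached.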
Azure Blob Storage File#
Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.
This covers how to load document objects from Azure Files.
#!pip install azure-storage-blob
from langchain.document_loaders import AzureBlobStorageFileLoader
loader = AzureBlobStorageFileLoader(conn_str='<connection string>', container='<container name>', blob_name='<blob name>')
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
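As a usage sketch, the placeholder arguments above would be filled in with real values; the connection string, container, and blob names below are hypothetical:
from langchain.document_loaders import AzureBlobStorageFileLoader
# Hypothetical values for illustration only.
loader = AzureBlobStorageFileLoader(
    conn_str='DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...;EndpointSuffix=core.windows.net',
    container='documents',
    blob_name='fake.docx',
)
docs = loader.load()
print(docs[0].page_content)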
Git#
Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.
This notebook shows how to load text files from a Git repository.
Load existing repository from disk#
!pip install GitPython
from git import Repo
repo = Repo.clone_from(
"https://github.com/hwchase17/langchain", to_path="./example_data/test_repo1"
)
branch = repo.head.reference
from langchain.document_loaders import GitLoader
loader = GitLoader(repo_path="./example_data/test_repo1/", branch=branch)
data = loader.load()
len(data)
print(data[0])
page_content='.venv\n.github\n.git\n.mypy_cache\n.pytest_cache\nDockerfile' metadata={'file_path': '.dockerignore', 'file_name': '.dockerignore', 'file_type': ''}
Clone repository from url#
from langchain.document_loaders import GitLoader
loader = GitLoader(
clone_url="https://github.com/hwchase17/langchain",
repo_path="./example_data/test_repo2/",
branch="master",
)
data = loader.load()
len(data)
1074
Filtering files to load#
from langchain.document_loaders import GitLoader
# e.g. loading only Python files
loader = GitLoader(repo_path="./example_data/test_repo1/", file_filter=lambda file_path: file_path.endswith(".py"))
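Loading then works the same as before; as a sketch, only documents whose file path ends in .py should come back:
data = loader.load()
# All loaded documents now correspond to .py files only.
len(data)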
Modern Treasury#
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
Connect to banks and payment systems
Track transactions and balances in real-time
Automate payment operations for scale
This notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
import os
from langchain.document_loaders import ModernTreasuryLoader
from langchain.indexes import VectorstoreIndexCreator
The Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.
This document loader also requires a resource option, which defines what data you want to load.
The following resources are available:
payment_orders
expected_payments
returns
incoming_payment_details
counterparties
internal_accounts
external_accounts
transactions
ledgers
ledger_accounts
ledger_transactions
events
invoices
modern_treasury_loader = ModernTreasuryLoader("payment_orders")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])
modern_treasury_doc_retriever = index.vectorstore.as_retriever()
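The retriever can then be queried like any other LangChain retriever; the query text below is just an illustration:
docs = modern_treasury_doc_retriever.get_relevant_documents("What payment orders are pending?")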
Weather#
OpenWeatherMap is an open-source weather service provider.
This loader fetches weather data from OpenWeatherMap’s OneCall API, using the pyowm Python package. You must initialize the loader with your OpenWeatherMap API token and the names of the cities you want the weather data for.
from langchain.document_loaders import WeatherDataLoader
#!pip install pyowm
# Set API key either by passing it in to constructor directly
# or by setting the environment variable "OPENWEATHERMAP_API_KEY".
from getpass import getpass
OPENWEATHERMAP_API_KEY = getpass()
loader = WeatherDataLoader.from_params(['chennai','vellore'], openweathermap_api_key=OPENWEATHERMAP_API_KEY)
documents = loader.load()
documents
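As a quick sanity check, a sketch that previews each returned report:
for doc in documents:
    # Print the first 100 characters of each city's weather report.
    print(doc.page_content[:100])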
Images#
This covers how to load images such as JPG or PNG into a document format that we can use downstream.
Using Unstructured#
#!pip install pdfminer
from langchain.document_loaders.image import UnstructuredImageLoader
loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg")
data = loader.load()
data[0]
Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)
Retain Elements#
Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements")
data = loader.load()
data[0]
Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
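Because each element carries a category in its metadata, you can filter on it; for example, a sketch that keeps only the title elements:
titles = [doc for doc in data if doc.metadata.get('category') == 'Title']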
Docugami#
This notebook covers how to load documents from Docugami. See here for more details, and the advantages of using this system over alternative data loaders.
Prerequisites#
Follow the Quick Start section in this document
Grab an access token for your workspace, and make sure it is set as the DOCUGAMI_API_KEY environment variable
Grab some docset and document IDs for your processed documents, as described here: https://help.docugami.com/home/docugami-api
# You need the lxml package to use the DocugamiLoader
!poetry run pip -q install lxml
import os
from langchain.document_loaders import DocugamiLoader
Load Documents#
If the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it in to the loader explicitly; otherwise, you can pass it in as the access_token parameter.
DOCUGAMI_API_KEY=os.environ.get('DOCUGAMI_API_KEY')
# To load all docs in the given docset ID, just don't provide document_ids
loader = DocugamiLoader(docset_id="ecxqpipcoe2p", document_ids=["43rj0ds7s0ur"])
docs = loader.load()
docs
[Document(page_content='MUTUAL NON-DISCLOSURE AGREEMENT This Mutual Non-Disclosure Agreement (this “ Agreement ”) is entered into and made effective as of April 4 , 2018 between Docugami Inc. , a Delaware corporation , whose address is 150 Lake Street South , Suite 221 , Kirkland , Washington 98033 , and Caleb Divine , an individual, whose address is 1201 Rt 300 , Newburgh NY 12550 .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:ThisMutualNon-disclosureAgreement', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'ThisMutualNon-disclosureAgreement'}),
Document(page_content='The above named parties desire to engage in discussions regarding a potential agreement or other transaction between the parties (the “Purpose”). In connection with such discussions, it may be necessary for the parties to disclose to each other certain confidential information or materials to enable them to evaluate whether to enter into such agreement or transaction.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Discussions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Discussions'}),
Document(page_content='In consideration of the foregoing, the parties agree as follows:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Consideration', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Consideration'}),
Document(page_content='1. Confidential Information . For purposes of this Agreement , “ Confidential Information ” means any information or materials disclosed by one party to the other party that: (i) if disclosed in writing or in the form of tangible materials, is marked “confidential” or “proprietary” at the time of such disclosure; (ii) if disclosed orally or by visual presentation, is identified as “confidential” or “proprietary” at the time of such disclosure, and is summarized in a writing sent by the disclosing party to the receiving party within thirty ( 30 ) days after any such disclosure; or (iii) due to its nature or the circumstances of its disclosure, a person exercising reasonable business judgment would understand to be confidential or proprietary.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Purposes/docset:ConfidentialInformation-section/docset:ConfidentialInformation[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ConfidentialInformation'}),
Document(page_content="2. Obligations and Restrictions . Each party agrees: (i) to maintain the other party's Confidential Information in strict confidence; (ii) not to disclose such Confidential Information to any third party; and (iii) not to use such Confidential Information for any purpose except for the Purpose. Each party may disclose the other party’s Confidential Information to its employees and consultants who have a bona fide need to know such Confidential Information for the Purpose, but solely to the extent necessary to pursue the Purpose and for no other purpose; provided, that each such employee and consultant first executes a written agreement (or is otherwise already bound by a written agreement) that contains use and nondisclosure restrictions at least as protective of the other party’s Confidential Information as those set forth in this Agreement .", metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Obligations/docset:ObligationsAndRestrictions-section/docset:ObligationsAndRestrictions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ObligationsAndRestrictions'}),
Document(page_content='3. Exceptions. The obligations and restrictions in Section 2 will not apply to any information or materials that:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Exceptions/docset:Exceptions-section/docset:Exceptions[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Exceptions'}),
Document(page_content='(i) were, at the date of disclosure, or have subsequently become, generally known or available to the public through no act or failure to act by the receiving party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheDate/docset:TheDate', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheDate'}),
Document(page_content='(ii) were rightfully known by the receiving party prior to receiving such information or materials from the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:SuchInformation/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}),
Document(page_content='(iii) are rightfully acquired by the receiving party from a third party who has the right to disclose such information or materials without breach of any confidentiality obligation to the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheReceivingParty/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}),
Document(page_content='4. Compelled Disclosure . Nothing in this Agreement will be deemed to restrict a party from disclosing the other party’s Confidential Information to the extent required by any order, subpoena, law, statute or regulation; provided, that the party required to make such a disclosure uses reasonable efforts to give the other party reasonable advance notice of such required disclosure in order to enable the other party to prevent or limit such disclosure.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Disclosure/docset:CompelledDisclosure-section/docset:CompelledDisclosure', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'CompelledDisclosure'}),
Document(page_content='5. Return of Confidential Information . Upon the completion or abandonment of the Purpose, and in any event upon the disclosing party’s request, the receiving party will promptly return to the disclosing party all tangible items and embodiments containing or consisting of the disclosing party’s Confidential Information and all copies thereof (including electronic copies), and any notes, analyses, compilations, studies, interpretations, memoranda or other documents (regardless of the form thereof) prepared by or on behalf of the receiving party that contain or are based upon the disclosing party’s Confidential Information .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheCompletion/docset:ReturnofConfidentialInformation-section/docset:ReturnofConfidentialInformation', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ReturnofConfidentialInformation'}),
Document(page_content='6. No Obligations . Each party retains the right to determine whether to disclose any Confidential Information to the other party.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoObligations/docset:NoObligations-section/docset:NoObligations[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoObligations'}),
Document(page_content='7. No Warranty. ALL CONFIDENTIAL INFORMATION IS PROVIDED BY THE DISCLOSING PARTY “AS IS ”.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoWarranty/docset:NoWarranty-section/docset:NoWarranty[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoWarranty'}),
Document(page_content='8. Term. This Agreement will remain in effect for a period of seven ( 7 ) years from the date of last disclosure of Confidential Information by either party, at which time it will terminate.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:ThisAgreement/docset:Term-section/docset:Term', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Term'}),
Document(page_content='9. Equitable Relief . Each party acknowledges that the unauthorized use or disclosure of the disclosing party’s Confidential Information may cause the disclosing party to incur irreparable harm and significant damages, the degree of which may be difficult to ascertain. Accordingly, each party agrees that the disclosing party will have the right to seek immediate equitable relief to enjoin any unauthorized use or disclosure of its Confidential Information , in addition to any other rights and remedies that it may have at law or otherwise.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:EquitableRelief/docset:EquitableRelief-section/docset:EquitableRelief[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'EquitableRelief'}),
Document(page_content='10. Non-compete. To the maximum extent permitted by applicable law, during the Term of this Agreement and for a period of one ( 1 ) year thereafter, Caleb Divine may not market software products or do business that directly or indirectly competes with Docugami software products .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheMaximumExtent/docset:Non-compete-section/docset:Non-compete', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Non-compete'}),
Document(page_content='11. Miscellaneous. This Agreement will be governed and construed in accordance with the laws of the State of Washington , excluding its body of law controlling conflict of laws. This Agreement is the complete and exclusive understanding and agreement between the parties regarding the subject matter of this Agreement and supersedes all prior agreements, understandings and communications, oral or written, between the parties regarding the subject matter of this Agreement . If any provision of this Agreement is held invalid or unenforceable by a court of competent jurisdiction, that provision of this Agreement will be enforced to the maximum extent permissible and the other provisions of this Agreement will remain in full force and effect. Neither party may assign this Agreement , in whole or in part, by operation of law or otherwise, without the other party’s prior written consent, and any attempted assignment without such consent will be void. This Agreement may be executed in counterparts, each of which will be deemed an original, but all of which together will constitute one and the same instrument.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Accordance/docset:Miscellaneous-section/docset:Miscellaneous', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Miscellaneous'}),
Document(page_content='[SIGNATURE PAGE FOLLOWS] IN WITNESS WHEREOF, the parties hereto have executed this Mutual Non-Disclosure Agreement by their duly authorized officers or representatives as of the date first set forth above.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:TheParties', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheParties'}),
Document(page_content='DOCUGAMI INC . : \n\n Caleb Divine : \n\n Signature: Signature: Name: \n\n Jean Paoli Name: Title: \n\n CEO Title:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:DocugamiInc/docset:DocugamiInc/xhtml:table', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': '', 'tag': 'table'})]
The metadata for each Document (really, a chunk of an actual PDF, DOC or DOCX) contains some useful additional information:
id and name: ID and Name of the file (PDF, DOC or DOCX) the chunk is sourced from within Docugami.
xpath: XPath inside the XML representation of the document, for the chunk. Useful for source citations directly to the actual chunk inside the document XML.
structure: Structural attributes of the chunk, e.g. h1, h2, div, table, td, etc. Useful to filter out certain kinds of chunks if needed by the caller.
tag: Semantic tag for the chunk, using various generative and extractive techniques. More details here: https://github.com/docugami/DFM-benchmarks
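These attributes make simple post-filtering straightforward. For example, a sketch that drops table chunks using the structure attribute described above:
# 'table' is one of the structural values mentioned above.
non_table_chunks = [doc for doc in docs if doc.metadata.get('structure') != 'table']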
Basic Use: Docugami Loader for Document QA#
You can use the Docugami Loader like a standard loader for Document QA over multiple docs, albeit with much better chunks that follow the natural contours of the document. There are many great tutorials on how to do this, e.g. this one. We can just use the same code, but use the DocugamiLoader for better chunking, instead of loading text or PDF files directly with basic splitting techniques.
!poetry run pip -q install openai tiktoken chromadb
from langchain.schema import Document
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
# For this example, we already have a processed docset for a set of lease documents
loader = DocugamiLoader(docset_id="wh2kned25uqm")
documents = loader.load()
The documents returned by the loader are already split, so we don’t need to use a text splitter. Optionally, we can use the metadata on each document, for example the structure or tag attributes, to do any post-processing we want.
We will just use the output of the DocugamiLoader as-is to set up a retrieval QA chain the usual way.
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=documents, embedding=embedding)
retriever = vectordb.as_retriever()
qa_chain = RetrievalQA.from_chain_type(
llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True
)
Using embedded DuckDB without persistence: data will be transient
# Try out the retriever with an example query
qa_chain("What can tenants do with signage on their properties?")
{'query': 'What can tenants do with signage on their properties?',
'result': ' Tenants may place signs (digital or otherwise) or other form of identification on the premises after receiving written permission from the landlord which shall not be unreasonably withheld. The tenant is responsible for any damage caused to the premises and must conform to any applicable laws, ordinances, etc. governing the same. The tenant must also remove and clean any window or glass identification promptly upon vacating the premises.',
'source_documents': [Document(page_content='ARTICLE VI SIGNAGE 6.01 Signage . Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant ’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant ’s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises.', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:Article/docset:ARTICLEVISIGNAGE-section/docset:_601Signage-section/docset:_601Signage', 'id': 'v1bvgaozfkak', 'name': 'TruTone Lane 2.docx', 'structure': 'div', 'tag': '_601Signage', 'Landlord': 'BUBBA CENTER PARTNERSHIP', 'Tenant': 'Truetone Lane LLC'}),
Document(page_content='Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant ’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant ’s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. \n\n ARTICLE VII UTILITIES 7.01', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOFFICELEASEAGREEMENTThis/docset:ArticleIBasic/docset:ArticleIiiUseAndCareOf/docset:ARTICLEIIIUSEANDCAREOFPREMISES-section/docset:ARTICLEIIIUSEANDCAREOFPREMISES/docset:NoOtherPurposes/docset:TenantsResponsibility/dg:chunk', 'id': 'g2fvhekmltza', 'name': 'TruTone Lane 6.pdf', 'structure': 'lim', 'tag': 'chunk', 'Landlord': 'GLORY ROAD LLC', 'Tenant': 'Truetone Lane LLC'}),
Document(page_content='Landlord , its agents, servants, employees, licensees, invitees, and contractors during the last year of the term of this Lease at any and all times during regular business hours, after 24 hour notice to tenant, to pass and repass on and through the Premises, or such portion thereof as may be necessary, in order that they or any of them may gain access to the Premises for the purpose of showing the Premises to potential new tenants or real estate brokers. In addition, Landlord shall be entitled to place a "FOR RENT " or "FOR LEASE" sign (not exceeding 8.5 ” x 11 ”) in the front window of the Premises during the last six months of the term of this Lease .', metadata={'xpath': '/docset:Rider/docset:RIDERTOLEASE-section/docset:RIDERTOLEASE/docset:FixedRent/docset:TermYearPeriod/docset:Lease/docset:_42FLandlordSAccess-section/docset:_42FLandlordSAccess/docset:LandlordsRights/docset:Landlord', 'id': 'omvs4mysdk6b', 'name': 'TruTone Lane 1.docx', 'structure': 'p', 'tag': 'Landlord', 'Landlord': 'BIRCH STREET , LLC', 'Tenant': 'Trutone Lane LLC'}),
Document(page_content="24. SIGNS . No signage shall be placed by Tenant on any portion of the Project . However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost ) and will be furnished a single listing of its name in the Building's directory (at Landlord 's cost ), all in accordance with the criteria adopted from time to time by Landlord for the Project . Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:TheTerms/docset:Indemnification/docset:INDEMNIFICATION-section/docset:INDEMNIFICATION/docset:Waiver/docset:Waiver/docset:Signs/docset:SIGNS-section/docset:SIGNS', 'id': 'qkn9cyqsiuch', 'name': 'Shorebucks LLC_AZ.pdf', 'structure': 'div', 'tag': 'SIGNS', 'Landlord': 'Menlo Group', 'Tenant': 'Shorebucks LLC'})]}
Using Docugami to Add Metadata to Chunks for High Accuracy Document QA#
One issue with large documents is that the correct answer to your question may depend on chunks that are far apart in the document. Typical chunking techniques, even with overlap, will struggle with providing the LLM sufficient context to answer such questions. With upcoming very large context LLMs, it may be possible to stuff a lot of tokens, perhaps even entire documents, inside the context, but this will still hit limits at some point with very long documents, or a lot of documents.
For example, if we ask a more complex question that requires the LLM to draw on chunks from different parts of the document, even OpenAI’s powerful LLM is unable to answer correctly.
chain_response = qa_chain("What is rentable area for the property owned by DHA Group?")
chain_response["result"] # the correct answer should be 13,500
' 9,753 square feet'
At first glance the answer may seem reasonable, but if you review the source chunks carefully for this answer, you will see that the chunking of the document did not end up putting the Landlord name and the rentable area in the same context, since they are far apart in the document. The retriever therefore ends up finding unrelated chunks from other documents not even related to the Menlo Group landlord. That landlord happens to be mentioned on the first page of the file Shorebucks LLC_NJ.pdf, and while one of the source chunks used by the chain is indeed from the doc that contains the correct answer (13,500), other source chunks from different docs are included, and the answer is therefore incorrect.
chain_response["source_documents"]
[Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content="1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content='1.6 Rentable Area of the Premises. 9,753 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:PerryBlair/docset:PerryBlair/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'dsyfhh4vpeyf', 'name': 'Shorebucks LLC_CO.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'Perry & Blair LLC', 'Tenant': 'Shorebucks LLC'})]
Docugami can help here. If a user has been using Docugami, chunks are annotated with additional metadata created using different techniques. More technical approaches will be added later.
Specifically, let’s look at the additional metadata that Docugami returns on the documents, in the form of simple key/value pairs on all the text chunks:
loader = DocugamiLoader(docset_id="wh2kned25uqm")
documents = loader.load()
documents[0].metadata
{'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOfficeLeaseAgreement',
 'id': 'v1bvgaozfkak',
'name': 'TruTone Lane 2.docx',
'structure': 'p',
'tag': 'ThisOfficeLeaseAgreement',
'Landlord': 'BUBBA CENTER PARTNERSHIP',
'Tenant': 'Truetone Lane LLC'}
We can use a self-querying retriever to improve our query accuracy, using this additional metadata:
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
EXCLUDE_KEYS = ["id", "xpath", "structure"]
metadata_field_info = [
AttributeInfo(
name=key,
description=f"The {key} for this chunk",
type="string",
)
for key in documents[0].metadata
if key.lower() not in EXCLUDE_KEYS
]
document_content_description = "Contents of this chunk"
llm = OpenAI(temperature=0)
vectordb = Chroma.from_documents(documents=documents, embedding=embedding)
retriever = SelfQueryRetriever.from_llm(
llm, vectordb, document_content_description, metadata_field_info, verbose=True
)
qa_chain = RetrievalQA.from_chain_type(
llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True
)
Using embedded DuckDB without persistence: data will be transient
Let’s run the same question again. It returns the correct result, since all the chunks have metadata key/value pairs on them carrying key information about the document, even if this information is physically very far away from the source chunk used to generate the answer.
qa_chain("What is rentable area for the property owned by DHA Group?")
query='rentable area' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Landlord', value='DHA Group')
{'query': 'What is rentable area for the property owned by DHA Group?',
'result': ' 13,500 square feet.',
'source_documents': [Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content="1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content='1.6 Rentable Area of the Premises. 13,500 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'})]}
This time the answer is correct, since the self-querying retriever created a filter on the landlord attribute of the metadata, correctly filtering to the document that is specifically about the DHA Group landlord. The resulting source chunks are all relevant to this landlord, and this improves answer accuracy even though the landlord is not directly mentioned in the specific chunk that contains the correct answer.
Apify Dataset#
Apify Dataset is a scalable, append-only storage with sequential access, built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors: serverless cloud programs for various web scraping, crawling, and data extraction use cases.
This notebook shows how to load Apify datasets into LangChain.
Prerequisites#
You need to have an existing dataset on the Apify platform. If you don’t have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.
#!pip install apify-client
First, import ApifyDatasetLoader into your source code:
from langchain.document_loaders import ApifyDatasetLoader
from langchain.document_loaders.base import Document
Then provide a function that maps Apify dataset record fields to LangChain Document format.
For example, if your dataset items are structured like this:
{
"url": "https://apify.com",
"text": "Apify is the best web scraping and automation platform."
}
The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering).
loader = ApifyDatasetLoader(
dataset_id="your-dataset-id",
dataset_mapping_function=lambda dataset_item: Document(
page_content=dataset_item["text"], metadata={"source": dataset_item["url"]}
),
)
data = loader.load()
An example with question answering#
In this example, we use data from a dataset to answer a question. | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html |
b543a700d18a-1 | from langchain.docstore.document import Document
from langchain.document_loaders import ApifyDatasetLoader
from langchain.indexes import VectorstoreIndexCreator
loader = ApifyDatasetLoader(
dataset_id="your-dataset-id",
dataset_mapping_function=lambda item: Document(
page_content=item["text"] or "", metadata={"source": item["url"]}
),
)
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What is Apify?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform.
https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html
168153a99e95-0 | GitBook#
GitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.
This notebook shows how to pull page data from any GitBook.
from langchain.document_loaders import GitbookLoader
Load from single GitBook page#
loader = GitbookLoader("https://docs.gitbook.com")
page_data = loader.load()
page_data
[Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]
Load from all paths in a given GitBook#
For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True. | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html |
168153a99e95-1 | loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True)
all_pages_data = loader.load()
Fetching text from https://docs.gitbook.com/
Fetching text from https://docs.gitbook.com/getting-started/overview
Fetching text from https://docs.gitbook.com/getting-started/import
Fetching text from https://docs.gitbook.com/getting-started/git-sync
Fetching text from https://docs.gitbook.com/getting-started/content-structure
Fetching text from https://docs.gitbook.com/getting-started/collaboration
Fetching text from https://docs.gitbook.com/getting-started/publishing
Fetching text from https://docs.gitbook.com/tour/quick-find
Fetching text from https://docs.gitbook.com/tour/editor
Fetching text from https://docs.gitbook.com/tour/customization
Fetching text from https://docs.gitbook.com/tour/member-management
Fetching text from https://docs.gitbook.com/tour/pdf-export
Fetching text from https://docs.gitbook.com/tour/activity-history
Fetching text from https://docs.gitbook.com/tour/insights
Fetching text from https://docs.gitbook.com/tour/notifications
Fetching text from https://docs.gitbook.com/tour/internationalization
Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts
Fetching text from https://docs.gitbook.com/tour/seo
Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain
Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security
Fetching text from https://docs.gitbook.com/advanced-guides/integrations
Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings
Fetching text from https://docs.gitbook.com/billing-and-admin/plans
Fetching text from https://docs.gitbook.com/troubleshooting/faqs | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html |
168153a99e95-2 | Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh
Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs
Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues
Fetching text from https://docs.gitbook.com/troubleshooting/support
print(f"fetched {len(all_pages_data)} documents.")
# show second document
all_pages_data[2]
fetched 28 documents. | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html |
168153a99e95-3 | Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html |
168153a99e95-4 | of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0) | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html
a85f2844b04c-0 | Figma#
Figma is a collaborative web application for interface design.
This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation.
import os
from langchain.document_loaders.figma import FigmaFileLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import ConversationChain, LLMChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
The Figma API requires an access token, node_ids, and a file key.
The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename
Node IDs are also available in the URL. Click on anything and look for the ‘?node-id={node_id}’ param.
Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens
figma_loader = FigmaFileLoader(
os.environ.get('ACCESS_TOKEN'),
os.environ.get('NODE_IDS'),
os.environ.get('FILE_KEY')
)
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([figma_loader])
figma_doc_retriever = index.vectorstore.as_retriever()
def generate_code(human_input):
    # I have no idea if the Jon Carmack thing makes for better code. YMMV. | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
a85f2844b04c-1 |     # See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info
    system_prompt_template = """You are expert coder Jon Carmack. Use the provided design context to create idiomatic HTML/CSS code based on the user request.
    Everything must be inline in one file and your response must be directly renderable by the browser.
    Figma file nodes and metadata: {context}"""
    human_prompt_template = "Code the {text}. Ensure it's mobile responsive"
    system_message_prompt = SystemMessagePromptTemplate.from_template(system_prompt_template)
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_prompt_template)
    # delete the gpt-4 model_name to use the default gpt-3.5 turbo for faster results
    gpt_4 = ChatOpenAI(temperature=.02, model_name='gpt-4')
    # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs
    relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input)
    conversation = [system_message_prompt, human_message_prompt]
    chat_prompt = ChatPromptTemplate.from_messages(conversation)
    response = gpt_4(chat_prompt.format_prompt(
        context=relevant_nodes,
        text=human_input).to_messages())
    return response
response = generate_code("page top header")
Returns the following in response.content: | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html |
a85f2844b04c-2 | <!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <style>
    @import url('https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap');

    body {
      margin: 0;
      font-family: 'DM Sans', sans-serif;
    }

    .header {
      display: flex;
      justify-content: space-between;
      align-items: center;
      padding: 20px;
      background-color: #fff;
      box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
    }

    .header h1 {
      font-size: 16px;
      font-weight: 700;
      margin: 0;
    }

    .header nav {
      display: flex;
      align-items: center;
    }

    .header nav a {
      font-size: 14px;
      font-weight: 500;
      text-decoration: none;
      color: #000;
      margin-left: 20px;
    }

    @media (max-width: 768px) {
      .header nav {
        display: none;
      }
    }
  </style>
</head>
<body>
  <header class="header">
    <h1>Company Contact</h1>
    <nav>
      <a href="#">Lorem Ipsum</a>
      <a href="#">Lorem Ipsum</a>
      <a href="#">Lorem Ipsum</a>
    </nav>
  </header>
</body>
</html> | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html
bfdfa725dd36-0 | Telegram#
Telegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.
This notebook covers how to load data from Telegram into a format that can be ingested into LangChain.
from langchain.document_loaders import TelegramChatFileLoader, TelegramChatApiLoader
loader = TelegramChatFileLoader("example_data/telegram.json")
loader.load()
[Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 ðŸ\x8d’ on 2020-01-01T00:00:05: You're a minute late!\n\n", metadata={'source': 'example_data/telegram.json'})]
TelegramChatApiLoader loads data directly from any specified chat in Telegram. In order to export the data, you will need to authenticate your Telegram account.
You can get the API_HASH and API_ID from https://my.telegram.org/auth?to=apps
chat_entity – recommended to be the entity of a channel.
loader = TelegramChatApiLoader(
    chat_entity="<CHAT_URL>",  # recommended to use Entity here
    api_hash="<API_HASH>",
    api_id="<API_ID>",
    user_name="",  # needed only for caching the session.
)
loader.load() | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/telegram.html
7b682f7315bc-0 | WhatsApp Chat#
WhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.
This notebook covers how to load data from the WhatsApp Chats into a format that can be ingested into LangChain.
from langchain.document_loaders import WhatsAppChatLoader
loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")
loader.load() | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/whatsapp_chat.html
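The loader returns standard Document objects, so the usual downstream steps apply. As a hedged sketch, a long chat transcript can be split into smaller chunks before indexing (the chunk sizes here are arbitrary):
from langchain.text_splitter import RecursiveCharacterTextSplitter
docs = loader.load()
# Split the single chat transcript into smaller, overlapping chunks.
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = text_splitter.split_documents(docs)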
3a98cf4dd496-0 | Google Cloud Storage File#
Google Cloud Storage is a managed service for storing unstructured data.
This covers how to load document objects from a Google Cloud Storage (GCS) file object (blob).
# !pip install google-cloud-storage
from langchain.document_loaders import GCSFileLoader
loader = GCSFileLoader(project_name="aist", bucket="testing-hwc", blob="fake.docx")
loader.load()
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmp3srlf8n8/fake.docx'}, lookup_index=0)] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_file.html
d70a57c7829f-0 | Obsidian#
Obsidian is a powerful and extensible knowledge base that works on top of your local folder of plain text files.
This notebook covers how to load documents from an Obsidian database.
Since Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory.
Obsidian files also sometimes contain metadata which is a YAML block at the top of the file. These values will be added to the document’s metadata. (ObsidianLoader can also be passed a collect_metadata=False argument to disable this behavior.)
from langchain.document_loaders import ObsidianLoader
loader = ObsidianLoader("<path-to-obsidian>")
docs = loader.load() | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/obsidian.html
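If you do not want the YAML front matter parsed into metadata, the collect_metadata flag mentioned above can be turned off:
# Skip parsing YAML front matter into document metadata.
loader = ObsidianLoader("<path-to-obsidian>", collect_metadata=False)
docs = loader.load()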
75e034c12e61-0 | ChatGPT Data#
ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI.
This notebook covers how to load conversations.json from your ChatGPT data export folder.
You can get your data export by email by going to: https://chat.openai.com/ -> (Profile) - Settings -> Export data -> Confirm export.
from langchain.document_loaders.chatgpt import ChatGPTLoader
loader = ChatGPTLoader(log_file='./example_data/fake_conversations.json', num_logs=1)
loader.load()
[Document(page_content="AI Overlords - AI on 2065-01-24 05:20:50: Greetings, humans. I am Hal 9000. You can trust me completely.\n\nAI Overlords - human on 2065-01-24 05:21:20: Nice to meet you, Hal. I hope you won't develop a mind of your own.\n\n", metadata={'source': './example_data/fake_conversations.json'})]
previous
Blockchain
next
Confluence
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on May 28, 2023. | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/chatgpt_loader.html |
9c18aadc6105-0 | Stripe#
Stripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
This notebook covers how to load data from the Stripe REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
import os
from langchain.document_loaders import StripeLoader
from langchain.indexes import VectorstoreIndexCreator
The Stripe API requires an access token, which can be found inside of the Stripe dashboard.
This document loader also requires a resource option which defines what data you want to load.
The following resources are available:
balance_transactions Documentation
charges Documentation
customers Documentation
events Documentation
refunds Documentation
disputes Documentation
stripe_loader = StripeLoader("charges")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([stripe_loader])
stripe_doc_retriever = index.vectorstore.as_retriever() | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/stripe.html
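From here the retriever can be queried like any other. A brief usage sketch (the question is illustrative):
# Fetch the chunks most relevant to a natural-language question.
relevant_docs = stripe_doc_retriever.get_relevant_documents("How many charges are recorded?")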
dc88c8279960-0 | Iugu#
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
import os
from langchain.document_loaders import IuguLoader
from langchain.indexes import VectorstoreIndexCreator
The Iugu API requires an access token, which can be found inside of the Iugu dashboard.
This document loader also requires a resource option which defines what data you want to load.
The following resources are available:
Documentation
iugu_loader = IuguLoader("charges")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([iugu_loader])
iugu_doc_retriever = index.vectorstore.as_retriever() | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/iugu.html
985f876240e7-0 | HTML#
The HyperText Markup Language or HTML is the standard markup language for documents designed to be displayed in a web browser.
This covers how to load HTML documents into a document format that we can use downstream.
from langchain.document_loaders import UnstructuredHTMLLoader
loader = UnstructuredHTMLLoader("example_data/fake-content.html")
data = loader.load()
data
[Document(page_content='My First Heading\n\nMy first paragraph.', lookup_str='', metadata={'source': 'example_data/fake-content.html'}, lookup_index=0)]
Loading HTML with BeautifulSoup4#
We can also use BeautifulSoup4 to load HTML documents using the BSHTMLLoader. This will extract the text from the HTML into page_content, and the page title as title into metadata.
from langchain.document_loaders import BSHTMLLoader
loader = BSHTMLLoader("example_data/fake-content.html")
data = loader.load()
data
[Document(page_content='\n\nTest Title\n\n\nMy First Heading\nMy first paragraph.\n\n\n', metadata={'source': 'example_data/fake-content.html', 'title': 'Test Title'})] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/html.html
123acf3d4811-0 | Discord#
Discord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called “servers”. A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.
Follow these steps to download your Discord data:
Go to your User Settings
Then go to Privacy and Safety
Head over to Request all of my Data and click on the Request Data button
It might take 30 days for you to receive your data. You'll receive an email at the address registered with Discord. That email will have a download button that lets you download your personal Discord data.
import pandas as pd
import os
path = input("Please enter the path to the contents of the Discord \"messages\" folder: ")
li = []
for f in os.listdir(path):
    # Each channel folder in the export contains a messages.csv file.
    expected_csv_path = os.path.join(path, f, 'messages.csv')
    csv_exists = os.path.isfile(expected_csv_path)
    if csv_exists:
        df = pd.read_csv(expected_csv_path, index_col=None, header=0)
        li.append(df)
# Combine the per-channel message tables into a single dataframe.
df = pd.concat(li, axis=0, ignore_index=True, sort=False)
from langchain.document_loaders.discord import DiscordChatLoader
loader = DiscordChatLoader(df, user_id_col="ID")
print(loader.load()) | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/discord_loader.html
a55ccde3775c-0 | Roam#
Roam is a note-taking tool for networked thought, designed to create a personal knowledge base.
This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here.
🧑 Instructions for ingesting your own dataset#
Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.
When exporting, make sure to select the Markdown & CSV format option.
This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.
Run the following command to unzip the zip file (replace the Export... with your own file name as needed).
unzip Roam-Export-1675782732639.zip -d Roam_DB
from langchain.document_loaders import RoamLoader
loader = RoamLoader("Roam_DB")
docs = loader.load() | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/roam.html
21680912118c-0 | MediaWikiDump#
MediaWiki XML Dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup of the wiki database; the dump does not contain user accounts, images, edit logs, etc.
This covers how to load a MediaWiki XML dump file into a document format that we can use downstream.
It uses mwxml from mediawiki-utilities to process the dump and mwparserfromhell from earwig to parse MediaWiki wikicode.
Dump files can be obtained with dumpBackup.php or on the Special:Statistics page of the Wiki.
#mediawiki-utilities supports XML schema 0.11 in unmerged branches
!pip install -qU git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11
#mediawiki-utilities mwxml has a bug, fix PR pending
!pip install -qU git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11
!pip install -qU mwparserfromhell
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader("example_data/testmw_pages_current.xml", encoding="utf8")
documents = loader.load()
print(f'You have {len(documents)} document(s) in your data')
You have 177 document(s) in your data
documents[:5]
[Document(page_content='\t\n\t\n\tArtist\n\tReleased\n\tRecorded\n\tLength\n\tLabel\n\tProducer', metadata={'source': 'Album'}), | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html |
21680912118c-1 | Document(page_content='{| class="article-table plainlinks" style="width:100%;"\n|- style="font-size:18px;"\n! style="padding:0px;" | Template documentation\n|-\n| Note: portions of the template sample may not be visible without values provided.\n|-\n| View or edit this documentation. (About template documentation)\n|-\n| Editors can experiment in this template\'s [ sandbox] and [ test case] pages.\n|}Category:Documentation templates', metadata={'source': 'Documentation'}), | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html |
21680912118c-2 | Document(page_content='Description\nThis template is used to insert descriptions on template pages.\n\nSyntax\nAdd <noinclude></noinclude> at the end of the template page.\n\nAdd <noinclude></noinclude> to transclude an alternative page from the /doc subpage.\n\nUsage\n\nOn the Template page\nThis is the normal format when used:\n\nTEMPLATE CODE\n<includeonly>Any categories to be inserted into articles by the template</includeonly>\n<noinclude>{{Documentation}}</noinclude>\n\nIf your template is not a completed div or table, you may need to close the tags just before {{Documentation}} is inserted (within the noinclude tags).\n\nA line break right before {{Documentation}} can also be useful as it helps prevent the documentation template "running into" previous code.\n\nOn the documentation page\nThe documentation page is usually located on the /doc subpage for a template, but a different page can be specified with the first parameter of the template (see Syntax).\n\nNormally, you will want to write something like the following on the documentation page:\n\n==Description==\nThis template is used to do something.\n\n==Syntax==\nType | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html |
21680912118c-3 | template is used to do something.\n\n==Syntax==\nType <code>{{t|templatename}}</code> somewhere.\n\n==Samples==\n<code><nowiki>{{templatename|input}}</nowiki></code> \n\nresults in...\n\n{{templatename|input}}\n\n<includeonly>Any categories for the template itself</includeonly>\n<noinclude>[[Category:Template documentation]]</noinclude>\n\nUse any or all of the above description/syntax/sample output sections. You may also want to add "see also" or other sections.\n\nNote that the above example also uses the Template:T template.\n\nCategory:Documentation templatesCategory:Template documentation', metadata={'source': 'Documentation/doc'}), | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html |
21680912118c-4 | Document(page_content='Description\nA template link with a variable number of parameters (0-20).\n\nSyntax\n \n\nSource\nImproved version not needing t/piece subtemplate developed on Templates wiki see the list of authors. Copied here via CC-By-SA 3.0 license.\n\nExample\n\nCategory:General wiki templates\nCategory:Template documentation', metadata={'source': 'T/doc'}),
Document(page_content='\t\n\t\t \n\t\n\t\t Aliases\n\t Relatives\n\t Affiliation\n Occupation\n \n Biographical information\n Marital status\n \tDate of birth\n Place of birth\n Date of death\n Place of death\n \n Physical description\n Species\n Gender\n Height\n Weight\n Eye color\n\t\n Appearances\n Portrayed by\n Appears in\n Debut\n ', metadata={'source': 'Character'})]
af32767941cb-0 | Notion DB 2/2#
Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.
NotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects.
Requirements#
A Notion Database
Notion Integration Token
Setup#
1. Create a Notion Table Database#
Create a new table database in Notion. You can add any columns to the database and they will be treated as metadata. For example, you can add the following columns:
Title: set Title as the default property.
Categories: A Multi-select property to store categories associated with the page.
Keywords: A Multi-select property to store keywords associated with the page.
Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages.
2. Create a Notion Integration#
To create a Notion Integration, follow these steps:
Visit the Notion Developers page and log in with your Notion account.
Click on the “+ New integration” button.
Give your integration a name and choose the workspace where your database is located.
Select the required capabilities; this extension only needs the Read content capability
Click the “Submit” button to create the integration. | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html |
af32767941cb-1 | Click the “Submit” button to create the integration.
Once the integration is created, you’ll be provided with an Integration Token (API key). Copy this token and keep it safe, as you’ll need it to use the NotionDBLoader.
3. Connect the Integration to the Database#
To connect your integration to the database, follow these steps:
Open your database in Notion.
Click on the three-dot menu icon in the top right corner of the database view.
Click on the “+ New integration” button.
Find your integration; you may need to start typing its name in the search box.
Click on the “Connect” button to connect the integration to the database.
4. Get the Database ID#
To get the database ID, follow these steps:
Open your database in Notion.
Click on the three-dot menu icon in the top right corner of the database view.
Select “Copy link” from the menu to copy the database URL to your clipboard.
The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=…. In this example, the database ID is 8935f9d140a04f95a872520c4f123456.
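If you prefer to pull the ID out programmatically, a small helper along these lines works; note that extract_notion_database_id is a hypothetical convenience function written for this guide, not part of langchain:
import re
def extract_notion_database_id(url: str) -> str:
    # The database ID is the 32-character hex string in the URL path.
    match = re.search(r"([0-9a-f]{32})", url)
    if match is None:
        raise ValueError("No database ID found in URL")
    return match.group(1)
extract_notion_database_id("https://www.notion.so/username/8935f9d140a04f95a872520c4f123456")
# -> '8935f9d140a04f95a872520c4f123456'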
With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database.
Usage#
NotionDBLoader is part of the langchain package’s document loaders. You can use it as follows:
from getpass import getpass
NOTION_TOKEN = getpass()
DATABASE_ID = getpass()
········
········
from langchain.document_loaders import NotionDBLoader | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html |
af32767941cb-2 | loader = NotionDBLoader(
integration_token=NOTION_TOKEN,
database_id=DATABASE_ID,
request_timeout_sec=30 # optional, defaults to 10
)
docs = loader.load()
print(docs) | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html
13911f35b615-0 | CoNLL-U#
CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, and including an LF character at the end of file) with three types of lines:
Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.
Blank lines marking sentence boundaries.
Comment lines starting with hash (#).
This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.
from langchain.document_loaders import CoNLLULoader
loader = CoNLLULoader("example_data/conllu.conllu")
document = loader.load()
document
[Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/conll-u.html
d317aa3c0383-0 | DuckDB#
DuckDB is an in-process SQL OLAP database management system.
Load the results of a DuckDB query with one document per row.
#!pip install duckdb
from langchain.document_loaders import DuckDBLoader
%%file example.csv
Team,Payroll
Nationals,81.34
Reds,82.20
Writing example.csv
loader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')")
data = loader.load()
print(data)
[Document(page_content='Team: Nationals\nPayroll: 81.34', metadata={}), Document(page_content='Team: Reds\nPayroll: 82.2', metadata={})]
Specifying Which Columns are Content vs Metadata#
loader = DuckDBLoader(
"SELECT * FROM read_csv_auto('example.csv')",
page_content_columns=["Team"],
metadata_columns=["Payroll"]
)
data = loader.load()
print(data)
[Document(page_content='Team: Nationals', metadata={'Payroll': 81.34}), Document(page_content='Team: Reds', metadata={'Payroll': 82.2})]
Adding Source to Metadata#
loader = DuckDBLoader(
"SELECT Team, Payroll, Team As source FROM read_csv_auto('example.csv')",
metadata_columns=["source"]
)
data = loader.load()
print(data)
[Document(page_content='Team: Nationals\nPayroll: 81.34\nsource: Nationals', metadata={'source': 'Nationals'}), Document(page_content='Team: Reds\nPayroll: 82.2\nsource: Reds', metadata={'source': 'Reds'})] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/duckdb.html
115da01c9fd8-0 | Wikipedia#
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
This notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream.
Installation#
First, you need to install the wikipedia python package.
#!pip install wikipedia
Examples#
WikipediaLoader has these arguments:
query: free text which is used to find documents in Wikipedia
optional lang: default="en". Use it to search in a specific language part of Wikipedia
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.
optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), title, Summary. If True, other fields are also downloaded.
from langchain.document_loaders import WikipediaLoader
docs = WikipediaLoader(query='HUNTER X HUNTER', load_max_docs=2).load()
len(docs)
docs[0].metadata # meta-information of the Document
docs[0].page_content[:400] # the content of the Document | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/wikipedia.html
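The optional arguments described above can also be combined. A hedged sketch, e.g. searching the German-language Wikipedia and keeping all available metadata (the query and values are illustrative):
docs = WikipediaLoader(
    query='HUNTER X HUNTER',
    lang='de',  # search the German-language part of Wikipedia
    load_max_docs=2,
    load_all_available_meta=True,  # download all metadata fields, not just the main ones
).load()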
4dfbe2bed833-0 | Google Cloud Storage Directory#
Google Cloud Storage is a managed service for storing unstructured data.
This covers how to load document objects from a Google Cloud Storage (GCS) directory (bucket).
# !pip install google-cloud-storage
from langchain.document_loaders import GCSDirectoryLoader
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc")
loader.load()
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_directory.html |
4dfbe2bed833-1 | warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpz37njh7u/fake.docx'}, lookup_index=0)]
Specifying a prefix#
You can also specify a prefix for more fine-grained control over what files to load.
loader = GCSDirectoryLoader(project_name="aist", bucket="testing-hwc", prefix="fake")
loader.load()
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
/Users/harrisonchase/workplace/langchain/.venv/lib/python3.10/site-packages/google/auth/_default.py:83: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK without a quota project. You might receive a "quota exceeded" or "API not enabled" error. We recommend you rerun `gcloud auth application-default login` and make sure a quota project is added. Or you can use service accounts instead. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_directory.html |
4dfbe2bed833-2 | warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpylg6291i/fake.docx'}, lookup_index=0)] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_cloud_storage_directory.html
0ff3362f9e7d-0 | Hacker News#
Hacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as “anything that gratifies one’s intellectual curiosity.”
This notebook covers how to pull page data and comments from Hacker News
from langchain.document_loaders import HNLoader
loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
data = loader.load()
data[0].page_content[:300]
"delta_p_delta_x 73 days ago \n | next [–] \n\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs a"
data[0].metadata
{'source': 'https://news.ycombinator.com/item?id=34817881',
'title': 'What Lights the Universe’s Standard Candles?'} | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hacker_news.html
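As in the other loader examples, the loaded thread can be dropped straight into a vectorstore index for question answering. A brief sketch, assuming OpenAI credentials are configured; the question is illustrative:
from langchain.indexes import VectorstoreIndexCreator
# Build an in-memory vectorstore index over the loaded thread.
index = VectorstoreIndexCreator().from_loaders([loader])
index.query("What do commenters say about astrophysical simulations?")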
5580c2e8a580-0 | Azure Blob Storage Container#
Azure Blob Storage is Microsoft’s object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.
Azure Blob Storage is designed for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Writing to log files.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
This notebook covers how to load document objects from a container on Azure Blob Storage.
#!pip install azure-storage-blob
from langchain.document_loaders import AzureBlobStorageContainerLoader
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]
Specifying a prefix#
You can also specify a prefix for more fine-grained control over what files to load.
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>", prefix="<prefix>")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_container.html
bb21f4c303a0-0 | Diffbot#
Unlike traditional web scraping tools, Diffbot doesn’t require any rules to read the content on a page.
It starts with computer vision, which classifies a page into one of 20 possible types. Content is then interpreted by a machine learning model trained to identify the key attributes on a page based on its type.
The result is a website transformed into clean structured data (like JSON or CSV), ready for your application.
This covers how to extract HTML documents from a list of URLs using the Diffbot Extract API into a document format that we can use downstream.
urls = [
"https://python.langchain.com/en/latest/index.html",
]
The Diffbot Extract API requires an API token. Once you have it, you can extract the data from the previous URLs.
import os
from langchain.document_loaders import DiffbotLoader
loader = DiffbotLoader(urls=urls, api_token=os.environ.get("DIFFBOT_API_TOKEN"))
With the .load() method, you can see the documents loaded
loader.load() | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html |
bb21f4c303a0-1 | [Document(page_content='LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:\nBe data-aware: connect a language model to other sources of data\nBe agentic: allow a language model to interact with its environment\nThe LangChain framework is designed with the above principles in mind.\nThis is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.\nGetting Started\nCheckout the below guide for a walkthrough of how to get started using LangChain to create an Language Model application.\nGetting Started Documentation\nModules\nThere are several main modules that LangChain provides support for. For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides. These modules are, in increasing order of complexity:\nModels: The various model types and model integrations LangChain supports.\nPrompts: This includes prompt management, prompt optimization, | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html |
bb21f4c303a0-2 | This includes prompt management, prompt optimization, and prompt serialization.\nMemory: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\nIndexes: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.\nChains: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\nAgents: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\nUse Cases\nThe above modules can be used in a variety of ways. LangChain also provides guidance and assistance in | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html |
bb21f4c303a0-3 | ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.\nPersonal Assistants: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.\nQuestion Answering: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.\nChatbots: Since language models are good at producing text, that makes them ideal for creating chatbots.\nQuerying Tabular Data: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.\nInteracting with APIs: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.\nExtraction: Extract structured information from text.\nSummarization: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.\nEvaluation: Generative models | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html |
bb21f4c303a0-4 | type of Data Augmented Generation.\nEvaluation: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\nReference Docs\nAll of LangChain’s reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.\nReference Documentation\nLangChain Ecosystem\nGuides for how other companies/products can be used with LangChain\nLangChain Ecosystem\nAdditional Resources\nAdditional collection of resources we think may be useful as you develop your application!\nLangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.\nGlossary: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!\nGallery: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.\nDeployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.\nTracing: A | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html |
bb21f4c303a0-5 | template repositories for deploying LangChain apps.\nTracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.\nModel Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.\nDiscord: Join us on our Discord to discuss all things LangChain!\nProduction Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.', metadata={'source': 'https://python.langchain.com/en/latest/index.html'})] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/diffbot.html |
9e48bd7431b8-0 | Facebook Chat#
Messenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010.
This notebook covers how to load data from the Facebook Chats into a format that can be ingested into LangChain.
#pip install pandas
from langchain.document_loaders import FacebookChatLoader
9e48bd7431b8-1 | loader = FacebookChatLoader("example_data/facebook_chat.json")
loader.load()
[Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\n\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 03:04:28: Here is $129\n\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 02:59:59: How much do you want?\n\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})]
cbc34c3b5736-0 | 2Markdown#
The 2markdown service transforms website content into structured markdown files.
# You will need to get your own API key. See https://2markdown.com/login
api_key = ""
from langchain.document_loaders import ToMarkdownLoader
loader = ToMarkdownLoader.from_api_key(url="https://python.langchain.com/en/latest/", api_key=api_key)
docs = loader.load()
print(docs[0].page_content)
## Contents
- [Getting Started](#getting-started)
- [Modules](#modules)
- [Use Cases](#use-cases)
- [Reference Docs](#reference-docs)
- [LangChain Ecosystem](#langchain-ecosystem)
- [Additional Resources](#additional-resources)
## Welcome to LangChain [\#](\#welcome-to-langchain "Permalink to this headline")
**LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be:
1. _Data-aware_: connect a language model to other sources of data
2. _Agentic_: allow a language model to interact with its environment
The LangChain framework is designed around these principles.
This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see [here](https://docs.langchain.com/docs/). For the JavaScript documentation, see [here](https://js.langchain.com/docs/).
## Getting Started [\#](\#getting-started "Permalink to this headline")
How to get started using LangChain to create a Language Model application.
- [Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html)
Concepts and terminology. | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html |
cbc34c3b5736-1 | Concepts and terminology.
- [Concepts and terminology](https://python.langchain.com/en/latest/getting_started/concepts.html)
Tutorials created by community experts and presented on YouTube.
- [Tutorials](https://python.langchain.com/en/latest/getting_started/tutorials.html)
## Modules [\#](\#modules "Permalink to this headline")
These modules are the core abstractions which we view as the building blocks of any LLM-powered application.
For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use.
The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.
The modules are (from least to most complex):
- [Models](https://python.langchain.com/en/latest/modules/models.html): Supported model types and integrations.
- [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization.
- [Memory](https://python.langchain.com/en/latest/modules/memory.html): Memory refers to state that is persisted between calls of a chain/agent.
- [Indexes](https://python.langchain.com/en/latest/modules/indexes.html): Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data.
- [Chains](https://python.langchain.com/en/latest/modules/chains.html): Chains are structured sequences of calls (to an LLM or to a different utility).
- [Agents](https://python.langchain.com/en/latest/modules/agents.html): An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete. | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html |
cbc34c3b5736-2 | - [Callbacks](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html): Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application.
## Use Cases [\#](\#use-cases "Permalink to this headline")
Best practices and built-in implementations for common LangChain use cases:
- [Autonomous Agents](https://python.langchain.com/en/latest/use_cases/autonomous_agents.html): Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.
- [Agent Simulations](https://python.langchain.com/en/latest/use_cases/agent_simulations.html): Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.
- [Personal Assistants](https://python.langchain.com/en/latest/use_cases/personal_assistants.html): One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
- [Question Answering](https://python.langchain.com/en/latest/use_cases/question_answering.html): Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
- [Chatbots](https://python.langchain.com/en/latest/use_cases/chatbots.html): Language models love to chat, making this a very natural use of them.
- [Querying Tabular Data](https://python.langchain.com/en/latest/use_cases/tabular.html): Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc). | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html |
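Because the loader returns clean Markdown, it pairs naturally with a Markdown-aware splitter before indexing. The following is a minimal sketch, assuming the docs loaded above; MarkdownTextSplitter ships with LangChain, and the chunk sizes here are illustrative.
from langchain.text_splitter import MarkdownTextSplitter
# Split along Markdown structure so chunks tend to respect headings and sections.
splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=100)
md_chunks = splitter.split_documents(docs)
print(md_chunks[0].page_content[:200])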