find comparisons.16hKevin PeltonWOMEN'S ELITE EIGHT SCOREBOARDMONDAY'S GAMESCheck your bracket!NBA DRAFTHow top prospects fared on the road to the Final FourThe 2023 NCAA tournament is down to four teams, and ESPN's Jonathan Givony recaps the players who saw their NBA draft stock change.11hJonathan GivonyAndy Lyons/Getty ImagesTALKING BASKETBALLWhy AD needs to be more assertive with LeBron on the court9h1:33Why Perk won't blame Kyrie for Mavs' woes8h1:48WHERE EVERY TEAM STANDSNew NFL Power Rankings: Post-free-agency 1-32 poll, plus underrated offseason movesThe free agent frenzy has come and gone. Which teams have improved their 2023 outlook, and which teams have taken a hit?12hNFL Nation reportersIllustration by ESPNTHE BUCK STOPS WITH BELICHICKBruschi: Fair to criticize Bill Belichick for Patriots' struggles10h1:27 Top HeadlinesQB Jackson has requested trade from RavensSources: Texas hiring Terry as full-time coachJets GM: No rush on Rodgers; Lamar not optionLove to leave North Carolina, enter transfer portalBelichick to angsty Pats fans: See last 25 yearsEmbiid out, Harden due back vs. Jokic, NuggetsLynch: Purdy 'earned the right' to start for NinersMan Utd, Wrexham plan July friendly in San DiegoOn paper, Padres overtake DodgersFavorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InMarch Madness LiveESPNMarch Madness LiveWatch every men's NCAA tournament game live! ICYMI1:42Austin Peay's coach, pitcher and catcher all ejected after retaliation pitchAustin Peay's pitcher, catcher and coach were all ejected after a pitch was thrown at Liberty's Nathan Keeter, who earlier in the game hit a home run and celebrated while running down the third-base line. Men's Tournament ChallengeIllustration by ESPNMen's Tournament ChallengeCheck your bracket(s) in the 2023 Men's Tournament Challenge, which you can follow throughout the Big Dance. Women's Tournament ChallengeIllustration by ESPNWomen's Tournament ChallengeCheck your bracket(s) in the 2023 Women's Tournament Challenge, which you can follow throughout the Big Dance. Best of ESPN+AP Photo/Lynne SladkyFantasy Baseball ESPN+ Cheat Sheet: Sleepers, busts, rookies and closersYou've read their names all preseason long, it'd be a shame to forget them on draft day. The ESPN+ Cheat Sheet is one way to make sure that doesn't happen.Steph Chambers/Getty ImagesPassan's 2023 MLB season preview: Bold predictions and moreOpening Day is just over a week away -- and Jeff Passan has everything you need to know covered from every possible angle.Photo by Bob Kupbens/Icon Sportswire2023 NFL free agency: Best team fits for unsigned playersWhere could Ezekiel Elliott land? Let's match remaining free agents to teams and find fits for two trade candidates.Illustration by ESPN2023 NFL mock draft: Mel Kiper's first-round pick predictionsMel Kiper Jr. makes his predictions for Round 1 of the NFL draft, including projecting a trade in the top five. Trending NowAnne-Marie Sorvin-USA TODAY SBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker.Mandatory Credit: William Purnell-USA TODAY Sports2023 NFL full draft order: AFC, NFC team picks for all roundsStarting with the Carolina Panthers at No. 1 overall, here's the entire 2023 NFL draft broken down round by round. How to Watch on ESPN+Gregory Fisher/Icon Sportswire2023 NCAA men's hockey: Results, bracket, how to watchThe matchups in Tampa promise to be thrillers, featuring plenty of star power, high-octane offense and stellar defense.(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+.Hailie Lynch/XFLHow to watch the XFL: 2023 schedule, teams, players, news, moreEvery XFL game will be streamed on ESPN+. Find out when and where else you can watch the eight teams compete. Sign up to play the #1 Fantasy Baseball GameReactivate A LeagueCreate A LeagueJoin a Public LeaguePractice With a Mock DraftSports BettingAP Photo/Mike KropfMarch Madness betting 2023: Bracket odds, lines, tips, moreThe 2023 NCAA tournament brackets have finally been released, and we have everything you need to know to make a bet on all of the March Madness games. Sign up to play the #1 Fantasy game!Create A LeagueJoin Public LeagueReactivateMock Draft Now\n\nESPN+\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nXFL\n\n\n\n\n\n\n\nMLB: Select Games\n\n\n\n\n\n\n\nNCAA Baseball\n\n\n\n\n\n\n\nNCAA Softball\n\n\n\n\n\n\n\nCricket: Select Matches\n\n\n\n\n\n\n\nMel Kiper's NFL Mock Draft 3.0\n\n\nQuick Links\n\n\n\n\nMen's Tournament Challenge\n\n\n\n\n\n\n\nWomen's Tournament Challenge\n\n\n\n\n\n\n\nNFL Draft Order\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch PGA TOUR\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0),
Document(page_content='GoogleSearch Images Maps Play YouTube News Gmail Drive More »Web History | Settings | Sign in\xa0Advanced searchAdvertisingBusiness SolutionsAbout Google© 2023 - Privacy - Terms ', lookup_str='', metadata={'source': 'https://google.com'}, lookup_index=0)]
Loading an XML file, or using a different BeautifulSoup parser#
You can also look at SitemapLoader for an example of how to load a sitemap file, which uses this feature; a minimal sketch follows the output below.
loader = WebBaseLoader("https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml")
loader.default_parser = "xml"
docs = loader.load()
docs
[Document(page_content='\n\n10\nEnergy\n3\n2018-01-01\n2018-01-01\nfalse\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n§ 431.86\nSection § 431.86\n\nEnergy\nDEPARTMENT OF ENERGY\nENERGY CONSERVATION\nENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT\nCommercial Packaged Boilers\nTest Procedures\n\n\n\n\n§\u2009431.86\nUniform test method for the measurement of energy efficiency of commercial packaged boilers.\n(a) Scope. This section provides test procedures, pursuant to the Energy Policy and Conservation Act (EPCA), as amended, which must be followed for measuring the combustion efficiency and/or thermal efficiency of a gas- or oil-fired commercial packaged boiler.\n(b) Testing and Calculations. Determine the thermal efficiency or combustion efficiency of commercial packaged boilers by conducting the appropriate test procedure(s) indicated in Table 1 of this section.\n\nTable 1—Test Requirements for Commercial Packaged Boiler Equipment Classes\n\nEquipment category\nSubcategory\nCertified rated inputBtu/h\n\nStandards efficiency metric(§\u2009431.87)\n\nTest procedure(corresponding to\nstandards efficiency\nmetric required\nby §\u2009431.87)\n\n\n\nHot Water\nGas-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nGas-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nHot Water\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nHot Water\nOil-fired\n>2,500,000\nCombustion Efficiency\nAppendix A, Section 3.\n\n\nSteam\nGas-fired (all*)\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nSteam\nGas-fired (all*)\n>2,500,000 and ≤5,000,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\n\u2003\n\n>5,000,000\nThermal Efficiency\nAppendix A, Section 2.OR\nAppendix A, Section 3 with Section 2.4.3.2.\n\n\n\nSteam\nOil-fired\n≥300,000 and ≤2,500,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\nSteam\nOil-fired\n>2,500,000 and ≤5,000,000\nThermal Efficiency\nAppendix A, Section 2.\n\n\n\u2003\n\n>5,000,000\nThermal Efficiency\nAppendix A, Section 2.OR\nAppendix A, Section 3. with Section 2.4.3.2.\n\n\n\n*\u2009Equipment classes for commercial packaged boilers as of July 22, 2009 (74 FR 36355) distinguish between gas-fired natural draft and all other gas-fired (except natural draft).\n\n(c) Field Tests. The field test provisions of appendix A may be used only to test a unit of commercial packaged boiler with rated input greater than 5,000,000 Btu/h.\n[81 FR 89305, Dec. 9, 2016]\n\n\nEnergy Efficiency Standards\n\n', lookup_str='', metadata={'source': 'https://www.govinfo.gov/content/pkg/CFR-2018-title10-vol3/xml/CFR-2018-title10-vol3-sec431-86.xml'}, lookup_index=0)]
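As mentioned above, SitemapLoader builds on this XML-parsing machinery. A minimal sketch (the sitemap URL is illustrative):
from langchain.document_loaders.sitemap import SitemapLoader
# Load every page listed in a sitemap file (illustrative URL)
sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
sitemap_docs = sitemap_loader.load()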
Wikipedia#
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
This notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream.
Installation#
First, you need to install the wikipedia Python package.
#!pip install wikipedia
Examples#
WikipediaLoader has these arguments:
query: free text used to find documents in Wikipedia
optional lang: default="en". Use it to search in a specific language part of Wikipedia
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.
optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (the date the document was published or last updated), title, Summary. If True, the other fields are also downloaded.
from langchain.document_loaders import WikipediaLoader
docs = WikipediaLoader(query='HUNTER X HUNTER', load_max_docs=2).load()
len(docs)
docs[0].metadata # meta-information of the Document
docs[0].page_content[:400] # the content of the Document
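The optional arguments listed above can be combined in the same call; a quick sketch that pulls the full metadata:
docs = WikipediaLoader(query='HUNTER X HUNTER', load_max_docs=2, load_all_available_meta=True).load()
docs[0].metadata # now includes all available fields, not just the defaults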
Iugu#
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
import os
from langchain.document_loaders import IuguLoader
from langchain.indexes import VectorstoreIndexCreator
The Iugu API requires an access token, which can be found inside of the Iugu dashboard.
This document loader also requires a resource option which defines what data you want to load.
The following resources are available:
Documentation
iugu_loader = IuguLoader("charges")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([iugu_loader])
iugu_doc_retriever = index.vectorstore.as_retriever()
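As a usage sketch, you can then query the retriever (the question below is illustrative, and the default embeddings assume an OpenAI API key is set):
iugu_doc_retriever.get_relevant_documents("How many charges are there?")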
Copy Paste#
This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.
from langchain.docstore.document import Document
text = "..... put the text you copy pasted here......"
doc = Document(page_content=text)
Metadata#
If you want to add metadata about where you got this piece of text, you can easily do so with the metadata key.
metadata = {"source": "internet", "date": "Friday"}
doc = Document(page_content=text, metadata=metadata)
Weather#
OpenWeatherMap is an open source weather service provider.
This loader fetches the weather data from OpenWeatherMap's OneCall API, using the pyowm Python package. You must initialize the loader with your OpenWeatherMap API token and the names of the cities you want the weather data for.
from langchain.document_loaders import WeatherDataLoader
#!pip install pyowm
# Set API key either by passing it in to constructor directly
# or by setting the environment variable "OPENWEATHERMAP_API_KEY".
from getpass import getpass
OPENWEATHERMAP_API_KEY = getpass()
loader = WeatherDataLoader.from_params(['chennai','vellore'], openweathermap_api_key=OPENWEATHERMAP_API_KEY)
documents = loader.load()
documents
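Alternatively, a sketch of the environment-variable route mentioned in the comment above (assuming the loader falls back to the environment when no key is passed):
import os
os.environ["OPENWEATHERMAP_API_KEY"] = OPENWEATHERMAP_API_KEY # picked up by the loader
loader = WeatherDataLoader.from_params(['chennai', 'vellore'])
documents = loader.load()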
URL#
This covers how to load HTML documents from a list of URLs into a document format that we can use downstream.
from langchain.document_loaders import UnstructuredURLLoader
urls = [
"https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
"https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023"
]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
Selenium URL Loader#
This covers how to load HTML documents from a list of URLs using the SeleniumURLLoader.
Using selenium allows us to load pages that require JavaScript to render.
Setup#
To use the SeleniumURLLoader, you will need to install selenium and unstructured.
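A minimal install sketch, mirroring the Playwright setup shown below:
!pip install "selenium"
!pip install "unstructured"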
from langchain.document_loaders import SeleniumURLLoader
urls = [
"https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"https://goo.gl/maps/NDSHwePEyaHMFGwh8"
]
loader = SeleniumURLLoader(urls=urls)
data = loader.load()
Playwright URL Loader#
This covers how to load HTML documents from a list of URLs using the PlaywrightURLLoader.
As in the Selenium case, Playwright allows us to load pages that need JavaScript to render.
Setup#
To use the PlaywrightURLLoader, you will need to install playwright and unstructured. Additionally, you will need to install the Playwright Chromium browser:
# Install playwright
!pip install "playwright"
!pip install "unstructured"
!playwright install
from langchain.document_loaders import PlaywrightURLLoader
urls = [
"https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"https://goo.gl/maps/NDSHwePEyaHMFGwh8"
]
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])
data = loader.load()
IMSDb#
IMSDb is the Internet Movie Script Database.
This covers how to load IMSDb webpages into a document format that we can use downstream.
from langchain.document_loaders import IMSDbLoader
loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")
data = loader.load()
data[0].page_content[:500]
'\n\r\n\r\n\r\n\r\n BLACKKKLANSMAN\r\n \r\n \r\n \r\n \r\n Written by\r\n\r\n Charlie Wachtel & David Rabinowitz\r\n\r\n and\r\n\r\n Kevin Willmott & Spike Lee\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n FADE IN:\r\n \r\n SCENE FROM "GONE WITH'
data[0].metadata
{'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'}
File Directory#
This covers how to use the DirectoryLoader to load all documents in a directory. Under the hood, by default this uses the UnstructuredLoader.
from langchain.document_loaders import DirectoryLoader
We can use the glob parameter to control which files to load. Note that here it doesn't load the .rst file or the .ipynb files.
loader = DirectoryLoader('../', glob="**/*.md")
docs = loader.load()
len(docs)
1
Show a progress bar#
By default a progress bar will not be shown. To show a progress bar, install the tqdm library (e.g. pip install tqdm), and set the show_progress parameter to True.
%pip install tqdm
loader = DirectoryLoader('../', glob="**/*.md", show_progress=True)
docs = loader.load()
Requirement already satisfied: tqdm in /Users/jon/.pyenv/versions/3.9.16/envs/microbiome-app/lib/python3.9/site-packages (4.65.0)
0it [00:00, ?it/s]
Use multithreading#
By default the loading happens in one thread. To utilize several threads, set the use_multithreading flag to True.
loader = DirectoryLoader('../', glob="**/*.md", use_multithreading=True)
docs = loader.load()
Change loader class#
By default this uses the UnstructuredLoader class. However, you can easily change the type of loader.
from langchain.document_loaders import TextLoader
loader = DirectoryLoader('../', glob="**/*.md", loader_cls=TextLoader)
docs = loader.load()
len(docs)
1
If you need to load Python source code files, use the PythonLoader.
from langchain.document_loaders import PythonLoader
loader = DirectoryLoader('../../../../../', glob="**/*.py", loader_cls=PythonLoader)
docs = loader.load()
len(docs)
691
Auto detect file encodings with TextLoader#
In this example we will see some strategies that can be useful when loading a big list of arbitrary files from a directory using the TextLoader class.
First, to illustrate the problem, let's try to load multiple text files with arbitrary encodings.
path = '../../../../../tests/integration_tests/examples'
loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader)
A. Default Behavior#
loader.load()
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /data/source/langchain/langchain/document_loaders/text.py:29 in load                             │
│                                                                                                   │
│   26 │   │   text = ""                                                                           │
│   27 │   │   with open(self.file_path, encoding=self.encoding) as f:                             │
│   28 │   │   │   try:                                                                            │
│ ❱ 29 │   │   │   │   text = f.read()                                                             │
│   30 │   │   │   except UnicodeDecodeError as e:                                                 │
│   31 │   │   │   │   if self.autodetect_encoding:                                                │
│   32 │   │   │   │   │   detected_encodings = self.detect_file_encodings()                       │
│                                                                                                   │
│ /home/spike/.pyenv/versions/3.9.11/lib/python3.9/codecs.py:322 in decode                         │
│                                                                                                   │
│   319 │   def decode(self, input, final=False):                                                  │
│   320 │   │   # decode input (taking the buffer into account)                                    │
│   321 │   │   data = self.buffer + input                                                         │
│ ❱ 322 │   │   (result, consumed) = self._buffer_decode(data, self.errors, final)                 │
│   323 │   │   # keep undecoded input until the next call                                         │
│   324 │   │   self.buffer = data[consumed:]                                                      │
│   325 │   │   return result                                                                      │
╰───────────────────────────────────────────────────────────────────────────────────────────────────╯
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xca in position 0: invalid continuation byte

The above exception was the direct cause of the following exception:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:1                                                                                     │
│                                                                                                   │
│ ❱ 1 loader.load()                                                                                 │
│   2                                                                                               │
│                                                                                                   │
│ /data/source/langchain/langchain/document_loaders/directory.py:84 in load                        │
│                                                                                                   │
│   81 │   │   │   │   │   │   if self.silent_errors:                                              │
│   82 │   │   │   │   │   │   │   logger.warning(e)                                               │
│   83 │   │   │   │   │   │   else:                                                               │
│ ❱ 84 │   │   │   │   │   │   │   raise e                                                         │
│   85 │   │   │   │   │   finally:                                                                │
│   86 │   │   │   │   │   │   if pbar:                                                            │
│   87 │   │   │   │   │   │   │   pbar.update(1)                                                  │
│                                                                                                   │
│ /data/source/langchain/langchain/document_loaders/directory.py:78 in load                        │
│                                                                                                   │
│   75 │   │   │   if i.is_file():                                                                 │
│   76 │   │   │   │   if _is_visible(i.relative_to(p)) or self.load_hidden:                       │
│   77 │   │   │   │   │   try:                                                                    │
│ ❱ 78 │   │   │   │   │   │   sub_docs = self.loader_cls(str(i), **self.loader_kwargs).load()     │
│   79 │   │   │   │   │   │   docs.extend(sub_docs)                                               │
│   80 │   │   │   │   │   except Exception as e:                                                  │
│   81 │   │   │   │   │   │   if self.silent_errors:                                              │
│                                                                                                   │
│ /data/source/langchain/langchain/document_loaders/text.py:44 in load                             │
│                                                                                                   │
│   41 │   │   │   │   │   │   except UnicodeDecodeError:                                          │
│   42 │   │   │   │   │   │   │   continue                                                        │
│   43 │   │   │   │   else:                                                                       │
│ ❱ 44 │   │   │   │   │   raise RuntimeError(f"Error loading {self.file_path}") from e            │
│   45 │   │   │   except Exception as e:                                                          │
│   46 │   │   │   │   raise RuntimeError(f"Error loading {self.file_path}") from e                │
│   47                                                                                             │
╰───────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Error loading ../../../../../tests/integration_tests/examples/example-non-utf8.txt
The file example-non-utf8.txt uses a different encoding, so the load() function fails with a helpful message indicating which file failed decoding.
With the default behavior of TextLoader, any failure to load a single document fails the whole loading process, and no documents are loaded.
B. Silent fail#
We can pass the parameter silent_errors to the DirectoryLoader to skip the files which could not be loaded and continue the load process.
loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader, silent_errors=True)
docs = loader.load()
Error loading ../../../../../tests/integration_tests/examples/example-non-utf8.txt
doc_sources = [doc.metadata['source'] for doc in docs]
doc_sources
['../../../../../tests/integration_tests/examples/whatsapp_chat.txt',
'../../../../../tests/integration_tests/examples/example-utf8.txt']
C. Auto detect encodings#
We can also ask TextLoader to auto-detect the file encoding before failing, by passing autodetect_encoding to the loader class.
text_loader_kwargs={'autodetect_encoding': True}
loader = DirectoryLoader(path, glob="**/*.txt", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)
docs = loader.load()
doc_sources = [doc.metadata['source'] for doc in docs]
doc_sources
['../../../../../tests/integration_tests/examples/example-non-utf8.txt',
'../../../../../tests/integration_tests/examples/whatsapp_chat.txt',
'../../../../../tests/integration_tests/examples/example-utf8.txt']
Google Drive#
Google Drive is a file storage and synchronization service developed by Google.
This notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported.
Prerequisites#
Create a Google Cloud project or use an existing project
Enable the Google Drive API
Authorize credentials for desktop app
pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
🧑 Instructions for ingesting your Google Docs data#
By default, the GoogleDriveLoader expects the credentials.json file to be at ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. The same applies to token.json via the token_path keyword argument. Note that token.json will be created automatically the first time you use the loader.
GoogleDriveLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL:
Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5"
Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"
!pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
from langchain.document_loaders import GoogleDriveLoader
loader = GoogleDriveLoader(
folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
# Optional: configure whether to recursively fetch files from subfolders. Defaults to False.
recursive=False
)
docs = loader.load()
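A sketch of the document-ids variant described above, reusing the document id from the URL breakdown:
loader = GoogleDriveLoader(
    document_ids=["1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw"]
)
docs = loader.load()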
When you pass a folder_id, by default all files of type document, sheet and pdf are loaded. You can modify this behaviour by passing a file_types argument.
loader = GoogleDriveLoader(
folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
file_types=["document", "sheet"],
recursive=False
)
PDF#
Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.
This covers how to load PDF documents into the Document format that we use downstream.
Using PyPDF#
Load a PDF using pypdf into an array of documents, where each document contains the page content and metadata with the page number.
!pip install pypdf
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("example_data/layout-parser-paper.pdf")
pages = loader.load_and_split()
pages[0]
Document(page_content='LayoutParser : A Uni\x0ced Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1( \x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1Allen Institute for AI\[email protected]\n2Brown University\nruochen [email protected]\n3Harvard University\nfmelissadell,jacob carlson [email protected]\n4University of Washington\[email protected]\n5University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model con\x0cgurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\ne\x0borts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser , an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io .\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\n·Character Recognition ·Open Source library ·Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classi\x0ccation [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', metadata={'source': 'example_data/layout-parser-paper.pdf', 'page': 0})
An advantage of this approach is that documents can be retrieved with page numbers.
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
OpenAI API Key: ········
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())
docs = faiss_index.similarity_search("How will the community be engaged?", k=2)
for doc in docs:
print(str(doc.metadata["page"]) + ":", doc.page_content[:300])
9: 10 Z. Shen et al.
Fig. 4: Illustration of (a) the original historical Japanese document with layout
detection results and (b) a recreated version of the document image that achieves
much better character recognition recall. The reorganization algorithm rearranges
the tokens based on the their detect
3: 4 Z. Shen et al.
Efficient Data AnnotationC u s t o m i z e d M o d e l T r a i n i n gModel Cust omizationDI A Model HubDI A Pipeline SharingCommunity PlatformLa y out Detection ModelsDocument Images
T h e C o r e L a y o u t P a r s e r L i b r a r yOCR ModuleSt or age & VisualizationLa y ou
Using MathPix#
Inspired by Daniel Gross's https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21
from langchain.document_loaders import MathpixPDFLoader
loader = MathpixPDFLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
Using Unstructured#
from langchain.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
Retain Elements#
Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredPDFLoader("example_data/layout-parser-paper.pdf", mode="elements")
data = loader.load()
data[0]
Document(page_content='LayoutParser: A Uniﬁed Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\[email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n5 University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model conﬁgurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\neﬀorts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classiﬁcation [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)
Fetching remote PDFs using Unstructured#
This covers how to load online pdfs into a document format that we can use downstream. This can be used for various online pdf sites such as https://open.umn.edu/opentextbooks/textbooks/ and https://arxiv.org/archive/
Note: all other pdf loaders can also be used to fetch remote PDFs, but OnlinePDFLoader is a legacy function, and works specifically with UnstructuredPDFLoader.
from langchain.document_loaders import OnlinePDFLoader
loader = OnlinePDFLoader("https://arxiv.org/pdf/2302.03803.pdf")
data = loader.load()
print(data)
[Document(page_content='A WEAK ( k, k ) -LEFSCHETZ THEOREM FOR PROJECTIVE TORIC ORBIFOLDS\n\nWilliam D. Montoya\n\nInstituto de Matem´atica, Estat´ıstica e Computa¸c˜ao Cient´ıﬁca,\n\nIn [3] we proved that, under suitable conditions, on a very general codimension s quasi- smooth intersection subvariety X in a projective toric orbifold P d Σ with d + s = 2 ( k + 1 ) the Hodge conjecture holds, that is, every ( p, p ) -cohomology class, under the Poincar´e duality is a rational linear combination of fundamental classes of algebraic subvarieties of X . The proof of the above-mentioned result relies, for p ≠ d + 1 − s , on a Lefschetz\n\nKeywords: (1,1)- Lefschetz theorem, Hodge conjecture, toric varieties, complete intersection Email: [email protected]\n\ntheorem ([7]) and the Hard Lefschetz theorem for projective orbifolds ([11]). When p = d + 1 − s the proof relies on the Cayley trick, a trick which associates to X a quasi-smooth hypersurface Y in a projective vector bundle, and the Cayley Proposition (4.3) which gives an isomorphism of some primitive cohomologies (4.2) of X and Y . The Cayley trick, following the philosophy of Mavlyutov in [7], reduces results known for quasi-smooth hypersurfaces to quasi-smooth intersection subvarieties. The idea in this paper goes the other way around, we translate some results for quasi-smooth intersection subvarieties to\n\nAcknowledgement. I thank Prof. Ugo Bruzzo and Tiago Fonseca for useful discus- sions. I also acknowledge support from FAPESP postdoctoral grant No. 2019/23499-7.\n\nLet M be a free abelian group of rank d , let N = Hom ( M, Z ) , and N R = N ⊗ Z R .\n\nif there exist k linearly independent primitive elements e\n\n, . . . , e k ∈ N such that σ = { µ\n\ne\n\n+ ⋯ + µ k e k } . • The generators e i are integral if for every i and any nonnegative rational number µ the product µe i is in N only if µ is an integer. • Given two rational simplicial cones σ , σ ′ one says that σ ′ is a face of σ ( σ ′ < σ ) if the set of integral generators of σ ′ is a subset of the set of integral generators of σ . • A ﬁnite set Σ = { σ\n\n, . . . , σ t } of rational simplicial cones is called a rational simplicial complete d -dimensional fan if:\n\nall faces of cones in Σ are in Σ ;\n\nif σ, σ ′ ∈ Σ then σ ∩ σ ′ < σ and σ ∩ σ ′ < σ ′ ;\n\nN R = σ\n\n∪ ⋅ ⋅ ⋅ ∪ σ t .\n\nA rational simplicial complete d -dimensional fan Σ deﬁnes a d -dimensional toric variety P d Σ having only orbifold singularities which we assume to be projective. Moreover, T ∶ = N ⊗ Z C ∗ ≅ ( C ∗ ) d is the torus action on P d Σ . We denote by Σ ( i ) the i -dimensional cones\n\nFor a cone σ ∈ Σ, ˆ σ is the set of 1-dimensional cone in Σ that are not contained in σ\n\nand x ˆ σ ∶ = ∏ ρ ∈ ˆ σ x ρ is the associated monomial in S .\n\nDeﬁnition 2.2. The irrelevant ideal of P d Σ is the monomial ideal B Σ ∶ = < x ˆ σ ∣ σ ∈ Σ > and the zero locus Z ( Σ ) ∶ = V ( B Σ ) in the aﬃne space A d ∶ = Spec ( S ) is the irrelevant locus.\n\nProposition 2.3 (Theorem 5.1.11 [5]) . The toric variety P d Σ is a categorical quotient A d ∖ Z ( Σ ) by the group Hom ( Cl ( Σ ) , C ∗ ) and the group action is induced by the Cl ( Σ ) - grading of S .\n\nNow we give a brief introduction to complex orbifolds and we mention the needed theorems for the next section. Namely: de Rham theorem and Dolbeault theorem for complex orbifolds.\n\nDeﬁnition 2.4. A complex orbifold of complex dimension d is a singular complex space whose singularities are locally isomorphic to quotient singularities C d / G , for ﬁnite sub- groups G ⊂ Gl ( d, C ) .\n\nDeﬁnition 2.5. A diﬀerential form on a complex orbifold Z is deﬁned locally at z ∈ Z as a G -invariant diﬀerential form on C d where G ⊂ Gl ( d, C ) and Z is locally isomorphic to d\n\nRoughly speaking the local geometry of orbifolds reduces to local G -invariant geometry.\n\nWe have a complex of diﬀerential forms ( A ● ( Z ) , d ) and a double complex ( A ● , ● ( Z ) , ∂, ¯ ∂ ) of bigraded diﬀerential forms which deﬁne the de Rham and the Dolbeault cohomology groups (for a ﬁxed p ∈ N ) respectively:\n\n(1,1)-Lefschetz theorem for projective toric orbifolds\n\nDeﬁnition 3.1. A subvariety X ⊂ P d Σ is quasi-smooth if V ( I X ) ⊂ A #Σ ( 1 ) is smooth outside\n\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub-\n\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub- varieties are quasi-smooth subvarieties (see [2] or [7] for more details).\n\nRemark 3.3 . Quasi-smooth subvarieties are suborbifolds of P d Σ in the sense of Satake in [8]. Intuitively speaking they are subvarieties whose only singularities come from the ambient\n\nProof. From the exponential short exact sequence\n\nwe have a long exact sequence in cohomology\n\nH 1 (O ∗ X ) → H 2 ( X, Z ) → H 2 (O X ) ≅ H 0 , 2 ( X )\n\nwhere the last isomorphisms is due to Steenbrink in [9]. Now, it is enough to prove the commutativity of the next diagram\n\nwhere the last isomorphisms is due to Steenbrink in [9]. Now,\n\nH 2 ( X, Z ) / / H 2 ( X, O X ) ≅ Dolbeault H 2 ( X, C ) deRham ≅ H 2 dR ( X, C ) / / H 0 , 2 ¯ ∂ ( X )\n\nof the proof follows as the ( 1 , 1 ) -Lefschetz theorem in [6].\n\nRemark 3.5 . For k = 1 and P d Σ as the projective space, we recover the classical ( 1 , 1 ) - Lefschetz theorem.\n\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we\n\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we get an isomorphism of cohomologies :\n\ngiven by the Lefschetz morphism and since it is a morphism of Hodge structures, we have:\n\nH 1 , 1 ( X, Q ) ≅ H dim X − 1 , dim X − 1 ( X, Q )\n\nCorollary 3.6. If the dimension of X is 1 , 2 or 3 . The Hodge conjecture holds on X\n\nProof. If the dim C X = 1 the result is clear by the Hard Lefschetz theorem for projective orbifolds. The dimension 2 and 3 cases are covered by Theorem 3.5 and the Hard Lefschetz.\n\nCayley trick and Cayley proposition\n\nThe Cayley trick is a way to associate to a quasi-smooth intersection subvariety a quasi- smooth hypersurface. Let L 1 , . . . , L s be line bundles on P d Σ and let π ∶ P ( E ) → P d Σ be the projective space bundle associated to the vector bundle E = L 1 ⊕ ⋯ ⊕ L s . It is known that P ( E ) is a ( d + s − 1 ) -dimensional simplicial toric variety whose fan depends on the degrees of the line bundles and the fan Σ. Furthermore, if the Cox ring, without considering the grading, of P d Σ is C [ x 1 , . . . , x m ] then the Cox ring of P ( E ) is\n\nMoreover for X a quasi-smooth intersection subvariety cut oﬀ by f 1 , . . . , f s with deg ( f i ) = [ L i ] we relate the hypersurface Y cut oﬀ by F = y 1 f 1 + ⋅ ⋅ ⋅ + y s f s which turns out to be quasi-smooth. For more details see Section 2 in [7].\n\nWe will denote P ( E ) as P d + s − 1 Σ ,X to keep track of its relation with X and P d Σ .\n\nThe following is a key remark.\n\nRemark 4.1 . There is a morphism ι ∶ X → Y ⊂ P d + s − 1 Σ ,X . Moreover every point z ∶ = ( x, y ) ∈ Y with y ≠ 0 has a preimage. Hence for any subvariety W = V ( I W ) ⊂ X ⊂ P d Σ there exists W ′ ⊂ Y ⊂ P d + s − 1 Σ ,X such that π ( W ′ ) = W , i.e., W ′ = { z = ( x, y ) ∣ x ∈ W } .\n\nFor X ⊂ P d Σ a quasi-smooth intersection variety the morphism in cohomology induced by the inclusion i ∗ ∶ H d − s ( P d Σ , C ) → H d − s ( X, C ) is injective by Proposition 1.4 in [7].\n\nDeﬁnition 4.2. The primitive cohomology of H d − s prim ( X ) is the quotient H d − s ( X, C )/ i ∗ ( H d − s ( P d Σ , C )) and H d − s prim ( X, Q ) with rational coeﬃcients.\n\nH d − s ( P d Σ , C ) and H d − s ( X, C ) have pure Hodge structures, and the morphism i ∗ is com- patible with them, so that H d − s prim ( X ) gets a pure Hodge structure.\n\nThe next Proposition is the Cayley proposition.\n\nProposition 4.3. [Proposition 2.3 in [3] ] Let X = X 1 ∩ ⋅ ⋅ ⋅ ∩ X s be a quasi-smooth intersec- tion subvariety in P d Σ cut oﬀ by homogeneous polynomials f 1 . . . f s . Then for p ≠ d + s − 1 2 , d + s − 3 2\n\nRemark 4.5 . The above isomorphisms are also true with rational coeﬃcients since H ● ( X, C ) = H ● ( X, Q ) ⊗ Q C . See the beginning of Section 7.1 in [10] for more details.\n\nTheorem 5.1. Let Y = { F = y 1 f 1 + ⋯ + y k f k = 0 } ⊂ P 2 k + 1 Σ ,X be the quasi-smooth hypersurface associated to the quasi-smooth intersection surface X = X f 1 ∩ ⋅ ⋅ ⋅ ∩ X f k ⊂ P k + 2 Σ . Then on Y the Hodge conjecture holds.\n\nthe Hodge conjecture holds.\n\nProof. If H k,k prim ( X, Q ) = 0 we are done. So let us assume H k,k prim ( X, Q ) ≠ 0. By the Cayley proposition H k,k prim ( Y, Q ) ≅ H 1 , 1 prim ( X, Q ) and by the ( 1 , 1 ) -Lefschetz theorem for projective\n\ntoric orbifolds there is a non-zero algebraic basis λ C 1 , . . . , λ C n with rational coeﬃcients of H 1 , 1 prim ( X, Q ) , that is, there are n ∶ = h 1 , 1 prim ( X, Q ) algebraic curves C 1 , . . . , C n in X such that under the Poincar´e duality the class in homology [ C i ] goes to λ C i , [ C i ] ↦ λ C i . Recall that the Cox ring of P k + 2 is contained in the Cox ring of P 2 k + 1 Σ ,X without considering the grading. Considering the grading we have that if α ∈ Cl ( P k + 2 Σ ) then ( α, 0 ) ∈ Cl ( P 2 k + 1 Σ ,X ) . So the polynomials deﬁning C i ⊂ P k + 2 Σ can be interpreted in P 2 k + 1 X, Σ but with diﬀerent degree. Moreover, by Remark 4.1 each C i is contained in Y = { F = y 1 f 1 + ⋯ + y k f k = 0 } and\n\nfurthermore it has codimension k .\n\nClaim: { C i } ni = 1 is a basis of prim ( ) . It is enough to prove that λ C i is diﬀerent from zero in H k,k prim ( Y, Q ) or equivalently that the cohomology classes { λ C i } ni = 1 do not come from the ambient space. By contradiction, let us assume that there exists a j and C ⊂ P 2 k + 1 Σ ,X such that λ C ∈ H k,k ( P 2 k + 1 Σ ,X , Q ) with i ∗ ( λ C ) = λ C j or in terms of homology there exists a ( k + 2 ) -dimensional algebraic subvariety V ⊂ P 2 k + 1 Σ ,X such that V ∩ Y = C j so they are equal as a homology class of P 2 k + 1 Σ ,X ,i.e., [ V ∩ Y ] = [ C j ] . It is easy to check that π ( V ) ∩ X = C j as a subvariety of P k + 2 Σ where π ∶ ( x, y ) ↦ x . Hence [ π ( V ) ∩ X ] = [ C j ] which is equivalent to say that λ C j comes from P k + 2 Σ which contradicts the choice of [ C j ] .\n\nRemark 5.2 . Into the proof of the previous theorem, the key fact was that on X the Hodge conjecture holds and we translate it to Y by contradiction. So, using an analogous argument we have:\n\nargument we have:\n\nProposition 5.3. Let Y = { F = y 1 f s +⋯+ y s f s = 0 } ⊂ P 2 k + 1 Σ ,X be the quasi-smooth hypersurface associated to a quasi-smooth intersection subvariety X = X f 1 ∩ ⋅ ⋅ ⋅ ∩ X f s ⊂ P d Σ such that d + s = 2 ( k + 1 ) . If the Hodge conjecture holds on X then it holds as well on Y .\n\nCorollary 5.4. If the dimension of Y is 2 s − 1 , 2 s or 2 s + 1 then the Hodge conjecture holds on Y .\n\nProof. By Proposition 5.3 and Corollary 3.6.\n\n[\n\n] Angella, D. Cohomologies of certain orbifolds. Journal of Geometry and Physics\n\n(\n\n),\n\n–\n\n[\n\n] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal\n\n,\n\n(Aug\n\n). [\n\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\n\n). [\n\n] Caramello Jr, F. C. Introduction to orbifolds. a\n\niv:\n\nv\n\n(\n\n). [\n\n] Cox, D., Little, J., and Schenck, H. Toric varieties, vol.\n\nAmerican Math- ematical Soc.,\n\n[\n\n] Griffiths, P., and Harris, J. Principles of Algebraic Geometry. John Wiley & Sons, Ltd,\n\n[\n\n] Mavlyutov, A. R. Cohomology of complete intersections in toric varieties. Pub- lished in Paciﬁc J. of Math.\n\nNo.\n\n(\n\n),\n\n–\n\n[\n\n] Satake, I. On a Generalization of the Notion of Manifold. Proceedings of the National Academy of Sciences of the United States of America\n\n,\n\n(\n\n),\n\n–\n\n[\n\n] Steenbrink, J. H. M. Intersection form for quasi-homogeneous singularities. Com- positio Mathematica\n\n,\n\n(\n\n),\n\n–\n\n[\n\n] Voisin, C. Hodge Theory and Complex Algebraic Geometry I, vol.\n\nof Cambridge Studies in Advanced Mathematics . Cambridge University Press,\n\n[\n\n] Wang, Z. Z., and Zaffran, D. A remark on the Hard Lefschetz theorem for K¨ahler orbifolds. Proceedings of the American Mathematical Society\n\n,\n\n(Aug\n\n).\n\n[2] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal 75, 2 (Aug 1994).\n\n[\n\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\n\n).\n\n[3] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (2021).\n\nA. R. Cohomology of complete intersections in toric varieties. Pub-', lookup_str='', metadata={'source': '/var/folders/ph/hhm7_zyx4l13k3v8z02dwp1w0000gn/T/tmpgq0ckaja/online_file.pdf'}, lookup_index=0)]
Using PyPDFium2#
from langchain.document_loaders import PyPDFium2Loader
loader = PyPDFium2Loader("example_data/layout-parser-paper.pdf")
data = loader.load()
Using PDFMiner#
from langchain.document_loaders import PDFMinerLoader
loader = PDFMinerLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
Using PDFMiner to generate HTML text#
This can be helpful for chunking texts semantically into sections as the output html content can be parsed via BeautifulSoup to get more structured and rich information about font size, page numbers, pdf headers/footers, etc.
from langchain.document_loaders import PDFMinerPDFasHTMLLoader
loader = PDFMinerPDFasHTMLLoader("example_data/layout-parser-paper.pdf")
data = loader.load()[0] # entire pdf is loaded as a single Document
from bs4 import BeautifulSoup
soup = BeautifulSoup(data.page_content,'html.parser')
content = soup.find_all('div')
import re
cur_fs = None
cur_text = ''
snippets = [] # first collect all snippets that have the same font size
for c in content:
sp = c.find('span')
if not sp:
continue
st = sp.get('style')
if not st:
continue
    fs = re.findall(r'font-size:(\d+)px', st)
if not fs:
continue
fs = int(fs[0])
if not cur_fs:
cur_fs = fs
if fs == cur_fs:
cur_text += c.text
else:
snippets.append((cur_text,cur_fs))
cur_fs = fs
cur_text = c.text
snippets.append((cur_text,cur_fs))
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-31
|
# Note: The above logic is very straightforward. One can also add more strategies, such as removing duplicate snippets (as
# headers/footers in a PDF appear on multiple pages, so if we find duplicates it is safe to assume they are redundant info)
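# A hedged sketch of that dedup strategy (commented out; assumption: a snippet whose
# text repeats verbatim across pages is a header/footer rather than real content):
# from collections import Counter
# counts = Counter(text for text, _ in snippets)
# snippets = [(text, fs) for text, fs in snippets if counts[text] == 1]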
from langchain.docstore.document import Document
cur_idx = -1
semantic_snippets = []
# Assumption: headings have higher font size than their respective content
for s in snippets:
# if current snippet's font size > previous section's heading => it is a new heading
if not semantic_snippets or s[1] > semantic_snippets[cur_idx].metadata['heading_font']:
metadata={'heading':s[0], 'content_font': 0, 'heading_font': s[1]}
metadata.update(data.metadata)
semantic_snippets.append(Document(page_content='',metadata=metadata))
cur_idx += 1
continue
# if current snippet's font size <= previous section's content => content belongs to the same section (one can also create
# a tree-like structure for sub-sections if needed, but that may require some more thought and may be data-specific)
if not semantic_snippets[cur_idx].metadata['content_font'] or s[1] <= semantic_snippets[cur_idx].metadata['content_font']:
semantic_snippets[cur_idx].page_content += s[0]
semantic_snippets[cur_idx].metadata['content_font'] = max(s[1], semantic_snippets[cur_idx].metadata['content_font'])
continue
# if current snippet's font size > previous section's content but less than the previous section's heading, then also make a new
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-32
|
# section (e.g. title of a pdf will have the highest font size but we don't want it to subsume all sections)
metadata={'heading':s[0], 'content_font': 0, 'heading_font': s[1]}
metadata.update(data.metadata)
semantic_snippets.append(Document(page_content='',metadata=metadata))
cur_idx += 1
semantic_snippets[4]
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-33
|
Document(page_content='Recently, various DL models and datasets have been developed for layout analysis\ntasks. The dhSegment [22] utilizes fully convolutional networks [20] for segmen-\ntation tasks on historical documents. Object detection-based methods like Faster\nR-CNN [28] and Mask R-CNN [12] are used for identifying document elements [38]\nand detecting tables [30, 26]. Most recently, Graph Neural Networks [29] have also\nbeen used in table detection [27]. However, these models are usually implemented\nindividually and there is no unified framework to load and use such models.\nThere has been a surge of interest in creating open-source tools for document\nimage processing: a search of document image analysis in Github leads to 5M\nrelevant code pieces 6; yet most of them rely on traditional rule-based methods\nor provide limited functionalities. The closest prior research to our work is the\nOCR-D project7, which also tries to build a complete toolkit for DIA. However,\nsimilar to the platform developed by Neudecker et al. [21], it is
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-34
|
by Neudecker et al. [21], it is designed for\nanalyzing historical documents, and provides no supports for recent DL models.\nThe DocumentLayoutAnalysis project8 focuses on processing born-digital PDF\ndocuments via analyzing the stored PDF data. Repositories like DeepLayout9\nand Detectron2-PubLayNet10 are individual deep learning models trained on\nlayout analysis datasets without support for the full DIA pipeline. The Document\nAnalysis and Exploitation (DAE) platform [15] and the DeepDIVA project [2]\naim to improve the reproducibility of DIA methods (or DL models), yet they\nare not actively maintained. OCR engines like Tesseract [14], easyOCR11 and\npaddleOCR12 usually do not come with comprehensive functionalities for other\nDIA tasks like layout analysis.\nRecent years have also seen numerous efforts to create libraries for promoting\nreproducibility and reusability in the field of DL. Libraries like Dectectron2 [35],\n6 The number shown is obtained by specifying the search type as “code”.\n7 https://ocr-d.de/en/about\n8
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-35
|
type as “code”.\n7 https://ocr-d.de/en/about\n8 https://github.com/BobLd/DocumentLayoutAnalysis\n9 https://github.com/leonlulu/DeepLayout\n10 https://github.com/hpanwar08/detectron2\n11 https://github.com/JaidedAI/EasyOCR\n12 https://github.com/PaddlePaddle/PaddleOCR\n4\nZ. Shen et al.\nFig. 1: The overall architecture of LayoutParser. For an input document image,\nthe core LayoutParser library provides a set of off-the-shelf tools for layout\ndetection, OCR, visualization, and storage, backed by a carefully designed layout\ndata structure. LayoutParser also supports high level customization via efficient\nlayout annotation and model training functions. These improve model accuracy\non the target samples. The community platform enables the easy sharing of DIA\nmodels and whole digitization pipelines to promote reusability and reproducibility.\nA collection of detailed documentation, tutorials and exemplar projects make\nLayoutParser easy to learn and use.\nAllenNLP [8] and transformers [34] have provided the community with complete\nDL-based support for developing and deploying models for general computer\nvision and natural language
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-36
|
and deploying models for general computer\nvision and natural language processing problems. LayoutParser, on the other\nhand, specializes specifically in DIA tasks. LayoutParser is also equipped with a\ncommunity platform inspired by established model hubs such as Torch Hub [23]\nand TensorFlow Hub [1]. It enables the sharing of pretrained models as well as\nfull document processing pipelines that are unique to DIA tasks.\nThere have been a variety of document data collections to facilitate the\ndevelopment of DL models. Some examples include PRImA [3](magazine layouts),\nPubLayNet [38](academic paper layouts), Table Bank [18](tables in academic\npapers), Newspaper Navigator Dataset [16, 17](newspaper figure layouts) and\nHJDataset [31](historical Japanese document layouts). A spectrum of models\ntrained on these datasets are currently available in the LayoutParser model zoo\nto support different use cases.\n', metadata={'heading': '2 Related Work\n', 'content_font': 9, 'heading_font': 11, 'source': 'example_data/layout-parser-paper.pdf'})
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-37
|
Using PyMuPDF#
This is the fastest of the PDF parsing options; it provides detailed metadata about the PDF and its pages, and returns one document per page.
from langchain.document_loaders import PyMuPDFLoader
loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
data[0]
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-38
|
Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\[email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n5 University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-39
|
processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-40
|
Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-41
|
Additionally, you can pass along any of the options from the PyMuPDF documentation as keyword arguments in the load call, and they will be passed along to the get_text() call.
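For example, a minimal sketch (assuming your installed PyMuPDF version supports the sort option of get_text(), which orders text blocks roughly by position on the page):
loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")
data = loader.load(sort=True) # 'sort' is forwarded to PyMuPDF's get_text()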
PyPDF Directory#
Load PDFs from a directory.
from langchain.document_loaders import PyPDFDirectoryLoader
loader = PyPDFDirectoryLoader("example_data/")
docs = loader.load()
Using pdfplumber#
Like PyMuPDF, this loader returns detailed metadata about the PDF and its pages, with one document per page.
from langchain.document_loaders import PDFPlumberLoader
loader = PDFPlumberLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
data[0]
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-42
|
Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\n1202 [email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\nnuJ {melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n12 5 University of Waterloo\[email protected]\n]VC.sc[\nAbstract. Recentadvancesindocumentimageanalysis(DIA)havebeen\nprimarily driven by the application of neural networks. Ideally, research\noutcomescouldbeeasilydeployedinproductionandextendedforfurther\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\n2v84351.3012:viXra portantinnovationsbyawideaudience.Thoughtherehavebeenon-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopmentindisciplineslikenaturallanguageprocessingandcomputer\nvision, none of them are optimized for challenges in the domain of
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-43
|
of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademicresearchacross awiderangeof disciplinesinthesocialsciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitiveinterfacesforapplyingandcustomizingDLmodelsforlayoutde-\ntection,characterrecognition,andmanyotherdocumentprocessingtasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: DocumentImageAnalysisยทDeepLearningยทLayoutAnalysis\nยท Character Recognition ยท Open Source library ยท Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocumentimageanalysis(DIA)tasksincludingdocumentimageclassification[11,', metadata={'source': 'example_data/layout-parser-paper.pdf', 'file_path':
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
9a4de40c5baa-44
|
metadata={'source': 'example_data/layout-parser-paper.pdf', 'file_path': 'example_data/layout-parser-paper.pdf', 'page': 1, 'total_pages': 16, 'Author': '', 'CreationDate': 'D:20210622012710Z', 'Creator': 'LaTeX with hyperref', 'Keywords': '', 'ModDate': 'D:20210622012710Z', 'PTEX.Fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live 2020) kpathsea version 6.3.2', 'Producer': 'pdfTeX-1.40.21', 'Subject': '', 'Title': '', 'Trapped': 'False'})
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html
|
ffe7f7f1d242-0
|
Telegram#
Telegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.
This notebook covers how to load data from Telegram into a format that can be ingested into LangChain.
from langchain.document_loaders import TelegramChatFileLoader, TelegramChatApiLoader
loader = TelegramChatFileLoader("example_data/telegram.json")
loader.load()
[Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace รฐลธยงยค รฐลธ\x8dโ on 2020-01-01T00:00:05: You're a minute late!\n\n", metadata={'source': 'example_data/telegram.json'})]
TelegramChatApiLoader loads data directly from any specified chat from Telegram. In order to export the data, you will need to authenticate your Telegram account.
You can get the API_HASH and API_ID from https://my.telegram.org/auth?to=apps
chat_entity – recommended to be the entity of a channel.
loader = TelegramChatApiLoader(
chat_entity="<CHAT_URL>", # recommended to use Entity here
api_hash="<API_HASH>",
api_id="<API_ID>",
user_name="", # needed only for caching the session.
)
loader.load()
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/telegram.html
|
650f5069bf93-0
|
Microsoft Word#
Microsoft Word is a word processor developed by Microsoft.
This covers how to load Word documents into a document format that we can use downstream.
Using Docx2txt#
Load .docx using Docx2txt into a document.
from langchain.document_loaders import Docx2txtLoader
loader = Docx2txtLoader("example_data/fake.docx")
data = loader.load()
data
[Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})]
Using Unstructured#
from langchain.document_loaders import UnstructuredWordDocumentLoader
loader = UnstructuredWordDocumentLoader("example_data/fake.docx")
data = loader.load()
data
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx'}, lookup_index=0)]
Retain Elements#
Under the hood, Unstructured creates different "elements" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements")
data = loader.load()
data[0]
Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx', 'filename': 'fake.docx', 'category': 'Title'}, lookup_index=0)
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_word.html
|
d7cb45f485d4-0
|
DuckDB#
DuckDB is an in-process SQL OLAP database management system.
Load a DuckDB query with one document per row.
#!pip install duckdb
from langchain.document_loaders import DuckDBLoader
%%file example.csv
Team,Payroll
Nationals,81.34
Reds,82.20
Writing example.csv
loader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')")
data = loader.load()
print(data)
[Document(page_content='Team: Nationals\nPayroll: 81.34', metadata={}), Document(page_content='Team: Reds\nPayroll: 82.2', metadata={})]
Specifying Which Columns are Content vs Metadata#
loader = DuckDBLoader(
"SELECT * FROM read_csv_auto('example.csv')",
page_content_columns=["Team"],
metadata_columns=["Payroll"]
)
data = loader.load()
print(data)
[Document(page_content='Team: Nationals', metadata={'Payroll': 81.34}), Document(page_content='Team: Reds', metadata={'Payroll': 82.2})]
Adding Source to Metadata#
loader = DuckDBLoader(
"SELECT Team, Payroll, Team As source FROM read_csv_auto('example.csv')",
metadata_columns=["source"]
)
data = loader.load()
print(data)
[Document(page_content='Team: Nationals\nPayroll: 81.34\nsource: Nationals', metadata={'source': 'Nationals'}), Document(page_content='Team: Reds\nPayroll: 82.2\nsource: Reds', metadata={'source': 'Reds'})]
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/duckdb.html
|
87ea76178620-0
|
AWS S3 Directory#
Amazon Simple Storage Service (Amazon S3) is an object storage service.
This covers how to load document objects from an AWS S3 Directory object.
#!pip install boto3
from langchain.document_loaders import S3DirectoryLoader
loader = S3DirectoryLoader("testing-hwc")
loader.load()
Specifying a prefix#
You can also specify a prefix for more fine-grained control over which files to load.
loader = S3DirectoryLoader("testing-hwc", prefix="fake")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/aws_s3_directory.html
|
b8359a59eeab-0
|
Obsidian#
Obsidian is a powerful and extensible knowledge base
that works on top of your local folder of plain text files.
This notebook covers how to load documents from an Obsidian database.
Since Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory.
Obsidian files also sometimes contain metadata, which is a YAML block at the top of the file. These values will be added to the document's metadata. (ObsidianLoader can also be passed a collect_metadata=False argument to disable this behavior.)
from langchain.document_loaders import ObsidianLoader
loader = ObsidianLoader("<path-to-obsidian>")
docs = loader.load()
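For illustration, a hypothetical note (the file name and YAML keys below are assumptions, not part of the loader):
# Suppose <path-to-obsidian>/example_note.md starts with a YAML block:
# ---
# tags: project
# author: alice
# ---
# The note body follows here...
docs[0].metadata # expected to include the YAML keys, e.g. {'tags': 'project', 'author': 'alice', ...}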
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/obsidian.html
|
ec4e6e7b422a-0
|
ReadTheDocs Documentation#
Read the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.
This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build.
For an example of this in the wild, see here.
This assumes that the HTML has already been scraped into a folder. This can be done by uncommenting and running the following command
#!pip install beautifulsoup4
#!wget -r -A.html -P rtdocs https://langchain.readthedocs.io/en/latest/
from langchain.document_loaders import ReadTheDocsLoader
loader = ReadTheDocsLoader("rtdocs", features='html.parser')
docs = loader.load()
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/readthedocs_documentation.html
|
00d76196ec52-0
|
TOML#
TOML is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for "Tom's Obvious, Minimal Language", referring to its creator, Tom Preston-Werner.
If you need to load Toml files, use the TomlLoader.
from langchain.document_loaders import TomlLoader
loader = TomlLoader('example_data/fake_rule.toml')
rule = loader.load()
rule
[Document(page_content='{"internal": {"creation_date": "2023-05-01", "updated_date": "2022-05-01", "release": ["release_type"], "min_endpoint_version": "some_semantic_version", "os_list": ["operating_system_list"]}, "rule": {"uuid": "some_uuid", "name": "Fake Rule Name", "description": "Fake description of rule", "query": "process where process.name : \\"somequery\\"\\n", "threat": [{"framework": "MITRE ATT&CK", "tactic": {"name": "Execution", "id": "TA0002", "reference": "https://attack.mitre.org/tactics/TA0002/"}}]}}', metadata={'source': 'example_data/fake_rule.toml'})]
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/toml.html
|
768952470a51-0
|
Blackboard#
Blackboard Learn (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and the development of completely online courses with few or no face-to-face meetings.
This covers how to load data from a Blackboard Learn instance.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browser's developer tools.
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
load_all_recursively=True,
)
documents = loader.load()
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blackboard.html
|
7b26fc666a52-0
|
Docugami#
This notebook covers how to load documents from Docugami. See here for more details, and the advantages of using this system over alternative data loaders.
Prerequisites#
Follow the Quick Start section in this document
Grab an access token for your workspace, and make sure it is set as the DOCUGAMI_API_KEY environment variable
Grab some docset and document IDs for your processed documents, as described here: https://help.docugami.com/home/docugami-api
# You need the lxml package to use the DocugamiLoader
!poetry run pip -q install lxml
import os
from langchain.document_loaders import DocugamiLoader
Load Documents#
If the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it in to the loader explicitly; otherwise, you can pass it in as the access_token parameter.
DOCUGAMI_API_KEY=os.environ.get('DOCUGAMI_API_KEY')
# To load all docs in the given docset ID, just don't provide document_ids
loader = DocugamiLoader(docset_id="ecxqpipcoe2p", document_ids=["43rj0ds7s0ur"])
docs = loader.load()
docs
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-1
|
[Document(page_content='MUTUAL NON-DISCLOSURE AGREEMENT This Mutual Non-Disclosure Agreement (this โ Agreement โ) is entered into and made effective as of April 4 , 2018 between Docugami Inc. , a Delaware corporation , whose address is 150 Lake Street South , Suite 221 , Kirkland , Washington 98033 , and Caleb Divine , an individual, whose address is 1201 Rt 300 , Newburgh NY 12550 .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:ThisMutualNon-disclosureAgreement', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'ThisMutualNon-disclosureAgreement'}),
Document(page_content='The above named parties desire to engage in discussions regarding a potential agreement or other transaction between the parties (the โPurposeโ). In connection with such discussions, it may be necessary for the parties to disclose to each other certain confidential information or materials to enable them to evaluate whether to enter into such agreement or transaction.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Discussions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Discussions'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-2
|
Document(page_content='In consideration of the foregoing, the parties agree as follows:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Consideration', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Consideration'}),
Document(page_content='1. Confidential Information . For purposes of this Agreement , โ Confidential Information โ means any information or materials disclosed by one party to the other party that: (i) if disclosed in writing or in the form of tangible materials, is marked โconfidentialโ or โproprietaryโ at the time of such disclosure; (ii) if disclosed orally or by visual presentation, is identified as โconfidentialโ or โproprietaryโ at the time of such disclosure, and is summarized in a writing sent by the disclosing party to the receiving party within thirty ( 30 ) days after any such disclosure; or (iii) due to its nature or the circumstances of its disclosure, a person exercising reasonable business judgment would understand to be confidential or proprietary.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Purposes/docset:ConfidentialInformation-section/docset:ConfidentialInformation[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ConfidentialInformation'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-3
|
Document(page_content="2. Obligations and Restrictions . Each party agrees: (i) to maintain the other party's Confidential Information in strict confidence; (ii) not to disclose such Confidential Information to any third party; and (iii) not to use such Confidential Information for any purpose except for the Purpose. Each party may disclose the other partyโs Confidential Information to its employees and consultants who have a bona fide need to know such Confidential Information for the Purpose, but solely to the extent necessary to pursue the Purpose and for no other purpose; provided, that each such employee and consultant first executes a written agreement (or is otherwise already bound by a written agreement) that contains use and nondisclosure restrictions at least as protective of the other partyโs Confidential Information as those set forth in this Agreement .", metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Obligations/docset:ObligationsAndRestrictions-section/docset:ObligationsAndRestrictions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ObligationsAndRestrictions'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-4
|
Document(page_content='3. Exceptions. The obligations and restrictions in Section 2 will not apply to any information or materials that:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Exceptions/docset:Exceptions-section/docset:Exceptions[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Exceptions'}),
Document(page_content='(i) were, at the date of disclosure, or have subsequently become, generally known or available to the public through no act or failure to act by the receiving party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheDate/docset:TheDate', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheDate'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-5
|
Document(page_content='(ii) were rightfully known by the receiving party prior to receiving such information or materials from the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:SuchInformation/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}),
Document(page_content='(iii) are rightfully acquired by the receiving party from a third party who has the right to disclose such information or materials without breach of any confidentiality obligation to the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheReceivingParty/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-6
|
Document(page_content='4. Compelled Disclosure . Nothing in this Agreement will be deemed to restrict a party from disclosing the other partyโs Confidential Information to the extent required by any order, subpoena, law, statute or regulation; provided, that the party required to make such a disclosure uses reasonable efforts to give the other party reasonable advance notice of such required disclosure in order to enable the other party to prevent or limit such disclosure.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Disclosure/docset:CompelledDisclosure-section/docset:CompelledDisclosure', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'CompelledDisclosure'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-7
|
Document(page_content='5. Return of Confidential Information . Upon the completion or abandonment of the Purpose, and in any event upon the disclosing partyโs request, the receiving party will promptly return to the disclosing party all tangible items and embodiments containing or consisting of the disclosing partyโs Confidential Information and all copies thereof (including electronic copies), and any notes, analyses, compilations, studies, interpretations, memoranda or other documents (regardless of the form thereof) prepared by or on behalf of the receiving party that contain or are based upon the disclosing partyโs Confidential Information .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheCompletion/docset:ReturnofConfidentialInformation-section/docset:ReturnofConfidentialInformation', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ReturnofConfidentialInformation'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-8
|
Document(page_content='6. No Obligations . Each party retains the right to determine whether to disclose any Confidential Information to the other party.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoObligations/docset:NoObligations-section/docset:NoObligations[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoObligations'}),
Document(page_content='7. No Warranty. ALL CONFIDENTIAL INFORMATION IS PROVIDED BY THE DISCLOSING PARTY โAS IS โ.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoWarranty/docset:NoWarranty-section/docset:NoWarranty[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoWarranty'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-9
|
Document(page_content='8. Term. This Agreement will remain in effect for a period of seven ( 7 ) years from the date of last disclosure of Confidential Information by either party, at which time it will terminate.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:ThisAgreement/docset:Term-section/docset:Term', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Term'}),
Document(page_content='9. Equitable Relief . Each party acknowledges that the unauthorized use or disclosure of the disclosing partyโs Confidential Information may cause the disclosing party to incur irreparable harm and significant damages, the degree of which may be difficult to ascertain. Accordingly, each party agrees that the disclosing party will have the right to seek immediate equitable relief to enjoin any unauthorized use or disclosure of its Confidential Information , in addition to any other rights and remedies that it may have at law or otherwise.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:EquitableRelief/docset:EquitableRelief-section/docset:EquitableRelief[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'EquitableRelief'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-10
|
Document(page_content='10. Non-compete. To the maximum extent permitted by applicable law, during the Term of this Agreement and for a period of one ( 1 ) year thereafter, Caleb Divine may not market software products or do business that directly or indirectly competes with Docugami software products .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheMaximumExtent/docset:Non-compete-section/docset:Non-compete', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Non-compete'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-11
|
Document(page_content='11. Miscellaneous. This Agreement will be governed and construed in accordance with the laws of the State of Washington , excluding its body of law controlling conflict of laws. This Agreement is the complete and exclusive understanding and agreement between the parties regarding the subject matter of this Agreement and supersedes all prior agreements, understandings and communications, oral or written, between the parties regarding the subject matter of this Agreement . If any provision of this Agreement is held invalid or unenforceable by a court of competent jurisdiction, that provision of this Agreement will be enforced to the maximum extent permissible and the other provisions of this Agreement will remain in full force and effect. Neither party may assign this Agreement , in whole or in part, by operation of law or otherwise, without the other partyโs prior written consent, and any attempted assignment without such consent will be void. This Agreement may be executed in counterparts, each of which will be deemed an original, but all of which together will constitute one and the same instrument.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Accordance/docset:Miscellaneous-section/docset:Miscellaneous', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Miscellaneous'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-12
|
Document(page_content='[SIGNATURE PAGE FOLLOWS] IN WITNESS WHEREOF, the parties hereto have executed this Mutual Non-Disclosure Agreement by their duly authorized officers or representatives as of the date first set forth above.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:TheParties', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheParties'}),
Document(page_content='DOCUGAMI INC . : \n\n Caleb Divine : \n\n Signature: Signature: Name: \n\n Jean Paoli Name: Title: \n\n CEO Title:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:DocugamiInc/docset:DocugamiInc/xhtml:table', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': '', 'tag': 'table'})]
The metadata for each Document (really, a chunk of an actual PDF, DOC or DOCX) contains some useful additional information:
id and name: ID and Name of the file (PDF, DOC or DOCX) the chunk is sourced from within Docugami.
xpath: XPath inside the XML representation of the document, for the chunk. Useful for source citations directly to the actual chunk inside the document XML.
structure: Structural attributes of the chunk, e.g. h1, h2, div, table, td, etc. Useful to filter out certain kinds of chunks if needed by the caller.
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-13
|
tag: Semantic tag for the chunk, using various generative and extractive techniques. More details here: https://github.com/docugami/DFM-benchmarks
Basic Use: Docugami Loader for Document QA#
You can use the Docugami Loader like a standard loader for Document QA over multiple docs, albeit with much better chunks that follow the natural contours of the document. There are many great tutorials on how to do this, e.g. this one. We can just use the same code, but use the DocugamiLoader for better chunking, instead of loading text or PDF files directly with basic splitting techniques.
!poetry run pip -q install openai tiktoken chromadb
from langchain.schema import Document
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
# For this example, we already have a processed docset for a set of lease documents
loader = DocugamiLoader(docset_id="wh2kned25uqm")
documents = loader.load()
The documents returned by the loader are already split, so we don't need to use a text splitter. Optionally, we can use the metadata on each document, for example the structure or tag attributes, to do any post-processing we want.
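For example, a minimal sketch of such post-processing (a purely illustrative filtering choice, using the structure values described earlier):
# e.g. drop table chunks and keep everything else; the QA chain below still uses documents as-is
table_free = [d for d in documents if d.metadata.get('structure') != 'table']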
We will just use the output of the DocugamiLoader as-is to set up a retrieval QA chain the usual way.
embedding = OpenAIEmbeddings()
vectordb = Chroma.from_documents(documents=documents, embedding=embedding)
retriever = vectordb.as_retriever()
qa_chain = RetrievalQA.from_chain_type(
llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True
)
Using embedded DuckDB without persistence: data will be transient
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-14
|
# Try out the retriever with an example query
qa_chain("What can tenants do with signage on their properties?")
{'query': 'What can tenants do with signage on their properties?',
'result': ' Tenants may place signs (digital or otherwise) or other form of identification on the premises after receiving written permission from the landlord which shall not be unreasonably withheld. The tenant is responsible for any damage caused to the premises and must conform to any applicable laws, ordinances, etc. governing the same. The tenant must also remove and clean any window or glass identification promptly upon vacating the premises.',
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-15
|
'source_documents': [Document(page_content='ARTICLE VI SIGNAGE 6.01 Signage . Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant โs erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant โs expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises.', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:Article/docset:ARTICLEVISIGNAGE-section/docset:_601Signage-section/docset:_601Signage', 'id': 'v1bvgaozfkak', 'name': 'TruTone Lane 2.docx', 'structure': 'div', 'tag': '_601Signage', 'Landlord': 'BUBBA CENTER PARTNERSHIP', 'Tenant': 'Truetone Lane LLC'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-16
|
Document(page_content='Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant โs erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant โs expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. \n\n ARTICLE VII UTILITIES 7.01', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOFFICELEASEAGREEMENTThis/docset:ArticleIBasic/docset:ArticleIiiUseAndCareOf/docset:ARTICLEIIIUSEANDCAREOFPREMISES-section/docset:ARTICLEIIIUSEANDCAREOFPREMISES/docset:NoOtherPurposes/docset:TenantsResponsibility/dg:chunk', 'id': 'g2fvhekmltza', 'name': 'TruTone Lane 6.pdf', 'structure': 'lim', 'tag': 'chunk', 'Landlord': 'GLORY ROAD LLC', 'Tenant': 'Truetone Lane LLC'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|
7b26fc666a52-17
|
Document(page_content='Landlord , its agents, servants, employees, licensees, invitees, and contractors during the last year of the term of this Lease at any and all times during regular business hours, after 24 hour notice to tenant, to pass and repass on and through the Premises, or such portion thereof as may be necessary, in order that they or any of them may gain access to the Premises for the purpose of showing the Premises to potential new tenants or real estate brokers. In addition, Landlord shall be entitled to place a "FOR RENT " or "FOR LEASE" sign (not exceeding 8.5 โ x 11 โ) in the front window of the Premises during the last six months of the term of this Lease .', metadata={'xpath': '/docset:Rider/docset:RIDERTOLEASE-section/docset:RIDERTOLEASE/docset:FixedRent/docset:TermYearPeriod/docset:Lease/docset:_42FLandlordSAccess-section/docset:_42FLandlordSAccess/docset:LandlordsRights/docset:Landlord', 'id': 'omvs4mysdk6b', 'name': 'TruTone Lane 1.docx', 'structure': 'p', 'tag': 'Landlord', 'Landlord': 'BIRCH STREET , LLC', 'Tenant': 'Trutone Lane LLC'}),
|
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
|