API References#
Full documentation on all methods, classes, and APIs in LangChain.
Models
Prompts
Indexes
Memory
Chains
Agents
Utilities
Experimental Modules
Welcome to LangChain#
LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be:
Data-aware: connect a language model to other sources of data
Agentic: allow a language model to interact with its environment
The LangChain framework is designed around these principles.
This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here.
Getting Started#
How to get started using LangChain to create a Language Model application.
Quickstart Guide
Concepts and terminology.
Concepts and terminology
Tutorials created by community experts and presented on YouTube.
Tutorials
Modules#
These modules are the core abstractions which we view as the building blocks of any LLM-powered application.
For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use.
The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.
The modules are (from least to most complex):
Models: Supported model types and integrations.
Prompts: Prompt management, optimization, and serialization.
Memory: Memory refers to state that is persisted between calls of a chain/agent.
Indexes: Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data.
Chains: Chains are structured sequences of calls (to an LLM or to a different utility).
Agents: An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes it, and observes the outcome until the high-level directive is complete.
Callbacks: Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application.
Use Cases#
Best practices and built-in implementations for common LangChain use cases:
Autonomous Agents: Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.
Agent Simulations: Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.
Personal Assistants: One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
Question Answering: Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
Chatbots: Language models love to chat, making this a very natural use of them.
Querying Tabular Data: Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc).
Code Understanding: Recommended reading if you want to use language models to analyze code.
Interacting with APIs: Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions.
Extraction: Extract structured information from text.
Summarization: Compressing longer documents. A type of Data-Augmented Generation.
Evaluation: Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation.
Reference Docs#
Full documentation on all methods, classes, installation methods, and integration setups for LangChain.
LangChain Installation
Reference Documentation
Ecosystem#
LangChain integrates many different LLMs, systems, and products; in turn, many systems and products depend on LangChain.
Together they create a vibrant and thriving ecosystem.
Integrations: Guides for how other products can be used with LangChain.
Dependents: List of repositories that use LangChain.
Deployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
Additional Resources#
Additional resources we think may be useful as you develop your application!
LangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents.
Gallery: A collection of great projects that use LangChain, compiled by the folks at Kyrolabs. Useful for finding inspiration and example implementations.
Deploying LLMs in Production: A collection of best practices and tutorials for deploying LLMs in production.
Tracing: A guide on using tracing in LangChain to visualize the execution of chains and agents.
Model Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
Discord: Join us on our Discord to discuss all things LangChain!
YouTube: A collection of the LangChain tutorials and videos.
Production Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.
Dependents#
Dependents stats for hwchase17/langchain
[update: 2023-06-05; only dependent repositories with Stars > 100]
| Repository | Stars |
|---|---|
| openai/openai-cookbook | 38024 |
| LAION-AI/Open-Assistant | 33609 |
| microsoft/TaskMatrix | 33136 |
| hpcaitech/ColossalAI | 30032 |
| imartinez/privateGPT | 28094 |
| reworkd/AgentGPT | 23430 |
| openai/chatgpt-retrieval-plugin | 17942 |
| jerryjliu/llama_index | 16697 |
| mindsdb/mindsdb | 16410 |
| mlflow/mlflow | 14517 |
| GaiZhenbiao/ChuanhuChatGPT | 10793 |
| databrickslabs/dolly | 10155 |
| openai/evals | 10076 |
| AIGC-Audio/AudioGPT | 8619 |
| logspace-ai/langflow | 8211 |
| imClumsyPanda/langchain-ChatGLM | 8154 |
| PromtEngineer/localGPT | 6853 |
| StanGirard/quivr | 6830 |
| PipedreamHQ/pipedream | 6520 |
| go-skynet/LocalAI | 6018 |
| arc53/DocsGPT | 5643 |
| e2b-dev/e2b | 5075 |
| langgenius/dify | 4281 |
| nsarrazin/serge | 4228 |
| zauberzeug/nicegui | 4084 |
| madawei2699/myGPTReader | 4039 |
| wenda-LLM/wenda | 3871 |
| GreyDGL/PentestGPT | 3837 |
| zilliztech/GPTCache | 3625 |
| csunny/DB-GPT | 3545 |
| gkamradt/langchain-tutorials | 3404 |
| mmabrouk/chatgpt-wrapper | 3303 |
| postgresml/postgresml | 3052 |
| marqo-ai/marqo | 3014 |
| MineDojo/Voyager | 2945 |
| PrefectHQ/marvin | 2761 |
| project-baize/baize-chatbot | 2673 |
| hwchase17/chat-langchain | 2589 |
| whitead/paper-qa | 2572 |
| Azure-Samples/azure-search-openai-demo | 2366 |
| GerevAI/gerev | 2330 |
| OpenGVLab/InternGPT | 2289 |
| ParisNeo/gpt4all-ui | 2159 |
| OpenBMB/BMTools | 2158 |
| guangzhengli/ChatFiles | 2005 |
| h2oai/h2ogpt | 1939 |
| Farama-Foundation/PettingZoo | 1845 |
| OpenGVLab/Ask-Anything | 1749 |
| IntelligenzaArtificiale/Free-Auto-GPT | 1740 |
| Unstructured-IO/unstructured | 1628 |
| hwchase17/notion-qa | 1607 |
| NVIDIA/NeMo-Guardrails | 1544 |
| SamurAIGPT/privateGPT | 1543 |
| paulpierre/RasaGPT | 1526 |
| yanqiangmiffy/Chinese-LangChain | 1485 |
| Kav-K/GPTDiscord | 1402 |
| vocodedev/vocode-python | 1387 |
| Chainlit/chainlit | 1336 |
| lunasec-io/lunasec | 1323 |
| psychic-api/psychic | 1248 |
| agiresearch/OpenAGI | 1208 |
| jina-ai/thinkgpt | 1193 |
| thomas-yanxin/LangChain-ChatGLM-Webui | 1182 |
| ttengwang/Caption-Anything | 1137 |
| jina-ai/dev-gpt | 1135 |
| greshake/llm-security | 1086 |
| keephq/keep | 1063 |
| juncongmoo/chatllama | 1037 |
| richardyc/Chrome-GPT | 1035 |
| visual-openllm/visual-openllm | 997 |
| mmz-001/knowledge_gpt | 995 |
| jina-ai/langchain-serve | 949 |
| irgolic/AutoPR | 936 |
| microsoft/X-Decoder | 908 |
| poe-platform/api-bot-tutorial | 902 |
| peterw/Chat-with-Github-Repo | 875 |
| cirediatpl/FigmaChain | 822 |
| homanp/superagent | 806 |
| seanpixel/Teenage-AGI | 800 |
| chatarena/chatarena | 796 |
| hashintel/hash | 795 |
| SamurAIGPT/Camel-AutoGPT | 786 |
| rlancemartin/auto-evaluator | 770 |
| corca-ai/EVAL | 769 |
| 101dotxyz/GPTeam | 755 |
| noahshinn024/reflexion | 706 |
| eyurtsev/kor | 695 |
| cheshire-cat-ai/core | 681 |
| e-johnstonn/BriefGPT | 656 |
| run-llama/llama-lab | 635 |
| griptape-ai/griptape | 583 |
| namuan/dr-doc-search | 555 |
| getmetal/motorhead | 550 |
| kreneskyp/ix | 543 |
| hwchase17/chat-your-data | 510 |
| Anil-matcha/ChatPDF | 501 |
| whyiyhw/chatgpt-wechat | 497 |
| SamurAIGPT/ChatGPT-Developer-Plugins | 496 |
| microsoft/PodcastCopilot | 492 |
| debanjum/khoj | 485 |
| akshata29/chatpdf | 485 |
| langchain-ai/langchain-aiplugin | 462 |
| jina-ai/agentchain | 460 |
| alexanderatallah/window.ai | 457 |
| yeagerai/yeagerai-agent | 451 |
| mckaywrigley/repo-chat | 446 |
| michaelthwan/searchGPT | 446 |
| mpaepper/content-chatbot | 441 |
| freddyaboulton/gradio-tools | 439 |
| ruoccofabrizio/azure-open-ai-embeddings-qna | 429 |
| StevenGrove/GPT4Tools | 422 |
| jonra1993/fastapi-alembic-sqlmodel-async | 407 |
| msoedov/langcorn | 405 |
| amosjyng/langchain-visualizer | 395 |
| ajndkr/lanarky | 384 |
| mtenenholtz/chat-twitter | 376 |
| steamship-core/steamship-langchain | 371 |
| langchain-ai/auto-evaluator | 365 |
| xuwenhao/geektime-ai-course | 358 |
| continuum-llms/chatgpt-memory | 357 |
| opentensor/bittensor | 347 |
| showlab/VLog | 345 |
| daodao97/chatdoc | 345 |
| logan-markewich/llama_index_starter_pack | 332 |
| poe-platform/poe-protocol | 320 |
| explosion/spacy-llm | 312 |
| andylokandy/gpt-4-search | 311 |
| alejandro-ao/langchain-ask-pdf | 310 |
| jupyterlab/jupyter-ai | 294 |
| BlackHC/llm-strategy | 283 |
| itamargol/openai | 281 |
| momegas/megabots | 279 |
| personoids/personoids-lite | 277 |
| yvann-hub/Robby-chatbot | 267 |
| Anil-matcha/Website-to-Chatbot | 266 |
| Cheems-Seminar/grounded-segment-any-parts | 260 |
| sullivan-sean/chat-langchainjs | 248 |
| bborn/howdoi.ai | 245 |
| daveebbelaar/langchain-experiments | 240 |
| MagnivOrg/prompt-layer-library | 237 |
| ur-whitelab/exmol | 234 |
| conceptofmind/toolformer | 234 |
| recalign/RecAlign | 226 |
| OpenBMB/AgentVerse | 220 |
| alvarosevilla95/autolang | 219 |
| JohnSnowLabs/nlptest | 216 |
| kaleido-lab/dolphin | 215 |
| truera/trulens | 208 |
| NimbleBoxAI/ChainFury | 208 |
| airobotlab/KoChatGPT | 207 |
| monarch-initiative/ontogpt | 200 |
| paolorechia/learn-langchain | 195 |
| shaman-ai/agent-actors | 185 |
| Haste171/langchain-chatbot | 184 |
| plchld/InsightFlow | 182 |
| su77ungr/CASALIOY | 180 |
| jbrukh/gpt-jargon | 177 |
| benthecoder/ClassGPT | 174 |
| billxbf/ReWOO | 170 |
| filip-michalsky/SalesGPT | 168 |
| hwchase17/langchain-streamlit-template | 168 |
| radi-cho/datasetGPT | 164 |
| hardbyte/qabot | 164 |
| gia-guar/JARVIS-ChatGPT | 158 |
| plastic-labs/tutor-gpt | 154 |
| yasyf/compress-gpt | 154 |
| fengyuli-dev/multimedia-gpt | 154 |
| ethanyanjiali/minChatGPT | 153 |
| hwchase17/chroma-langchain | 153 |
| edreisMD/plugnplai | 148 |
| chakkaradeep/pyCodeAGI | 145 |
| ccurme/yolopandas | 145 |
| shamspias/customizable-gpt-chatbot | 144 |
| realminchoi/babyagi-ui | 143 |
| PradipNichite/Youtube-Tutorials | 140 |
| gustavz/DataChad | 140 |
| Klingefjord/chatgpt-telegram | 140 |
| Jaseci-Labs/jaseci | 139 |
| handrew/browserpilot | 137 |
| jmpaz/promptlib | 137 |
| SamPink/dev-gpt | 135 |
| menloparklab/langchain-cohere-qdrant-doc-retrieval | 135 |
| hirokidaichi/wanna | 135 |
| steamship-core/vercel-examples | 134 |
| pablomarin/GPT-Azure-Search-Engine | 133 |
| ibiscp/LLM-IMDB | 133 |
| shauryr/S2QA | 133 |
| jerlendds/osintbuddy | 132 |
| yuanjie-ai/ChatLLM | 132 |
| yasyf/summ | 132 |
| WongSaang/chatgpt-ui-server | 130 |
| peterw/StoryStorm | 127 |
| Teahouse-Studios/akari-bot | 126 |
| vaibkumr/prompt-optimizer | 125 |
| preset-io/promptimize | 124 |
| homanp/vercel-langchain | 124 |
| petehunt/langchain-github-bot | 123 |
| eunomia-bpf/GPTtrace | 118 |
| nicknochnack/LangchainDocuments | 116 |
| jiran214/GPT-vup | 112 |
| rsaryev/talk-codebase | 112 |
| zenml-io/zenml-projects | 112 |
| microsoft/azure-openai-in-a-day-workshop | 112 |
| davila7/file-gpt | 112 |
| prof-frink-lab/slangchain | 111 |
| aurelio-labs/arxiv-bot | 110 |
| fixie-ai/fixie-examples | 108 |
| miaoshouai/miaoshouai-assistant | 105 |
| flurb18/AgentOoba | 103 |
| solana-labs/chatgpt-plugin | 102 |
| Significant-Gravitas/Auto-GPT-Benchmarks | 102 |
| kaarthik108/snowChat | 100 |
Generated by github-dependents-info
github-dependents-info --repo hwchase17/langchain --markdownfile dependents.md --minstars 100 --sort stars
Indexes#
Note: see the Conceptual Guide.
Indexes refer to ways to structure documents so that LLMs can best interact with them.
The most common way that indexes are used in chains is in a “retrieval” step.
This step refers to taking a user’s query and returning the most relevant documents.
We draw this distinction because (1) an index can be used for other things besides retrieval, and
(2) retrieval can use other logic besides an index to find relevant documents.
We therefore have a concept of a Retriever interface - this is the interface that most chains work with.
Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving
unstructured data (like text documents).
For interacting with structured data (SQL tables, etc) or APIs, please see the corresponding use case
sections for links to relevant functionality.
Getting Started: An overview of indexes.
Index Types#
Document Loaders: How to load documents from a variety of sources.
Text Splitters: An overview of the different types of Text Splitters.
VectorStores: An overview of the different types of Vector Stores.
Retrievers: An overview of the different types of Retrievers.
Prompts#
Note: see the Conceptual Guide.
The new way of programming models is through prompts.
A prompt refers to the input to the model.
This input is often constructed from multiple components.
A PromptTemplate is responsible for the construction of this input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
Getting Started: An overview of the prompts.
LLM Prompt Templates: How to use PromptTemplates to prompt Language Models.
Chat Prompt Templates: How to use PromptTemplates to prompt Chat Models.
Example Selectors: It is often useful to include examples in prompts, and these examples can be dynamically selected. This section goes over example selection.
Output Parsers: Language models (and Chat Models) output text. But many times you may want to get more structured information than just text back. Output parsers instruct the model how its output should be formatted, and parse the output into the desired format (including retrying if necessary).
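For example, here is a minimal sketch of defining and formatting a prompt with PromptTemplate (the template text is illustrative):
from langchain.prompts import PromptTemplate

template = "Tell me a {adjective} joke about {content}."
prompt = PromptTemplate(input_variables=["adjective", "content"], template=template)

prompt.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."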
Models#
Note: see the Conceptual Guide.
This section of the documentation deals with different types of models that are used in LangChain.
On this page we will go over the model types at a high level,
but we have individual pages for each model type.
The pages contain more detailed “how-to” guides for working with that model,
as well as a list of different model providers.
Getting Started: An overview of the models.
Model Types#
LLMs: Large Language Models (LLMs) take a text string as input and return a text string as output.
Chat Models: Chat Models are usually backed by a language model, but their APIs are more structured.
Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
Text Embedding Models: Text embedding models take text as input and return a list of floats.
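As a minimal sketch of the distinction (assuming the OpenAI integrations are installed and OPENAI_API_KEY is set):
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import HumanMessage

llm = OpenAI()
llm("Say hello in French")  # text string in -> text string out

chat_model = ChatOpenAI()
chat_model([HumanMessage(content="Say hello in French")])  # messages in -> message out

embeddings = OpenAIEmbeddings()
embeddings.embed_query("hello")  # text in -> list of floats out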
Memory#
Note: see the Conceptual Guide.
By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently (as are the underlying LLMs and chat models).
In some applications (chatbots being a GREAT example) it is highly important
to remember previous interactions, both at a short-term and at a long-term level.
The Memory does exactly that.
LangChain provides memory components in two forms.
First, LangChain provides helper utilities for managing and manipulating previous chat messages.
These are designed to be modular and useful regardless of how they are used.
Secondly, LangChain provides easy ways to incorporate these utilities into chains.
Getting Started: An overview of different types of memory.
How-To Guides: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.
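A minimal sketch of the second form, assuming the OpenAI LLM integration is configured:
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(llm=OpenAI(), memory=ConversationBufferMemory())
conversation.predict(input="Hi, my name is Bob.")
conversation.predict(input="What is my name?")  # the buffered history supplies the earlier turn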
Agents#
Note: see the Conceptual Guide.
Some applications require not just a predetermined chain of calls to LLMs/other tools,
but potentially an unknown chain that depends on the user’s input.
In these types of chains, there is an agent which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call.
At the moment, there are two main types of agents:
Action Agents: these agents decide which actions to take and execute those actions one at a time.
Plan-and-Execute Agents: these agents first decide a plan of actions to take, and then execute those actions one at a time.
When should you use each one? Action Agents are more conventional, and good for small tasks.
For more complex or long running tasks, the initial planning step helps to maintain long term objectives and focus.
However, that comes at the expense of generally more calls and higher latency.
These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge
of the execution for the Plan and Execute agent.
Action Agents#
High level pseudocode of the Action Agents:
The user input is received
The agent decides which tool - if any - to use, and what the tool input should be
That tool is then called with the tool input, and an observation is recorded (the output of this calling)
That history of tool, tool input, and observation is passed back into the agent, and it decides the next step
This is repeated until the agent decides it no longer needs to use a tool, and then it responds directly to the user.
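A minimal sketch of this loop, assuming an OpenAI key is configured (the llm-math tool gives the agent a calculator):
from langchain.agents import initialize_agent, load_tools, AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is 7 raised to the 0.43 power?")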
The different abstractions involved in agents are:
Agent: this is where the logic of the application lives. Agents expose an interface that takes in user input
along with a list of previous steps the agent has taken, and returns either an AgentAction or AgentFinish
AgentAction corresponds to the tool to use and the input to that tool
AgentFinish means the agent is done, and has information around what to return to the user
Tools: these are the actions an agent can take. The tools you give an agent depend heavily on what you want the agent to do.
Toolkits: these are groups of tools designed for a specific use case. For example, in order for an agent to
interact with a SQL database in the best way it may need access to one tool to execute queries and another tool to inspect tables.
Agent Executor: this wraps an agent and a list of tools. This is responsible for the loop of running the agent
iteratively until the stopping criteria are met.
Getting Started: An overview of agents. It covers how to use all things related to agents in an end-to-end manner.
Agent Construction:
Although an agent can be constructed in many ways, the typical way to construct an agent is with:
PromptTemplate: this is responsible for taking the user input and previous steps and constructing a prompt
to send to the language model
Language Model: this takes the prompt constructed by the PromptTemplate and returns some output
Output Parser: this takes the output of the Language Model and parses it into an AgentAction or AgentFinish object.
Additional Documentation:
Tools: Different types of tools LangChain supports natively. We also cover how to add your own tools.
Agents: Different types of agents LangChain supports natively. We also cover how to
modify and create your own agents.
Toolkits: Various toolkits that LangChain supports out of the box, and how to
create an agent from them.
Agent Executor: The Agent Executor class, which is responsible for calling
the agent and tools in a loop. We go over different ways to customize this, and options you can use for more control.
Plan-and-Execute Agents#
High level pseudocode of the Plan-and-Execute Agents:
The user input is received
The planner lists out the steps to take
The executor goes through the list of steps, executing them
The most typical implementation is to have the planner be a language model, and the executor be an action agent.
Plan-and-Execute Agents
Chains#
Note: see the Conceptual Guide.
Using an LLM in isolation is fine for some simple applications,
but more complex applications require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for Chains, as well as several common implementations of chains.
Getting Started: An overview of chains.
How-To Guides: How-to guides about various types of chains.
Reference: API reference documentation for all Chain classes.
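For example, a minimal sketch of the most common implementation, LLMChain, which combines a PromptTemplate with an LLM (assumes an OpenAI key is set):
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
chain.run("colorful socks")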
Document Loaders#
Note: see the Conceptual Guide.
Combining language models with your own text data is a powerful way to differentiate them.
The first step in doing this is to load the data into “Documents” - a fancy way of saying some pieces of text.
The document loader is aimed at making this easy.
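As a minimal sketch, all loaders share the same load() interface (the file path here is illustrative, and the unstructured package is assumed to be installed):
from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("example_data/report.pdf")  # path is illustrative
docs = loader.load()
docs[0].page_content[:100]  # each Document carries text plus metadata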
The following document loaders are provided:
Transform loaders#
These transform loaders transform data from a specific format into the Document format.
For example, there are transformers for CSV and SQL.
Mostly, these loaders input data from files, but sometimes from URLs.
A primary driver of a lot of these transformers is the Unstructured python package.
This package transforms many types of files - text, powerpoint, images, html, pdf, etc - into text data.
For detailed instructions on how to get set up with Unstructured, see installation guidelines here.
OpenAIWhisperParser
CoNLL-U
Copy Paste
CSV
Email
EPub
EverNote
Microsoft Excel
Facebook Chat
File Directory
HTML
Images
Jupyter Notebook
JSON
Markdown
Microsoft PowerPoint
Microsoft Word
Open Document Format (ODT)
Pandas DataFrame
PDF
Sitemap
Subtitle
Telegram
TOML
Unstructured File
URL
Selenium URL Loader
Playwright URL Loader
WebBaseLoader
Weather
WhatsApp Chat
Public dataset or service loaders#
These datasets and sources are in the public domain, and we use queries to search them
and download the necessary documents.
For example, the Hacker News service.
We don’t need any access permissions for these datasets and services.
Arxiv
AZLyrics
BiliBili
College Confidential
Gutenberg
Hacker News
HuggingFace dataset
iFixit
IMSDb
MediaWikiDump
Wikipedia
YouTube transcripts
Proprietary dataset or service loaders#
These datasets and services are not in the public domain.
These loaders mostly transform data from specific formats of applications or cloud services,
for example Google Drive.
We need access tokens and sometimes other parameters to get access to these datasets and services.
Airbyte JSON
Apify Dataset
AWS S3 Directory
AWS S3 File
Azure Blob Storage Container
Azure Blob Storage File
Blackboard
Blockchain
ChatGPT Data
Confluence
Examples
Diffbot
Docugami
DuckDB
Figma
GitBook
Git
Google BigQuery
Google Cloud Storage Directory
Google Cloud Storage File
Google Drive
Image captions
Iugu
Joplin
Microsoft OneDrive
Modern Treasury
Notion DB 2/2
Notion DB 1/2
Obsidian
Psychic
PySpark DataFrame Loader
ReadTheDocs Documentation
Reddit
Roam
Slack
Spreedly
Stripe
2Markdown
Twitter
Vectorstores#
Note: see the Conceptual Guide.
Vectorstores are one of the most important components of building indexes.
For an introduction to vectorstores and generic functionality see:
Getting Started
We also have documentation for all the types of vectorstores that are supported.
Please see below for that list.
AnalyticDB
Annoy
Atlas
Chroma
ClickHouse Vector Search
Deep Lake
DocArrayHnswSearch
DocArrayInMemorySearch
ElasticSearch
ElasticVectorSearch class
ElasticKnnSearch Class
FAISS
LanceDB
MatchingEngine
Milvus
MyScale
OpenSearch
PGVector
Pinecone
Qdrant
Redis
SKLearnVectorStore
Supabase (Postgres)
Tair
Tigris
Typesense
Vectara
Weaviate
Persistence
Retriever options
Zilliz
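All of these expose largely the same interface; as a minimal sketch using FAISS (assumes the faiss-cpu package and an OpenAI key for the embeddings):
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

db = FAISS.from_texts(["LangChain supports many vectorstores"], OpenAIEmbeddings())
db.similarity_search("Which vectorstores does LangChain support?")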
Text Splitters#
Note: see the Conceptual Guide.
When you want to deal with long pieces of text, it is necessary to split up that text into chunks.
As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What “semantically related” means could depend on the type of text.
This notebook showcases several ways to do that.
At a high level, text splitters work as follows:
Split the text up into small, semantically meaningful chunks (often sentences).
Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter:
How the text is split
How the chunk size is measured
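For example, a minimal sketch of the recursive splitter, customizing both axes (the input string is illustrative):
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,       # the maximum chunk size
    chunk_overlap=20,     # overlap kept between chunks for context
    length_function=len,  # how the chunk size is measured
)
docs = text_splitter.create_documents([some_long_text])  # some_long_text is any long string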
For an introduction to the default text splitter and generic functionality see:
Getting Started
Usage examples for the text splitters:
Character
Code (including HTML, Markdown, Latex, Python, etc)
NLTK
Recursive Character
spaCy
tiktoken (OpenAI)
Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters.
In order to get a more accurate estimate, we can use tokenizers to count the number of tokens in the text.
We use this number inside the TextSplitter classes.
This is implemented as the from_<tokenizer> class methods of the TextSplitter classes:
Hugging Face tokenizer
tiktoken (OpenAI) tokenizer
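A sketch of token-based splitting (assumes the tiktoken package is installed):
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
chunks = text_splitter.split_text(some_long_text)  # chunk size is now measured in tokens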
Getting Started#
LangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it’s worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:
from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document
class BaseRetriever(ABC):
@abstractmethod
def get_relevant_documents(self, query: str) -> List[Document]:
"""Get texts relevant for a query.
Args:
query: string to find relevant texts for
Returns:
List of relevant documents
"""
It’s that simple! The get_relevant_documents method can be implemented however you see fit.
Of course, we also help construct what we think useful Retrievers are. The main type of Retriever that we focus on is a Vectorstore retriever. We will focus on that for the rest of this guide.
In order to understand what a vectorstore retriever is, it’s important to understand what a Vectorstore is. So let’s look at that.
By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we’ll first need to install chromadb.
pip install chromadb
This example showcases question answering over documents.
We have chosen this as the example for getting started because it nicely combines a lot of different elements (Text splitters, embeddings, vectorstores) and then also shows how to use them in a chain.
Question answering over documents consists of four steps:
Create an index
Create a Retriever from that index
Create a question answering chain
Ask questions!
Each of the steps has multiple sub-steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, and then break down what is actually going on.
First, let’s import some common classes we’ll use no matter what.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
Next in the generic setup, let’s specify the document loader we want to use. You can download the state_of_the_union.txt file here
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt', encoding='utf8')
One Line Index Creation#
To get started as quickly as possible, we can use the VectorstoreIndexCreator.
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide.
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
{'question': 'What did the president say about Ketanji Brown Jackson',
'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n",
'sources': '../state_of_the_union.txt'}
What is returned from the VectorstoreIndexCreator is VectorStoreIndexWrapper, which provides these nice query and query_with_sources functionality. If we just wanted to access the vectorstore directly, we can also do that.
index.vectorstore
<langchain.vectorstores.chroma.Chroma at 0x119aa5940>
If we then want to access the VectorstoreRetriever, we can do that with:
index.vectorstore.as_retriever()
VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={})
Walkthrough#
Okay, so what’s actually going on? How is this index getting created?
A lot of the magic is hidden inside this VectorstoreIndexCreator. What is it doing?
There are three main steps going on after the documents are loaded:
Splitting documents into chunks
Creating embeddings for each document
Storing documents and embeddings in a vectorstore
Let’s walk through this in code
documents = loader.load()
Next, we will split the documents into chunks.
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
We will then select which embeddings we want to use.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
We now create the vectorstore to use as the index.
from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
So that’s creating the index. Then, we expose this index in a retriever interface.
retriever = db.as_retriever()
Then, as before, we create a chain and use it to answer questions!
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
" The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans."
VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below:
index_creator = VectorstoreIndexCreator(
vectorstore_cls=Chroma,
embedding=OpenAIEmbeddings(),
text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
)
Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it’s important to have a simple way to create indexes, we also think it’s important to understand what’s going on under the hood.
Retrievers#
Note: see the Conceptual Guide.
The retriever interface is a generic interface that makes it easy to combine documents with
language models. This interface exposes a get_relevant_documents method which takes in a query
(a string) and returns a list of documents.
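A minimal sketch, reusing the vectorstore db built in the Indexes Getting Started guide:
retriever = db.as_retriever()
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson?")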
Please see below for a list of all the retrievers supported.
Arxiv
Azure Cognitive Search
ChatGPT Plugin
Self-querying with Chroma
Cohere Reranker
Contextual Compression
Stringing compressors and document transformers together
Databerry
ElasticSearch BM25
kNN
Metal
Pinecone Hybrid Search
PubMed Retriever
Self-querying with Qdrant
Self-querying
SVM
TF-IDF
Time Weighted VectorStore
VectorStore
Vespa
Weaviate Hybrid Search
Self-querying with Weaviate
Wikipedia
Zep
Wikipedia#
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
This notebook shows how to load wiki pages from wikipedia.org into the Document format that we use downstream.
Installation#
First, you need to install the wikipedia Python package.
#!pip install wikipedia
Examples#
WikipediaLoader has these arguments:
query: free text which is used to find documents in Wikipedia
optional lang: default="en". Use it to search in a specific language part of Wikipedia
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.
optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), title, Summary. If True, other fields are also downloaded.
from langchain.document_loaders import WikipediaLoader
docs = WikipediaLoader(query='HUNTER X HUNTER', load_max_docs=2).load()
len(docs)
docs[0].metadata # meta-information of the Document
docs[0].page_content[:400] # a content of the Document
Roam#
Roam is a note-taking tool for networked thought, designed to create a personal knowledge base.
This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here.
🧑 Instructions for ingesting your own dataset#
Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export.
When exporting, make sure to select the Markdown & CSV format option.
This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.
Run the following command to unzip the zip file (replace the Export... with your own file name as needed).
unzip Roam-Export-1675782732639.zip -d Roam_DB
from langchain.document_loaders import RoamLoader
loader = RoamLoader("Roam_DB")
docs = loader.load()
URL#
This covers how to load HTML documents from a list of URLs into a document format that we can use downstream.
from langchain.document_loaders import UnstructuredURLLoader
urls = [
"https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
"https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023"
]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
Selenium URL Loader#
This covers how to load HTML documents from a list of URLs using the SeleniumURLLoader.
Using selenium allows us to load pages that require JavaScript to render.
Setup#
To use the SeleniumURLLoader, you will need to install selenium and unstructured.
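For example (a sketch mirroring the Playwright setup below; you may also need a compatible browser driver such as chromedriver on your PATH):
# Install selenium and unstructured
!pip install "selenium"
!pip install "unstructured"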
from langchain.document_loaders import SeleniumURLLoader
urls = [
"https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"https://goo.gl/maps/NDSHwePEyaHMFGwh8"
]
loader = SeleniumURLLoader(urls=urls)
data = loader.load()
Playwright URL Loader#
This covers how to load HTML documents from a list of URLs using the PlaywrightURLLoader.
As in the Selenium case, Playwright allows us to load pages that need JavaScript to render.
Setup#
To use the PlaywrightURLLoader, you will need to install playwright and unstructured. Additionally, you will need to install the Playwright Chromium browser:
# Install playwright
!pip install "playwright"
!pip install "unstructured"
!playwright install
from langchain.document_loaders import PlaywrightURLLoader
urls = [
"https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"https://goo.gl/maps/NDSHwePEyaHMFGwh8"
]
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])
data = loader.load()
Azure Blob Storage Container#
Azure Blob Storage is Microsoft’s object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn’t adhere to a particular data model or definition, such as text or binary data.
Azure Blob Storage is designed for:
Serving images or documents directly to a browser.
Storing files for distributed access.
Streaming video and audio.
Writing to log files.
Storing data for backup and restore, disaster recovery, and archiving.
Storing data for analysis by an on-premises or Azure-hosted service.
This notebook covers how to load document objects from a container on Azure Blob Storage.
#!pip install azure-storage-blob
from langchain.document_loaders import AzureBlobStorageContainerLoader
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]
Specifying a prefix#
You can also specify a prefix for more fine-grained control over which files to load.
loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>", prefix="<prefix>")
loader.load()
[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
Modern Treasury#
Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
Connect to banks and payment systems
Track transactions and balances in real-time
Automate payment operations for scale
This notebook covers how to load data from the Modern Treasury REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
import os
from langchain.document_loaders import ModernTreasuryLoader
from langchain.indexes import VectorstoreIndexCreator
The Modern Treasury API requires an organization ID and API key, which can be found in the Modern Treasury dashboard within developer settings.
This document loader also requires a resource option which defines what data you want to load.
The following resources are available:
payment_orders Documentation
expected_payments Documentation
returns Documentation
incoming_payment_details Documentation
counterparties Documentation
internal_accounts Documentation
external_accounts Documentation
transactions Documentation
ledgers Documentation
ledger_accounts Documentation
ledger_transactions Documentation
events Documentation
invoices Documentation
modern_treasury_loader = ModernTreasuryLoader("payment_orders")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([modern_treasury_loader])
modern_treasury_doc_retriever = index.vectorstore.as_retriever()
Email#
This notebook shows how to load email (.eml) or Microsoft Outlook (.msg) files.
Using Unstructured#
#!pip install unstructured
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader('example_data/fake-email.eml')
data = loader.load()
data
[Document(page_content='This is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': 'example_data/fake-email.eml'})]
Retain Elements#
Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredEmailLoader('example_data/fake-email.eml', mode="elements")
data = loader.load()
data[0]
Document(page_content='This is a test email to use for unit tests.', lookup_str='', metadata={'source': 'example_data/fake-email.eml'}, lookup_index=0)
Using OutlookMessageLoader#
#!pip install extract_msg
from langchain.document_loaders import OutlookMessageLoader
loader = OutlookMessageLoader('example_data/fake-email.msg')
data = loader.load()
data[0]
Document(page_content='This is a test email to experiment with the MS Outlook MSG Extractor\r\n\r\n\r\n-- \r\n\r\n\r\nKind regards\r\n\r\n\r\n\r\n\r\nBrian Zhou\r\n\r\n', metadata={'subject': 'Test for TIF files', 'sender': 'Brian Zhou <[email protected]>', 'date': 'Mon, 18 Nov 2013 16:26:24 +0800'})
Twitter#
Twitter is an online social media and social networking service.
This loader fetches the text from the Tweets of a list of Twitter users, using the tweepy Python package.
You must initialize the loader with your Twitter API token, and you need to pass in the Twitter usernames you want to extract.
from langchain.document_loaders import TwitterTweetLoader
#!pip install tweepy
loader = TwitterTweetLoader.from_bearer_token(
oauth2_bearer_token="YOUR BEARER TOKEN",
twitter_users=['elonmusk'],
number_tweets=50, # Default value is 100
)
# Or load from access token and consumer keys
# loader = TwitterTweetLoader.from_secrets(
# access_token='YOUR ACCESS TOKEN',
# access_token_secret='YOUR ACCESS TOKEN SECRET',
# consumer_key='YOUR CONSUMER KEY',
# consumer_secret='YOUR CONSUMER SECRET',
# twitter_users=['elonmusk'],
# number_tweets=50,
# )
documents = loader.load()
documents[:5]
[Document(page_content='@MrAndyNgo @REI One store after another shutting down', metadata={'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),
Document(page_content='@KanekoaTheGreat @joshrogin @glennbeck Large ships are fundamentally vulnerable to ballistic (hypersonic) missiles', metadata={'created_at': 'Tue Apr 18 03:43:25 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),
Document(page_content='@KanekoaTheGreat The Golden Rule', metadata={'created_at': 'Tue Apr 18 03:37:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333',
3d96e4351700-9 | 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html |
3d96e4351700-10 | Document(page_content='@KanekoaTheGreat 🧐', metadata={'created_at': 'Tue Apr 18 03:35:48 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html |
3d96e4351700-11 | 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html |
3d96e4351700-12 | 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html |
3d96e4351700-13 | Document(page_content='@TRHLofficial What’s he talking about and why is it sponsored by Erik’s son?', metadata={'created_at': 'Tue Apr 18 03:32:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô 🏳️\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html |
3d96e4351700-14 | 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html |
3d96e4351700-15 | 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}})] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/twitter.html |
Contents
Retain Elements
Define a Partitioning Strategy
PDF Example
Unstructured API
Unstructured File#
This notebook covers how to use the Unstructured package to load files of many types. Unstructured currently supports loading text files, PowerPoint presentations, HTML, PDFs, images, and more.
# # Install package
!pip install "unstructured[local-inference]"
!pip install "layoutparser[layoutmodels,tesseract]"
# # Install other dependencies
# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst
# !brew install libmagic
# !brew install poppler
# !brew install tesseract
# # If parsing xml / html documents:
# !brew install libxml2
# !brew install libxslt
# import nltk
# nltk.download('punkt')
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt")
docs = loader.load()
docs[0].page_content[:400]
'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit'
Retain Elements#
Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt", mode="elements")
docs = loader.load()
docs[:5] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html |
[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]
Define a Partitioning Strategy#
The Unstructured document loader allows users to pass in a strategy parameter that lets unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi-res partitioning strategies are more accurate, but take longer to process; fast strategies partition the document more quickly at the cost of accuracy. Not all document types have separate hi-res and fast partitioning strategies; for those document types, the strategy kwarg is ignored. In some cases, the hi-res strategy will fall back to fast if a dependency is missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.
from langchain.document_loaders import UnstructuredFileLoader | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html |
5f3bfe78158c-2 | from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements")
docs = loader.load()
docs[:5]
[Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)]
PDF Example#
Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements.
!wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../"
loader = UnstructuredFileLoader("./example_data/layout-parser-paper.pdf", mode="elements")
docs = loader.load()
docs[:5] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html |
[Document(page_content='LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
Document(page_content='Allen Institute for AI [email protected]', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
Document(page_content='Brown University ruochen [email protected]', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]
Unstructured API#
If you want to get up and running with less setup, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API. Note that currently (as of 11 May 2023) the Unstructured API is open, but it will soon require an API key. The Unstructured documentation page will have instructions on how to generate an API key once it’s available. Check out the instructions here if you’d like to self-host the Unstructured API or run it locally.
from langchain.document_loaders import UnstructuredAPIFileLoader
filenames = ["example_data/fake.docx", "example_data/fake-email.eml"]
loader = UnstructuredAPIFileLoader( | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html |
file_path=filenames[0],
api_key="FAKE_API_KEY",
)
docs = loader.load()
docs[0]
Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})
You can also batch multiple files through the Unstructured API in a single API call using UnstructuredAPIFileLoader.
loader = UnstructuredAPIFileLoader(
file_path=filenames,
api_key="FAKE_API_KEY",
)
docs = loader.load()
docs[0]
Document(page_content='Lorem ipsum dolor sit amet.\n\nThis is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']})
Contents
Overview
Load NFTs into Document Loader
Option 1: Ethereum Mainnet (default BlockchainType)
Option 2: Polygon Mainnet
Blockchain#
Overview#
The intention of this notebook is to provide a means of testing the functionality of the LangChain blockchain document loader.
Initially this Loader supports:
Loading NFTs as Documents from NFT Smart Contracts (ERC721 and ERC1155)
Ethereum Mainnet, Ethereum Testnet, Polygon Mainnet, Polygon Testnet (default is eth-mainnet)
Alchemy’s getNFTsForCollection API
It can be extended if the community finds value in this loader. Specifically:
Additional APIs can be added (e.g. transaction-related APIs)
This Document Loader Requires:
A free Alchemy API Key
The output takes the following format:
page_content = an individual NFT
metadata = {'source': '0x1a92f7381b9f03921564a437210bb9396471050c', 'blockchain': 'eth-mainnet', 'tokenId': '0x15'}
Load NFTs into Document Loader#
# get ALCHEMY_API_KEY from https://www.alchemy.com/
alchemyApiKey = "..."
Option 1: Ethereum Mainnet (default BlockchainType)#
from langchain.document_loaders.blockchain import BlockchainDocumentLoader, BlockchainType
contractAddress = "0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d" # Bored Ape Yacht Club contract address
blockchainType = BlockchainType.ETH_MAINNET #default value, optional parameter
blockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress,
api_key=alchemyApiKey)
nfts = blockchainLoader.load()
nfts[:2] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blockchain.html |
Option 2: Polygon Mainnet#
contractAddress = "0x448676ffCd0aDf2D85C1f0565e8dde6924A9A7D9" # Polygon Mainnet contract address
blockchainType = BlockchainType.POLYGON_MAINNET
blockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress,
blockchainType=blockchainType,
api_key=alchemyApiKey)
nfts = blockchainLoader.load()
nfts[:2]
Pandas DataFrame#
This notebook goes over how to load data from a pandas DataFrame.
#!pip install pandas
import pandas as pd
df = pd.read_csv('example_data/mlb_teams_2012.csv')
df.head()
        Team  "Payroll (millions)"  "Wins"
0  Nationals                 81.34      98
1       Reds                 82.20      97
2    Yankees                197.96      95
3     Giants                117.62      94
4     Braves                 83.31      94
from langchain.document_loaders import DataFrameLoader
loader = DataFrameLoader(df, page_content_column="Team")
loader.load()
[Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}),
Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}),
Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}),
Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}),
Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}),
Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}),
Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html |
f36d696e1098-1 | Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}),
Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}),
Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}),
Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}),
Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}),
Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}),
Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}),
Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}),
Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}),
Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}),
Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}),
Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html |
f36d696e1098-2 | Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}),
Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}),
Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}),
Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}),
Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}),
Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}),
Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}),
Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}),
Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}),
Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}),
Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]
OpenAIWhisperParser#
This notebook goes over how to load data from an audio file, such as an mp3.
We use the OpenAIWhisperParser, which will use the OpenAI Whisper API to transcribe audio to text.
Note: You will need to have an OPENAI_API_KEY supplied.
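For example, you can supply the key via an environment variable before loading (a minimal sketch; it assumes the standard OpenAI convention of reading OPENAI_API_KEY from the environment, and the value shown is a placeholder):
import os
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; substitute your own key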
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import OpenAIWhisperParser
# Directory contains audio for the first 20 minutes of one Andrej Karpathy video
# "The spelled-out intro to neural networks and backpropagation: building micrograd"
# https://www.youtube.com/watch?v=VMj-3S1tku0
audio_file_path = "example_data/"
loader = GenericLoader.from_filesystem(audio_file_path, glob="*.mp3", parser=OpenAIWhisperParser())
docs = loader.load()
docs | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-1 | [Document(page_content="Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I would like to take you through building of micrograd. Now micrograd is this library that I released on GitHub about two years ago but at the time I only uploaded the source code and you'd have to go in by yourself and really figure out how it works. So in this lecture I will take you through it step by step and kind of comment on all the pieces of it. So what is micrograd and why is it interesting? Thank you. Micrograd is basically an autograd engine. Autograd is short for automatic gradient and really what it does is it implements back propagation. Now back propagation is this algorithm that allows you to efficiently evaluate the gradient of some kind of a loss function with respect to the weights of a neural network and what that allows us to do then is we can iteratively tune the weights of that neural network to minimize the loss function and therefore improve the accuracy of the network. So back propagation would be at the mathematical core of any modern deep neural network library like say PyTorch or JAX. So the functionality of micrograd is I think best illustrated by an example. So if we just scroll down here you'll see that micrograd basically allows you to build out mathematical expressions and here what we are doing is we have an expression that we're building out where you have two inputs a and b and you'll see that a and b are negative four and two but we are wrapping those values into | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-2 | and you'll see that a and b are negative four and two but we are wrapping those values into this value object that we are going to build out as part of micrograd. So this value object will wrap the numbers themselves and then we are going to build out a mathematical expression here where a and b are transformed into c d and eventually e f and g and I'm showing some of the functionality of micrograd and the operations that it supports. So you can add two value objects, you can multiply them, you can raise them to a constant power, you can offset by one, negate, squash at zero, square, divide by constant, divide by it, etc. And so we're building out an expression graph with these two inputs a and b and we're creating an output value of g and micrograd will in the background build out this entire mathematical expression. So it will for example know that c is also a value, c was a result of an addition operation and the child nodes of c are a and b because the and it will maintain pointers to a and b value objects. So we'll basically know exactly how all of this is laid out and then not only can we do what we call the forward pass where we actually look at the value of g of course, that's pretty straightforward, we will access that using the dot data attribute and so the output of the forward pass, the value of g, is 24.7 it turns out. But the big deal is that we can also take this g value object and we can call dot backward and this will basically initialize backpropagation at the node g. And what backpropagation is going to do is it's going to start at g and it's going to go backwards through that expression graph and it's going to recursively apply the chain rule from calculus. And what that allows us to do then is we're going to evaluate basically the derivative of g with respect to all the internal nodes like e, d, | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-3 | going to evaluate basically the derivative of g with respect to all the internal nodes like e, d, and c but also with respect to the inputs a and b. And then we can actually query this derivative of g with respect to a, for example that's a.grad, in this case it happens to be 138, and the derivative of g with respect to b which also happens to be here 645. And this derivative we'll see soon is very important information because it's telling us how a and b are affecting g through this mathematical expression. So in particular a.grad is 138, so if we slightly nudge a and make it slightly larger, 138 is telling us that g will grow and the slope of that growth is going to be 138 and the slope of growth of b is going to be 645. So that's going to tell us about how g will respond if a and b get tweaked a tiny amount in a positive direction. Now you might be confused about what this expression is that we built out here and this expression by the way is completely meaningless. I just made it up, I'm just flexing about the kinds of operations that are supported by micrograd. What we actually really care about are neural networks but it turns out that neural networks are just mathematical expressions just like this one but actually slightly a bit less crazy even. Neural networks are just a mathematical expression, they take the input data as an input and they take the weights of a neural network as an input and it's a mathematical expression and the output are your predictions of your neural net or the loss function, we'll see this in a bit. But basically neural networks just happen to be a certain class of mathematical expressions but back propagation is actually significantly more general. It doesn't actually care about neural networks at all, it only cares about arbitrary mathematical expressions and then we happen to use that machinery for training of neural networks. Now one more note I would like to make at this stage is | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-4 | machinery for training of neural networks. Now one more note I would like to make at this stage is that as you see here micrograd is a scalar valued autograd engine so it's working on the you know level of individual scalars like negative 4 and 2 and we're taking neural nets and we're breaking them down all the way to these atoms of individual scalars and all the little pluses and times and it's just excessive and so obviously you would never be doing any of this in production. It's really just done for pedagogical reasons because it allows us to not have to deal with these n-dimensional tensors that you would use in modern deep neural network library. So this is really done so that you understand and refactor out back propagation and chain rule and understanding of neural training and then if you actually want to train bigger networks you have to be using these tensors but none of the math changes, this is done purely for efficiency. We are basically taking all the scalars all the scalar values we're packaging them up into tensors which are just arrays of these scalars and then because we have these large arrays we're making operations on those large arrays that allows us to take advantage of the parallelism in a computer and all those operations can be done in parallel and then the whole thing runs faster but really none of the math changes and they're done purely for efficiency so I don't think that it's pedagogically useful to be dealing with tensors from scratch and I think and that's why I fundamentally wrote micrograd because you can understand how things work at the fundamental level and then you can speed it up later. Okay so here's the fun part. My claim is that micrograd is what you need to train neural networks and everything else is just efficiency so you'd think that micrograd would be a very complex piece of code and that turns out to not be the case. So if we just go to micrograd and you'll see that there's only two files here | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-5 | So if we just go to micrograd and you'll see that there's only two files here in micrograd. This is the actual engine, it doesn't know anything about neural nets and this is the entire neural nets library on top of micrograd. So engine and nn.py. So the actual back propagation autograd engine that gives you the power of neural networks is literally 100 lines of code of like very simple python which we'll understand by the end of this lecture and then nn.py, this neural network library built on top of the autograd engine is like a joke. It's like we have to define what is a neuron and then we have to define what is a layer of neurons and then we define what is a multilayer perceptron which is just a sequence of layers of neurons and so it's just a total joke. So basically there's a lot of power that comes from only 150 lines of code and that's all you need to understand to understand neural network training and everything else is just efficiency and of course there's a lot to efficiency but fundamentally that's all that's happening. Okay so now let's dive right in and implement micrograd step by step. The first thing I'd like to do is I'd like to make sure that you have a very good understanding intuitively of what a derivative is and exactly what information it gives you. So let's start with some basic imports that I copy-paste in every jupyter notebook always and let's define a function, a scalar valued function f of x as follows. So I just made this up randomly. I just wanted a scalar valued function that takes a single scalar x and returns a single scalar y and we can call this function of course so we can pass in say 3.0 and get 20 back. Now we can also plot this function to get a sense of its shape. You can tell from the mathematical expression that this is probably a parabola, it's a quadratic and | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-6 | can tell from the mathematical expression that this is probably a parabola, it's a quadratic and so if we just create a set of scalar values that we can feed in using for example a range from negative 5 to 5 in steps of 0.25. So this is so x is just from negative 5 to 5 not including 5 in steps of 0.25 and we can actually call this function on this numpy array as well so we get a set of y's if we call f on x's and these y's are basically also applying the function on every one of these elements independently and we can plot this using matplotlib. So plt.plot x's and y's and we get a nice parabola. So previously here we fed in 3.0 somewhere here and we received 20 back which is here the y-coordinate. So now I'd like to think through what is the derivative of this function at any single input point x. So what is the derivative at different points x of this function? Now if you remember back to your calculus class you've probably derived derivatives so we take this mathematical expression 3x squared minus 4x plus 5 and you would write out on a piece of paper and you would apply the product rule and all the other rules and derive the mathematical expression of the great derivative of the original function and then you could plug in different texts and see what the derivative is. We're not going to actually do that because no one in neural networks actually writes out the expression for the neural net. It would be a massive expression, it would be thousands, tens of thousands of terms. No one actually derives the derivative of course and so we're not going to take this kind of like symbolic approach. Instead what I'd like to do is I'd like to look at the definition of derivative and just make sure that we really understand what the derivative is measuring, what it's telling you about the function. And so if | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-7 | really understand what the derivative is measuring, what it's telling you about the function. And so if we just look up derivative we see that okay so this is not a very good definition of derivative. This is a definition of what it means to be differentiable but if you remember from your calculus it is the limit as h goes to zero of f of x plus h minus f of x over h. So basically what it's saying is if you slightly bump up your at some point x that you're interested in or a and if you slightly bump up you know you slightly increase it by small number h how does the function respond with what sensitivity does it respond where is the slope at that point does the function go up or does it go down and by how much and that's the slope of that function the the slope of that response at that point and so we can basically evaluate the derivative here numerically by taking a very small h of course the definition would ask us to take h to zero we're just going to pick a very small h 0.001 and let's say we're interested in 0.3.0 so we can look at f of x of course as 20 and now f of x plus h so if we slightly nudge x in a positive direction how is the function going to respond and just looking at this do you expand do you expect f of x plus h to be slightly greater than 20 or do you expect it to be slightly lower than 20 and since this 3 is here and this is 20 if we slightly go positively the function will respond positively so you'd expect this to be slightly greater than 20 and now by how much is telling you the sort of the the strength of that slope right the the size of the slope so f of x plus h minus f of x this is how much the function responded in a positive direction and we have to normalize by the run so we have the rise over run to get the slope so this | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-8 | we have to normalize by the run so we have the rise over run to get the slope so this of course is just a numerical approximation of the slope because we have to make h very very small to converge to the exact amount now if i'm doing too many zeros at some point i'm going to i'm going to get an incorrect answer because we're using floating point arithmetic and the representations of all these numbers in computer memory is finite and at some point we get into trouble so we can converge towards the right answer with this approach but basically at 3 the slope is 14 and you can see that by taking 3x squared minus 4x plus 5 and differentiating it in our head so 3x squared would be 6x minus 4 and then we plug in x equals 3 so that's 18 minus 4 is 14 so this is correct so that's at 3 now how about the slope at say negative 3 would you expect what would you expect for the slope now telling the exact value is really hard but what is the sign of that slope so at negative 3 if we slightly go in the positive direction at x the function would actually go down and so that tells you that the slope would be negative so we'll get a slight number below below 20 and so if we take the slope we expect something negative negative 22 okay and at some point here of course the slope would be zero now for this specific function i looked it up previously and it's at point uh 2 over 3 so at roughly 2 over 3 that's somewhere here this this derivative would be zero so basically at that precise point yeah at that precise point if we nudge in a positive direction the function doesn't respond this stays the same almost and so that's why the slope is zero okay now let's look at a bit more complex case so we're going to start you know complexifying a bit so now we have a function here with output variable | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-9 | going to start you know complexifying a bit so now we have a function here with output variable d that is a function of three scalar inputs a b and c so a b and c are some specific values three inputs into our expression graph and a single output d and so if we just print d we get four and now what i like to do is i'd like to again look at the derivatives of d with respect to a b and c and uh think through uh again just the intuition of what this derivative is telling us so in order to evaluate this derivative we're going to get a bit hacky here we're going to again have a very small value of h and then we're going to fix the inputs at some values that we're interested in so these are the this is the point a b c at which we're going to be evaluating the the derivative of d with respect to all a b and c at that point so there are the inputs and now we have d1 is that expression and then we're going to for example look at the derivative of d with respect to a so we'll take a and we'll bump it by h and then we'll get d2 to be the exact same function and now we're going to print um you know f1 d1 is d1 d2 is d2 and print slope so the derivative or slope here will be um of course d2 minus d1 divide h so d2 minus d1 is how much the function increased uh when we bumped the uh the specific input that we're interested in by a tiny amount and this is the normalized by this is the normalized by h to get the slope so um yeah so this so i just run this we're going to print d1 which we know is four now d2 will be bumped a will be bumped by h so let's just think through a little bit uh what d2 will be uh printed out here in particular d1 will be four will d2 be a number slightly greater than | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-10 | uh printed out here in particular d1 will be four will d2 be a number slightly greater than four or slightly lower than four and that's going to tell us the sign of the derivative so we're bumping a by h b is minus three c is 10 so you can just intuitively think through this derivative and what it's doing a will be slightly more positive and but b is a negative number so if a is slightly more positive because b is negative three we're actually going to be adding less to d so you'd actually expect that the value of the function will go down so let's just see this yeah and so we went from four to 3.9996 and that tells you that the slope will be negative and then um will be a negative number because we went down and then the exact number of slope will be exact amount of slope is negative three and you can also convince yourself that negative three is the right answer um mathematically and analytically because if you have a times b plus c and you are you know you have calculus then uh differentiating a times b plus c with respect to a gives you just b and indeed the value of b is negative three which is the derivative that we have so you can tell that that's correct so now if we do this with b so if we bump b by a little bit in a positive direction we'd get different slopes so what is the influence of b on the output d so if we bump b by a tiny amount in a positive direction then because a is positive we'll be adding more to d right so um and now what is the what is the sensitivity what is the slope of that addition and it might not surprise you that this should be two and why is it two because d of d by db differentiating with respect to b would be would give us a and the value of a is two so that's also working well and then if c gets bumped a tiny amount in h by h then of course a times | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
cf4d76d2dc17-11 | working well and then if c gets bumped a tiny amount in h by h then of course a times b is unaffected and now c becomes slightly bit higher what does that do to the function it makes it slightly bit higher because we're simply adding c and it makes it slightly bit higher by the exact same amount that we added to c and so that tells you that the slope is one that will be the the rate at which d will increase as we scale c okay so we now have some intuitive sense of what this derivative is telling you about the function and we'd like to move to neural networks now as i mentioned neural networks will be pretty massive expressions mathematical expressions so we need some data structures that maintain these expressions and that's what we're going to start to build out now so we're going to build out this value object that i showed you in the readme page of micrograd so let me copy paste a skeleton of the first very simple value object so class value takes a single scalar value that it wraps and keeps track of and that's it so we can for example do value of 2.0 and then we can get we can look at its content and python will internally use the wrapper function to return this string like that so this is a value object that we're going to call value object", metadata={'source': 'example_data/Lecture_1_0.mp3'})] | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html |
Iugu#
Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
This notebook covers how to load data from the Iugu REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
import os
from langchain.document_loaders import IuguLoader
from langchain.indexes import VectorstoreIndexCreator
The Iugu API requires an access token, which can be found inside of the Iugu dashboard.
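For example (a minimal sketch; it assumes the loader falls back to the IUGU_API_TOKEN environment variable when a token is not passed explicitly, and the value shown is a placeholder):
import os
os.environ["IUGU_API_TOKEN"] = "YOUR_IUGU_API_TOKEN"  # placeholder; use the token from your Iugu dashboard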
This document loader also requires a resource option which defines what data you want to load.
The following resources are available:
Documentation
iugu_loader = IuguLoader("charges")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([iugu_loader])
iugu_doc_retriever = index.vectorstore.as_retriever()
Obsidian#
Obsidian is a powerful and extensible knowledge base that works on top of your local folder of plain text files.
This notebook covers how to load documents from an Obsidian database.
Since Obsidian is just stored on disk as a folder of Markdown files, the loader just takes a path to this directory.
Obsidian files also sometimes contain metadata which is a YAML block at the top of the file. These values will be added to the document’s metadata. (ObsidianLoader can also be passed a collect_metadata=False argument to disable this behavior.)
from langchain.document_loaders import ObsidianLoader
loader = ObsidianLoader("<path-to-obsidian>")
docs = loader.load()
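To skip parsing the YAML front matter into metadata, pass the collect_metadata=False argument mentioned above (a minimal sketch):
loader = ObsidianLoader("<path-to-obsidian>", collect_metadata=False)
docs = loader.load()  # front matter is no longer parsed into each document's metadata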
CoNLL-U#
CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, and including an LF character at the end of the file) with three types of lines:
Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; see below.
Blank lines marking sentence boundaries.
Comment lines starting with hash (#).
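For illustration, a hand-written CoNLL-U fragment looks like the sketch below (this is not the contents of the example file used later); each word line carries the 10 tab-separated fields ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS and MISC:
# text = They buy books.
1	They	they	PRON	PRP	Case=Nom|Number=Plur	2	nsubj	_	_
2	buy	buy	VERB	VBP	Number=Plur|Person=3|Tense=Pres	0	root	_	_
3	books	book	NOUN	NNS	Number=Plur	2	obj	_	_
4	.	.	PUNCT	.	_	2	punct	_	_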
This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.
from langchain.document_loaders import CoNLLULoader
loader = CoNLLULoader("example_data/conllu.conllu")
document = loader.load()
document
[Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]
Contents
Prerequisites
Loading documents
Converting the docs to embeddings
Psychic#
This notebook covers how to load documents from Psychic. See here for more details.
Prerequisites#
Follow the Quick Start section in this document
Log into the Psychic dashboard and get your secret key
Install the frontend React library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify.
Loading documents#
Use the PsychicLoader class to load in documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library).
# Uncomment this to install psychicapi if you don't already have it installed
!poetry run pip -q install psychicapi
from langchain.document_loaders import PsychicLoader
from psychicapi import ConnectorId
# Create a document loader for Google Drive. We can also load from other connectors by setting the connector_id to the appropriate value, e.g. ConnectorId.notion.value
# This loader uses our test credentials
google_drive_loader = PsychicLoader(
api_key="7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e",
connector_id=ConnectorId.gdrive.value,
connection_id="google-test"
)
documents = google_drive_loader.load()
Converting the docs to embeddings#
We can now convert these documents into embeddings and store them in a vector database like Chroma.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/psychic.html |
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQAWithSourcesChain
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever())
chain({"question": "what is psychic?"}, return_only_outputs=True)
Contents
Features
Trello#
Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a “board” where users can create lists and cards to represent their tasks and activities.
The TrelloLoader allows you to load cards from a Trello board and is implemented on top of py-trello
This currently supports api_key/token only.
Credentials generation: https://trello.com/power-ups/admin/
Click on the manual token generation link to get the token.
To specify the API key and token you can either set the environment variables TRELLO_API_KEY and TRELLO_TOKEN or you can pass api_key and token directly into the from_credentials convenience constructor method.
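For example, with environment variables (a minimal sketch; the values are placeholders):
import os
os.environ["TRELLO_API_KEY"] = "YOUR_TRELLO_API_KEY"  # placeholder
os.environ["TRELLO_TOKEN"] = "YOUR_TRELLO_TOKEN"      # placeholder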
This loader allows you to provide the board name to pull in the corresponding cards into Document objects.
Notice that the board “name” is also called “title” in the official documentation:
https://support.atlassian.com/trello/docs/changing-a-boards-title-and-description/
You can also specify several load parameters to include or remove different fields, both from the document’s page_content and from its metadata.
Features#
Load cards from a Trello board.
Filter cards based on their status (open or closed).
Include card names, comments, and checklists in the loaded documents.
Customize the additional metadata fields to include in the document.
By default, all card fields are included in the full-text page_content and in the metadata accordingly.
#!pip install py-trello beautifulsoup4
# If you have already set the API key and token using environment variables,
# you can skip this cell and comment out the `api_key` and `token` named arguments
# in the initialization steps below.
from getpass import getpass
API_KEY = getpass()
TOKEN = getpass() | https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/trello.html |
········
········
from langchain.document_loaders import TrelloLoader
# Get the open cards from "Awesome Board"
loader = TrelloLoader.from_credentials(
"Awesome Board",
api_key=API_KEY,
token=TOKEN,
card_filter="open",
)
documents = loader.load()
print(documents[0].page_content)
print(documents[0].metadata)
Review Tech partner pages
Comments:
{'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'labels': ['Demand Marketing'], 'list': 'Done', 'closed': False, 'due_date': ''}
# Get all the cards from "Awesome Board" but only include the
# card list (column) as extra metadata.
loader = TrelloLoader.from_credentials(
    "Awesome Board",
    api_key=API_KEY,
    token=TOKEN,
    extra_metadata=("list",),
)
documents = loader.load()
print(documents[0].page_content)
print(documents[0].metadata)
Review Tech partner pages
Comments:
{'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'list': 'Done'}
# Get the cards from "Another Board" and exclude the card name,
# checklist and comments from the Document page_content text.
loader = TrelloLoader.from_credentials(
    "Another Board",
    api_key=API_KEY,
    token=TOKEN,
    include_card_name=False,
    include_checklist=False,
    include_comments=False,
)
documents = loader.load()
print("Document: " + documents[0].page_content)
print(documents[0].metadata)
WebBaseLoader#
This covers how to use WebBaseLoader to load all text from HTML webpages into a document format that we can use downstream. For more custom logic for loading webpages, look at some child-class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoader.
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://www.espn.com/")
data = loader.load()
data
[Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n [... several thousand characters of scraped page text truncated ...] Copyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0)]
"""
# Use this piece of code for testing new custom BeautifulSoup parsers
import requests
from bs4 import BeautifulSoup
html_doc = requests.get("{INSERT_NEW_URL_HERE}")
soup = BeautifulSoup(html_doc.text, 'html.parser')
# Beautiful soup logic to be exported to langchain.document_loaders.webpage.py
# Example: transcript = soup.select_one("td[class='scrtext']").text
# BS4 documentation can be found here: https://www.crummy.com/software/BeautifulSoup/bs4/doc/
""";
Loading multiple webpages#
You can also load multiple webpages at once by passing in a list of urls to the loader. This will return a list of documents in the same order as the urls passed in.
loader = WebBaseLoader(["https://www.espn.com/", "https://google.com"])
docs = loader.load()
docs