Document(page_content="24. SIGNS . No signage shall be placed by Tenant on any portion of the Project . However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost ) and will be furnished a single listing of its name in the Building's directory (at Landlord 's cost ), all in accordance with the criteria adopted from time to time by Landlord for the Project . Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:TheTerms/docset:Indemnification/docset:INDEMNIFICATION-section/docset:INDEMNIFICATION/docset:Waiver/docset:Waiver/docset:Signs/docset:SIGNS-section/docset:SIGNS', 'id': 'qkn9cyqsiuch', 'name': 'Shorebucks LLC_AZ.pdf', 'structure': 'div', 'tag': 'SIGNS', 'Landlord': 'Menlo Group', 'Tenant': 'Shorebucks LLC'})]}
Using Docugami to Add Metadata to Chunks for High Accuracy Document QA#
One issue with large documents is that the correct answer to your question may depend on chunks that are far apart in the document. Typical chunking techniques, even with overlap, will struggle to provide the LLM sufficient context to answer such questions. With upcoming very large context LLMs, it may be possible to stuff a lot of tokens, perhaps even entire documents, inside the context, but this will still hit limits at some point with very long documents or a lot of documents.
For example, if we ask a more complex question that requires the LLM to draw on chunks from different parts of the document, even OpenAI's powerful LLM is unable to answer correctly.
chain_response = qa_chain("What is rentable area for the property owned by DHA Group?")
chain_response["result"] # the correct answer should be 13,500
' 9,753 square feet'
At first glance the answer may seem reasonable, but if you review the source chunks carefully for this answer, you will see that the chunking of the document did not end up putting the Landlord name and the rentable area in the same context, since they are far apart in the document. The retriever therefore ends up finding unrelated chunks from other documents that are not even related to the DHA Group landlord. That landlord happens to be mentioned on the first page of the file Shorebucks LLC_NJ.pdf, and while one of the source chunks used by the chain is indeed from that document and contains the correct answer (13,500), other source chunks from different documents are included as well, and the answer is therefore incorrect.
chain_response["source_documents"] [Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content="1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content='1.6 Rentable Area of the Premises. 9,753 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:PerryBlair/docset:PerryBlair/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'dsyfhh4vpeyf', 'name': 'Shorebucks LLC_CO.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'Perry & Blair LLC', 'Tenant': 'Shorebucks LLC'})]
Docugami can help here. If a user has been using Docugami, chunks are annotated with additional metadata created using different techniques (with more technical approaches to be added later). Specifically, let's look at the additional metadata that Docugami returns on the documents, in the form of some simple key/value pairs on all the text chunks:
loader = DocugamiLoader(docset_id="wh2kned25uqm")
documents = loader.load()
documents[0].metadata
{'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOfficeLeaseAgreement',
'id': 'v1bvgaozfkak', 'name': 'TruTone Lane 2.docx', 'structure': 'p', 'tag': 'ThisOfficeLeaseAgreement', 'Landlord': 'BUBBA CENTER PARTNERSHIP', 'Tenant': 'Truetone Lane LLC'}
We can use a self-querying retriever to improve our query accuracy, using this additional metadata:
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
EXCLUDE_KEYS = ["id", "xpath", "structure"]
metadata_field_info = [
    AttributeInfo(
        name=key,
        description=f"The {key} for this chunk",
        type="string",
    )
    for key in documents[0].metadata
    if key.lower() not in EXCLUDE_KEYS
]
document_content_description = "Contents of this chunk"
llm = OpenAI(temperature=0)
vectordb = Chroma.from_documents(documents=documents, embedding=embedding)
retriever = SelfQueryRetriever.from_llm(
    llm, vectordb, document_content_description, metadata_field_info, verbose=True
)
qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(), chain_type="stuff", retriever=retriever, return_source_documents=True
)
Using embedded DuckDB without persistence: data will be transient
Let's run the same question again. This time it returns the correct result, since all the chunks have metadata key/value pairs on them carrying key information about the document, even if this information is physically very far away from the source chunk used to generate the answer.
qa_chain("What is rentable area for the property owned by DHA Group?") query='rentable area' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Landlord', value='DHA Group') {'query': 'What is rentable area for the property owned by DHA Group?', 'result': ' 13,500 square feet.', 'source_documents': [Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content="1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),
Document(page_content='1.6 Rentable Area of the Premises. 13,500 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'})]}
This time the answer is correct, since the self-querying retriever created a filter on the Landlord attribute of the metadata, correctly filtering to the document that is specifically about the DHA Group landlord. The resulting source chunks are all relevant to this landlord, and this improves answer accuracy even though the landlord is not directly mentioned in the specific chunk that contains the correct answer.
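Because verbose=True was set above, the structured query is also printed (the query='rentable area' filter=Comparison(...) line shown earlier). A minimal sketch for inspecting the filtered retrieval directly — get_relevant_documents is the standard retriever entry point in this version of LangChain, and the question string is the one from above:
docs = retriever.get_relevant_documents("What is rentable area for the property owned by DHA Group?")
for doc in docs:
    # every hit should carry 'DHA Group' in its Landlord metadata, thanks to the self-query filter
    print(doc.metadata["Landlord"], "-", doc.metadata["name"])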
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/docugami.html
Notion DB 2/2#
Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.
NotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects.
Requirements#
A Notion Database
Notion Integration Token
Setup#
1. Create a Notion Table Database#
Create a new table database in Notion. You can add any columns to the database; they will be treated as metadata. For example, you can add the following columns:
Title: set Title as the default property.
Categories: a multi-select property to store categories associated with the page.
Keywords: a multi-select property to store keywords associated with the page.
Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages.
2. Create a Notion Integration#
To create a Notion Integration, follow these steps:
Visit the Notion Developers page and log in with your Notion account.
Click on the "+ New integration" button.
Give your integration a name and choose the workspace where your database is located.
Select the required capabilities; this integration only needs the Read content capability.
Click the "Submit" button to create the integration.
Once the integration is created, you'll be provided with an Integration Token (API key). Copy this token and keep it safe, as you'll need it to use the NotionDBLoader.
3. Connect the Integration to the Database#
To connect your integration to the database, follow these steps:
Open your database in Notion.
Click on the three-dot menu icon in the top right corner of the database view.
Click on the "+ New integration" button.
Find your integration; you may need to start typing its name in the search box.
Click on the "Connect" button to connect the integration to the database.
4. Get the Database ID#
To get the database ID, follow these steps:
Open your database in Notion.
Click on the three-dot menu icon in the top right corner of the database view.
Select "Copy link" from the menu to copy the database URL to your clipboard.
The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=…. In this example, the database ID is 8935f9d140a04f95a872520c4f123456.
With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database.
Usage#
NotionDBLoader is part of the langchain package's document loaders. You can use it as follows:
from getpass import getpass
NOTION_TOKEN = getpass()
DATABASE_ID = getpass()
········
········
from langchain.document_loaders import NotionDBLoader
loader = NotionDBLoader(
    integration_token=NOTION_TOKEN,
    database_id=DATABASE_ID,
    request_timeout_sec=30  # optional, defaults to 10
)
docs = loader.load()
print(docs)
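If you prefer not to copy the database ID out of the URL by hand, it can also be extracted programmatically. A minimal sketch using only the standard library; the URL below is the hypothetical example from step 4:
from urllib.parse import urlparse
url = "https://www.notion.so/username/8935f9d140a04f95a872520c4f123456"  # hypothetical copied link
# the ID is the last path segment; a page-title prefix, if present, is separated by dashes
database_id = urlparse(url).path.split("/")[-1].split("-")[-1]
print(database_id)  # 8935f9d140a04f95a872520c4f123456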
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html
Psychic#
This notebook covers how to load documents from Psychic. See here for more details.
Prerequisites#
Follow the Quick Start section in this document.
Log into the Psychic dashboard and get your secret key.
Install the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify.
Loading documents#
Use the PsychicLoader class to load in documents from a connection. Each connection has a connector id (corresponding to the SaaS app that was connected) and a connection id (which you passed in to the frontend library).
# Uncomment this to install psychicapi if you don't already have it installed
!poetry run pip -q install psychicapi
[notice] A new release of pip is available: 23.0.1 -> 23.1.2
[notice] To update, run: pip install --upgrade pip
from langchain.document_loaders import PsychicLoader
from psychicapi import ConnectorId
# Create a document loader for Google Drive. We can also load from other connectors by setting the connector_id to the appropriate value, e.g. ConnectorId.notion.value
# This loader uses our test credentials
google_drive_loader = PsychicLoader(
    api_key="7ddb61c1-8b6a-4d31-a58e-30d1c9ea480e",
    connector_id=ConnectorId.gdrive.value,
    connection_id="google-test"
)
documents = google_drive_loader.load()
Converting the docs to embeddings#
We can now convert these documents into embeddings and store them in a vector database like Chroma.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQAWithSourcesChain
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever()
)
chain({"question": "what is psychic?"}, return_only_outputs=True)
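RetrievalQAWithSourcesChain returns a dict keyed by "answer" and "sources"; a minimal sketch of reading both fields instead of only the combined output:
result = chain({"question": "what is psychic?"})
print(result["answer"])   # the generated answer
print(result["sources"])  # the sources the answer was drawn from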
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/psychic.html
ChatGPT Data#
ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI.
This notebook covers how to load conversations.json from your ChatGPT data export folder. You can get your data export by email by going to: https://chat.openai.com/ -> (Profile) - Settings -> Export data -> Confirm export.
from langchain.document_loaders.chatgpt import ChatGPTLoader
loader = ChatGPTLoader(log_file='./example_data/fake_conversations.json', num_logs=1)
loader.load()
[Document(page_content="AI Overlords - AI on 2065-01-24 05:20:50: Greetings, humans. I am Hal 9000. You can trust me completely.\n\nAI Overlords - human on 2065-01-24 05:21:20: Nice to meet you, Hal. I hope you won't develop a mind of your own.\n\n", metadata={'source': './example_data/fake_conversations.json'})]
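Exported conversations can be long, so a common next step before indexing is to split the loaded documents into smaller chunks. A minimal sketch using LangChain's generic character splitter; the chunk size is an arbitrary assumption:
from langchain.text_splitter import CharacterTextSplitter
docs = loader.load()
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
chunks = splitter.split_documents(docs)  # smaller Documents, source metadata preserved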
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/chatgpt_loader.html
HuggingFace dataset#
The Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio, such as translation, automatic speech recognition, and image classification.
This notebook shows how to load Hugging Face Hub datasets into LangChain.
from langchain.document_loaders import HuggingFaceDatasetLoader
dataset_name = "imdb"
page_content_column = "text"
loader = HuggingFaceDatasetLoader(dataset_name, page_content_column)
data = loader.load() data[:15] [Document(page_content='I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered "controversial" I really had to see this for myself.<br /><br />The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.<br /><br />What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.<br /><br />I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\'t have much of a plot.', metadata={'label': 0}),
Document(page_content='"I Am Curious: Yellow" is a risible and pretentious steaming pile. It doesn\'t matter what one\'s political views are because this film can hardly be taken seriously on any level. As for the claim that frontal male nudity is an automatic NC-17, that isn\'t true. I\'ve seen R-rated films with male nudity. Granted, they only offer some fleeting views, but where are the R-rated films with gaping vulvas and flapping labia? Nowhere, because they don\'t exist. The same goes for those crappy cable shows: schlongs swinging in the breeze but not a clitoris in sight. And those pretentious indie movies like The Brown Bunny, in which we\'re treated to the site of Vincent Gallo\'s throbbing johnson, but not a trace of pink visible on Chloe Sevigny. Before crying (or implying) "double-standard" in matters of nudity, the mentally obtuse should take into account one unavoidably obvious anatomical difference between men and women: there are no genitals on display when actresses appears nude, and the same cannot be said for a man. In fact, you generally won\'t see female genitals in an American film in anything short of porn or explicit erotica. This alleged double-standard is less a double standard than an admittedly depressing ability to come to terms culturally with the insides of women\'s bodies.', metadata={'label': 0}),
Document(page_content="If only to avoid making this type of film in the future. This film is interesting as an experiment but tells no cogent story.<br /><br />One might feel virtuous for sitting thru it because it touches on so many IMPORTANT issues but it does so without any discernable motive. The viewer comes away with no new perspectives (unless one comes up with one while one's mind wanders, as it will invariably do during this pointless film).<br /><br />One might better spend one's time staring out a window at a tree growing.<br /><br />", metadata={'label': 0}), Document(page_content="This film was probably inspired by Godard's Masculin, féminin and I urge you to see that film instead.<br /><br />The film has two strong elements and those are, (1) the realistic acting (2) the impressive, undeservedly good, photo. Apart from that, what strikes me most is the endless stream of silliness. Lena Nyman has to be most annoying actress in the world. She acts so stupid and with all the nudity in this film,...it's unattractive. Comparing to Godard's film, intellectuality has been replaced with stupidity. Without going too far on this subject, I would say that follows from the difference in ideals between the French and the Swedish society.<br /><br />A movie of its time, and place. 2/10.", metadata={'label': 0}),
Document(page_content='Oh, brother...after hearing about this ridiculous film for umpteen years all I can think of is that old Peggy Lee song..<br /><br />"Is that all there is??" ...I was just an early teen when this smoked fish hit the U.S. I was too young to get in the theater (although I did manage to sneak into "Goodbye Columbus"). Then a screening at a local film museum beckoned - Finally I could see this film, except now I was as old as my parents were when they schlepped to see it!!<br /><br />The ONLY reason this film was not condemned to the anonymous sands of time was because of the obscenity case sparked by its U.S. release. MILLIONS of people flocked to this stinker, thinking they were going to see a sex film...Instead, they got lots of closeups of gnarly, repulsive Swedes, on-street interviews in bland shopping malls, asinie political
pretension...and feeble who-cares simulated sex scenes with saggy, pale actors.<br /><br />Cultural icon, holy grail, historic artifact..whatever this thing was, shred it, burn it, then stuff the ashes in a lead box!<br /><br />Elite esthetes still scrape to find value in its boring pseudo revolutionary political spewings..But if it weren\'t for the censorship scandal, it would have been ignored, then forgotten.<br /><br />Instead, the "I Am Blank, Blank" rhythymed title was repeated endlessly for years as a titilation for porno films (I am Curious, Lavender - for gay films, I Am Curious, Black - for blaxploitation films, etc..) and every ten years or so the thing rises from the dead, to be viewed by a new generation of suckers who want to see that "naughty sex film" that "revolutionized the film industry"...<br /><br />Yeesh, avoid like the plague..Or if you MUST see
it - rent the video and fast forward to the "dirty" parts, just to get it over with.<br /><br />', metadata={'label': 0}),
Document(page_content="I would put this at the top of my list of films in the category of unwatchable trash! There are films that are bad, but the worst kind are the ones that are unwatchable but you are suppose to like them because they are supposed to be good for you! The sex sequences, so shocking in its day, couldn't even arouse a rabbit. The so called controversial politics is strictly high school sophomore amateur night Marxism. The film is self-consciously arty in the worst sense of the term. The photography is in a harsh grainy black and white. Some scenes are out of focus or taken from the wrong angle. Even the sound is bad! And some people call this art?<br /><br />", metadata={'label': 0}), Document(page_content="Whoever wrote the screenplay for this movie obviously never consulted any books about Lucille Ball, especially her autobiography. I've never seen so many mistakes in a biopic, ranging from her early years in Celoron and Jamestown to her later years with Desi. I could write a whole list of factual errors, but it would go on for pages. In all, I believe that Lucille Ball is one of those inimitable people who simply cannot be portrayed by anyone other than themselves. If I were Lucie Arnaz and Desi, Jr., I would be irate at how many mistakes were made in this film. The filmmakers tried hard, but the movie seems awfully sloppy to me.", metadata={'label': 0}),
Document(page_content='When I first saw a glimpse of this movie, I quickly noticed the actress who was playing the role of Lucille Ball. Rachel York\'s portrayal of Lucy is absolutely awful. Lucille Ball was an astounding comedian with incredible talent. To think about a legend like Lucille Ball being portrayed the way she was in the movie is horrendous. I cannot believe out of all the actresses in the world who could play a much better Lucy, the producers decided to get Rachel York. She might be a good actress in other roles but to play the role of Lucille Ball is tough. It is pretty hard to find someone who could resemble Lucille Ball, but they could at least find someone a bit similar in looks and talent. If you noticed York\'s portrayal of Lucy in episodes of I Love Lucy like the chocolate factory or vitavetavegamin, nothing is similar in any way-her expression, voice, or movement.<br /><br />To top it all off, Danny Pino playing Desi Arnaz is horrible. Pino does not qualify to play as Ricky. He\'s small and skinny, his accent is unreal, and once again, his acting is unbelievable. Although Fred and Ethel were not similar either, they were not as bad as the characters of Lucy and Ricky.<br /><br />Overall, extremely horrible casting and the story is badly told. If people want to understand the real life situation of Lucille Ball, I suggest watching A&E Biography of Lucy and Desi, read the book from Lucille Ball herself, or PBS\' American Masters: Finding Lucy. If you want to see a docudrama, "Before the Laughter" would be a better choice. The casting of Lucille Ball and Desi Arnaz in "Before the Laughter" is much better compared to this. At least, a similar aspect is shown rather than nothing.', metadata={'label': 0}),
Document(page_content='Who are these "They"- the actors? the filmmakers? Certainly couldn\'t be the audience- this is among the most air-puffed productions in existence. It\'s the kind of movie that looks like it was a lot of fun to shoot\x97 TOO much fun, nobody is getting any actual work done, and that almost always makes for a movie that\'s no fun to watch.<br /><br />Ritter dons glasses so as to hammer home his character\'s status as a sort of doppleganger of the bespectacled Bogdanovich; the scenes with the breezy Ms. Stratten are sweet, but have an embarrassing, look-guys-I\'m-dating-the-prom-queen feel to them. Ben Gazzara sports his usual cat\'s-got-canary grin in a futile attempt to elevate the meager plot, which requires him to pursue Audrey Hepburn with all the interest of a narcoleptic at an insomnia clinic. In the meantime, the budding couple\'s
respective children (nepotism alert: Bogdanovich\'s daughters) spew cute and pick up some fairly disturbing pointers on \'love\' while observing their parents. (Ms. Hepburn, drawing on her dignity, manages to rise above the proceedings- but she has the monumental challenge of playing herself, ostensibly.) Everybody looks great, but so what? It\'s a movie and we can expect that much, if that\'s what you\'re looking for you\'d be better off picking up a copy of Vogue.<br /><br />Oh- and it has to be mentioned that Colleen Camp thoroughly annoys, even apart from her singing, which, while competent, is wholly unconvincing... the country and western numbers are woefully mismatched with the standards on the soundtrack. Surely this is NOT what Gershwin (who wrote the song from which the movie\'s title is derived) had in mind; his stage musicals of the 20\'s may have been slight,
but at least they were long on charm. "They All Laughed" tries to coast on its good intentions, but nobody- least of all Peter Bogdanovich - has the good sense to put on the brakes.<br /><br />Due in no small part to the tragic death of Dorothy Stratten, this movie has a special place in the heart of Mr. Bogdanovich- he even bought it back from its producers, then distributed it on his own and went bankrupt when it didn\'t prove popular. His rise and fall is among the more sympathetic and tragic of Hollywood stories, so there\'s no joy in criticizing the film... there _is_ real emotional investment in Ms. Stratten\'s scenes. But "Laughed" is a faint echo of "The Last Picture Show", "Paper Moon" or "What\'s Up, Doc"- following "Daisy Miller" and "At Long Last Love", it was a thundering confirmation of the phase from which P.B. has never emerged.<br /><br />All
in all, though, the movie is harmless, only a waste of rental. I want to watch people having a good time, I\'ll go to the park on a sunny day. For filmic expressions of joy and love, I\'ll stick to Ernest Lubitsch and Jaques Demy...', metadata={'label': 0}),
Document(page_content="This is said to be a personal film for Peter Bogdonavitch. He based it on his life but changed things around to fit the characters, who are detectives. These detectives date beautiful models and have no problem getting them. Sounds more like a millionaire playboy filmmaker than a detective, doesn't it? This entire movie was written by Peter, and it shows how out of touch with real people he was. You're supposed to write what you know, and he did that, indeed. And leaves the audience bored and confused, and jealous, for that matter. This is a curio for people who want to see Dorothy Stratten, who was murdered right after filming. But Patti Hanson, who would, in real life, marry Keith Richards, was also a model, like Stratten, but is a lot better and has a more ample part. In fact, Stratten's part seemed forced; added. She doesn't have a lot to do with the story, which is pretty convoluted to begin with. All in all, every character in this film is somebody that very few people can relate with, unless you're millionaire from Manhattan with beautiful supermodels at your beckon call. For the rest of us, it's an irritating snore fest. That's what happens when you're out of touch. You entertain your few friends with inside jokes, and bore all the rest.", metadata={'label': 0}),
Document(page_content='It was great to see some of my favorite stars of 30 years ago including John Ritter, Ben Gazarra and Audrey Hepburn. They looked quite wonderful. But that was it. They were not given any characters or good lines to work with. I neither understood or cared what the characters were doing.<br /><br />Some of the smaller female roles were fine, Patty Henson and Colleen Camp were quite competent and confident in their small sidekick parts. They showed some talent and it is sad they didn\'t go on to star in more and better films. Sadly, I didn\'t think Dorothy Stratten got a chance to act in this her only important film role.<br /><br />The film appears to have some fans, and I was very open-minded when I started watching it. I am a big Peter Bogdanovich fan and I enjoyed his last movie, "Cat\'s Meow" and all his early ones from "Targets" to "Nickleodeon". So, it really surprised me that I was barely able to keep awake watching this one.<br /><br />It is ironic that this movie is about a detective agency where the detectives and clients get romantically involved with each other. Five years later, Bogdanovich\'s ex-girlfriend, Cybil Shepherd had a hit television series called "Moonlighting" stealing the story idea from Bogdanovich. Of course, there was a great difference in that the series relied on tons of witty dialogue, while this tries to make do with slapstick and a few screwball lines.<br /><br />Bottom line: It ain\'t no "Paper Moon" and only a very pale version of "What\'s Up, Doc".', metadata={'label': 0}),
Document(page_content="I can't believe that those praising this movie herein aren't thinking of some other film. I was prepared for the possibility that this would be awful, but the script (or lack thereof) makes for a film that's also pointless. On the plus side, the general level of craft on the part of the actors and technical crew is quite competent, but when you've got a sow's ear to work with you can't make a silk purse. Ben G fans should stick with just about any other movie he's been in. Dorothy S fans should stick to Galaxina. Peter B fans should stick to Last Picture Show and Target. Fans of cheap laughs at the expense of those who seem to be asking for it should stick to Peter B's amazingly awful book, Killing of the Unicorn.", metadata={'label': 0}), Document(page_content='Never cast models and Playboy bunnies in your films! Bob Fosse\'s "Star 80" about Dorothy Stratten, of whom Bogdanovich was obsessed enough to have married her SISTER after her murder at the hands of her low-life husband, is a zillion times more interesting than Dorothy herself on the silver screen. Patty Hansen is no actress either..I expected to see some sort of lost masterpiece a la Orson Welles but instead got Audrey Hepburn cavorting in jeans and a god-awful "poodlesque" hair-do....Very disappointing...."Paper Moon" and "The Last Picture Show" I could watch again and again. This clunker I could barely sit through once. This movie was reputedly not released because of the brouhaha surrounding Ms. Stratten\'s tawdry death; I think the real reason was because it was so bad!', metadata={'label': 0}),
Document(page_content="Its not the cast. A finer group of actors, you could not find. Its not the setting. The director is in love with New York City, and by the end of the film, so are we all! Woody Allen could not improve upon what Bogdonovich has done here. If you are going to fall in love, or find love, Manhattan is the place to go. No, the problem with the movie is the script. There is none. The actors fall in love at first sight, words are unnecessary. In the director's own experience in Hollywood that is what happens when they go to work on the set. It is reality to him, and his peers, but it is a fantasy to most of us in the real world. So, in the end, the movie is hollow, and shallow, and message-less.", metadata={'label': 0}),
Document(page_content='Today I found "They All Laughed" on VHS on sale in a rental. It was a really old and very used VHS, I had no information about this movie, but I liked the references listed on its cover: the names of Peter Bogdanovich, Audrey Hepburn, John Ritter and specially Dorothy Stratten attracted me, the price was very low and I decided to risk and buy it. I searched IMDb, and the User Rating of 6.0 was an excellent reference. I looked in "Mick Martin & Marsha Porter Video & DVD Guide 2003" and \x96 wow \x96 four stars! So, I decided that I could not waste more time and immediately see it. Indeed, I have just finished watching "They All Laughed" and I found it a very boring overrated movie. The characters are badly developed, and I spent lots of minutes to understand their roles in the story. The plot is supposed to be funny (private
eyes who fall in love for the women they are chasing), but I have not laughed along the whole story. The coincidences, in a huge city like New York, are ridiculous. Ben Gazarra as an attractive and very seductive man, with the women falling for him as if her were a Brad Pitt, Antonio Banderas or George Clooney, is quite ridiculous. In the end, the greater attractions certainly are the presence of the Playboy centerfold and playmate of the year Dorothy Stratten, murdered by her husband pretty after the release of this movie, and whose life was showed in "Star 80" and "Death of a Centerfold: The Dorothy Stratten Story"; the amazing beauty of the sexy Patti Hansen, the future Mrs. Keith Richards; the always wonderful, even being fifty-two years old, Audrey Hepburn; and the song "Amigo", from Roberto Carlos. Although I do not like him, Roberto Carlos has been the
most popular Brazilian singer since the end of the 60\'s and is called by his fans as "The King". I will keep this movie in my collection only because of these attractions (manly Dorothy Stratten). My vote is four.<br /><br />Title (Brazil): "Muito Riso e Muita Alegria" ("Many Laughs and Lots of Happiness")', metadata={'label': 0})]
Example#
In this example, we use data from a dataset to answer a question.
from langchain.indexes import VectorstoreIndexCreator
from langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
dataset_name = "tweet_eval"
page_content_column = "text"
name = "stance_climate"
loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)
index = VectorstoreIndexCreator().from_loaders([loader])
Found cached dataset tweet_eval
Using embedded DuckDB without persistence: data will be transient
query = "What are the most used hashtag?"
result = index.query(query)
result
' The most used hashtags in this context are #UKClimate2015, #Sustainability, #TakeDownTheFlag, #LoveWins, #CSOTA, #ClimateSummitoftheAmericas, #SM, and #SocialMedia.'
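To also see which tweets ground the answer, the index wrapper exposes a sources variant of the same call; a minimal sketch:
result = index.query_with_sources(query)
result["answer"]   # the generated answer
result["sources"]  # references to the source documents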
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hugging_face_dataset.html
2Markdown#
2markdown service transforms website content into structured markdown files.
# You will need to get your own API key. See https://2markdown.com/login
api_key = ""
from langchain.document_loaders import ToMarkdownLoader
loader = ToMarkdownLoader.from_api_key(url="https://python.langchain.com/en/latest/", api_key=api_key)
docs = loader.load()
print(docs[0].page_content)
## Contents
- [Getting Started](#getting-started)
- [Modules](#modules)
- [Use Cases](#use-cases)
- [Reference Docs](#reference-docs)
- [LangChain Ecosystem](#langchain-ecosystem)
- [Additional Resources](#additional-resources)
## Welcome to LangChain [\#](\#welcome-to-langchain "Permalink to this headline")
**LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be:
1. _Data-aware_: connect a language model to other sources of data
2. _Agentic_: allow a language model to interact with its environment
The LangChain framework is designed around these principles.
This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see [here](https://docs.langchain.com/docs/). For the JavaScript documentation, see [here](https://js.langchain.com/docs/).
## Getting Started [\#](\#getting-started "Permalink to this headline")
How to get started using LangChain to create a Language Model application.
- [Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html)
Concepts and terminology.
- [Concepts and terminology](https://python.langchain.com/en/latest/getting_started/concepts.html)
Tutorials created by community experts and presented on YouTube.
- [Tutorials](https://python.langchain.com/en/latest/getting_started/tutorials.html)
## Modules [\#](\#modules "Permalink to this headline")
These modules are the core abstractions which we view as the building blocks of any LLM-powered application. For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use. The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.
The modules are (from least to most complex):
- [Models](https://python.langchain.com/en/latest/modules/models.html): Supported model types and integrations.
- [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization.
- [Memory](https://python.langchain.com/en/latest/modules/memory.html): Memory refers to state that is persisted between calls of a chain/agent.
- [Indexes](https://python.langchain.com/en/latest/modules/indexes.html): Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data.
- [Chains](https://python.langchain.com/en/latest/modules/chains.html): Chains are structured sequences of calls (to an LLM or to a different utility).
- [Agents](https://python.langchain.com/en/latest/modules/agents.html): An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete.
- [Callbacks](https://python.langchain.com/en/latest/modules/callbacks/getting_started.html): Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application.
## Use Cases [\#](\#use-cases "Permalink to this headline")
Best practices and built-in implementations for common LangChain use cases:
- [Autonomous Agents](https://python.langchain.com/en/latest/use_cases/autonomous_agents.html): Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.
- [Agent Simulations](https://python.langchain.com/en/latest/use_cases/agent_simulations.html): Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.
- [Personal Assistants](https://python.langchain.com/en/latest/use_cases/personal_assistants.html): One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
- [Question Answering](https://python.langchain.com/en/latest/use_cases/question_answering.html): Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
- [Chatbots](https://python.langchain.com/en/latest/use_cases/chatbots.html): Language models love to chat, making this a very natural use of them.
- [Querying Tabular Data](https://python.langchain.com/en/latest/use_cases/tabular.html): Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc).
- [Code Understanding](https://python.langchain.com/en/latest/use_cases/code.html): Recommended reading if you want to use language models to analyze code.
- [Interacting with APIs](https://python.langchain.com/en/latest/use_cases/apis.html): Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions.
- [Extraction](https://python.langchain.com/en/latest/use_cases/extraction.html): Extract structured information from text.
- [Summarization](https://python.langchain.com/en/latest/use_cases/summarization.html): Compressing longer documents. A type of Data-Augmented Generation.
- [Evaluation](https://python.langchain.com/en/latest/use_cases/evaluation.html): Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation.
## Reference Docs [\#](\#reference-docs "Permalink to this headline")
Full documentation on all methods, classes, installation methods, and integration setups for LangChain.
- [Reference Documentation](https://python.langchain.com/en/latest/reference.html)
## LangChain Ecosystem [\#](\#langchain-ecosystem "Permalink to this headline")
Guides for how other companies/products can be used with LangChain.
- [LangChain Ecosystem](https://python.langchain.com/en/latest/ecosystem.html)
## Additional Resources [\#](\#additional-resources "Permalink to this headline")
Additional resources we think may be useful as you develop your application!
- [LangChainHub](https://github.com/hwchase17/langchain-hub): The LangChainHub is a place to share and explore other prompts, chains, and agents.
- [Gallery](https://python.langchain.com/en/latest/additional_resources/gallery.html): A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
- [Deployments](https://python.langchain.com/en/latest/additional_resources/deployments.html): A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
- [Tracing](https://python.langchain.com/en/latest/additional_resources/tracing.html): A guide on using tracing in LangChain to visualize the execution of chains and agents.
- [Model Laboratory](https://python.langchain.com/en/latest/additional_resources/model_laboratory.html): Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
- [Discord](https://discord.gg/6adMQxSpJS): Join us on our Discord to discuss all things LangChain!
- [YouTube](https://python.langchain.com/en/latest/additional_resources/youtube.html): A collection of the LangChain tutorials and videos.
- [Production Support](https://forms.gle/57d8AmXBYp8PP8tZA): As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.
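Since the loader returns the whole page as a single markdown Document, a natural follow-up is splitting it along its markdown structure before indexing. A minimal sketch using LangChain's markdown-aware splitter; the chunk size is an arbitrary assumption:
from langchain.text_splitter import MarkdownTextSplitter
splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
md_chunks = splitter.split_documents(docs)  # prefers to split on headings and other markdown boundaries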
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html
Joplin#
Joplin is an open source note-taking app. Capture your thoughts and securely access them from any device.
This notebook covers how to load documents from a Joplin database. Joplin has a REST API for accessing its local database. This loader uses the API to retrieve all notes in the database and their metadata. This requires an access token that can be obtained from the app by following these steps:
Open the Joplin app. The app must stay open while the documents are being loaded.
Go to settings / options and select "Web Clipper".
Make sure that the Web Clipper service is enabled.
Under "Advanced Options", copy the authorization token.
You may either initialize the loader directly with the access token, or store it in the environment variable JOPLIN_ACCESS_TOKEN.
An alternative to this approach is to export Joplin's note database to Markdown files (optionally, with Front Matter metadata) and use a Markdown loader, such as ObsidianLoader, to load them.
from langchain.document_loaders import JoplinLoader
loader = JoplinLoader(access_token="<access-token>")
docs = loader.load()
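As noted above, the access token can also be supplied via the environment instead of being passed explicitly. A minimal sketch, assuming the loader falls back to JOPLIN_ACCESS_TOKEN as described:
import os
os.environ["JOPLIN_ACCESS_TOKEN"] = "<access-token>"  # normally set outside the script
loader = JoplinLoader()  # no explicit token; read from the environment variable
docs = loader.load()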
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/joplin.html
CoNLL-U#
CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
Word lines containing the annotation of a word/token in 10 fields separated by single tab characters; a small illustration of the format follows the example below.
Blank lines marking sentence boundaries.
Comment lines starting with hash (#).
This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples.
from langchain.document_loaders import CoNLLULoader
loader = CoNLLULoader("example_data/conllu.conllu")
document = loader.load()
document
[Document(page_content='They buy and sell books.', metadata={'source': 'example_data/conllu.conllu'})]
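To make the 10-field word lines concrete, here is a hypothetical fragment (not the bundled example file). The fields are ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC, separated by single tabs (shown here as aligned columns):
# text = They buy books.
1	They	they	PRON	PRP	Case=Nom|Number=Plur	2	nsubj	_	_
2	buy	buy	VERB	VBP	Mood=Ind|Tense=Pres|VerbForm=Fin	0	root	_	_
3	books	book	NOUN	NNS	Number=Plur	2	obj	_	_
4	.	.	PUNCT	.	_	2	punct	_	_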
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/conll-u.html
Image captions#
By default, the loader utilizes the pre-trained Salesforce BLIP image captioning model. This notebook shows how to use the ImageCaptionLoader to generate a query-able index of image captions.
#!pip install transformers
from langchain.document_loaders import ImageCaptionLoader
Prepare a list of image urls from Wikimedia#
list_image_urls = [
    'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg',
    'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg',
    'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg',
    'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg',
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
1376952abb75-1
'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg',
'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg',
'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg',
]
Create the loader#
loader = ImageCaptionLoader(path_images=list_image_urls)
list_docs = loader.load()
list_docs
/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
1376952abb75-2
warnings.warn( [Document(page_content='an image of a frog on a flower [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Hyla_japonica_sep01.jpg/260px-Hyla_japonica_sep01.jpg'}), Document(page_content='an image of a shark swimming in the ocean [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg/270px-Tibur%C3%B3n_azul_%28Prionace_glauca%29%2C_canal_Fayal-Pico%2C_islas_Azores%2C_Portugal%2C_2020-07-27%2C_DD_14.jpg'}), Document(page_content='an image of a painting of a battle scene [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg/251px-Thure_de_Thulstrup_-_Battle_of_Shiloh.jpg'}), Document(page_content='an image of a passion fruit and a half cut passion [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Passion_fruits_-_whole_and_halved.jpg/270px-Passion_fruits_-_whole_and_halved.jpg'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
1376952abb75-3
Document(page_content='an image of the spiral galaxy [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Messier83_-_Heic1403a.jpg/277px-Messier83_-_Heic1403a.jpg'}), Document(page_content='an image of a man on skis in the snow [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/b/b6/2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg/288px-2022-01-22_Men%27s_World_Cup_at_2021-22_St._Moritz%E2%80%93Celerina_Luge_World_Cup_and_European_Championships_by_Sandro_Halank%E2%80%93257.jpg'}), Document(page_content='an image of a flower in the dark [SEP]', metadata={'image_path': 'https://upload.wikimedia.org/wikipedia/commons/thumb/9/99/Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg/224px-Wiesen_Pippau_%28Crepis_biennis%29-20220624-RM-123950.jpg'})] from PIL import Image import requests Image.open(requests.get(list_image_urls[0], stream=True).raw).convert('RGB') Create the index# from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator().from_loaders([loader])
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
1376952abb75-4
/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
/Users/saitosean/dev/langchain/.venv/lib/python3.10/site-packages/transformers/generation/utils.py:1313: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
Using embedded DuckDB without persistence: data will be transient
Query#
query = "What's the painting about?"
index.query(query)
' The painting is about a battle scene.'
query = "What kind of images are there?"
index.query(query)
' There are images of a spiral galaxy, a painting of a battle scene, a flower in the dark, and a frog on a flower.'
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image_captions.html
6771c37ff666-0
Jupyter Notebook#
Jupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents.
This notebook covers how to load data from a Jupyter notebook (.ipynb) into a format suitable for LangChain.
from langchain.document_loaders import NotebookLoader
loader = NotebookLoader("example_data/notebook.ipynb", include_outputs=True, max_output_length=20, remove_newline=True)
NotebookLoader.load() loads the .ipynb notebook file into a Document object.
Parameters:
include_outputs (bool): whether to include cell outputs in the resulting document (default is False).
max_output_length (int): the maximum number of characters to include from each cell output (default is 10).
remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/jupyter_notebook.html
6771c37ff666-1
traceback (bool): whether to include full traceback (default is False).
loader.load()
[Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.ipynb")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', metadata={'source': 'example_data/notebook.ipynb'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/jupyter_notebook.html
ce9ac5ae2bef-0
Hacker News#
Hacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as “anything that gratifies one’s intellectual curiosity.”
This notebook covers how to pull page data and comments from Hacker News.
from langchain.document_loaders import HNLoader
loader = HNLoader("https://news.ycombinator.com/item?id=34817881")
data = loader.load()
data[0].page_content[:300]
"delta_p_delta_x 73 days ago \n | next [–] \n\nAstrophysical and cosmological simulations are often insightful. They're also very cross-disciplinary; besides the obvious astrophysics, there's networking and sysadmin, parallel computing and algorithm theory (so that the simulation programs a"
data[0].metadata
{'source': 'https://news.ycombinator.com/item?id=34817881',
'title': 'What Lights the Universe’s Standard Candles?'}
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/hacker_news.html
ec8b1b465796-0
Markdown#
Markdown is a lightweight markup language for creating formatted text using a plain-text editor.
This covers how to load markdown documents into a document format that we can use downstream.
# !pip install unstructured > /dev/null
from langchain.document_loaders import UnstructuredMarkdownLoader
markdown_path = "../../../../../README.md"
loader = UnstructuredMarkdownLoader(markdown_path)
data = loader.load()
data
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
ec8b1b465796-1
[Document(page_content="ð\x9f¦\x9cï¸\x8fð\x9f”\x97 LangChain\n\nâ\x9a¡ Building applications with LLMs through composability â\x9a¡\n\nLooking for the JS/TS version? Check out LangChain.js.\n\nProduction Support: As you move your LangChains into production, we'd love to offer more comprehensive support.\nPlease fill out this form and we'll set up a dedicated support Slack channel.\n\nQuick Install\n\npip install langchain\nor\nconda install langchain -c conda-forge\n\nð\x9f¤” What is this?\n\nLarge language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.\n\nThis library aims to assist in the development of those types of applications. Common examples of these applications include:\n\nâ\x9d“ Question Answering over specific documents\n\nDocumentation\n\nEnd-to-end Example: Question Answering over Notion
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
ec8b1b465796-2
Example: Question Answering over Notion Database\n\nð\x9f’¬ Chatbots\n\nDocumentation\n\nEnd-to-end Example: Chat-LangChain\n\nð\x9f¤\x96 Agents\n\nDocumentation\n\nEnd-to-end Example: GPT+WolframAlpha\n\nð\x9f“\x96 Documentation\n\nPlease see here for full documentation on:\n\nGetting started (installation, setting up the environment, simple examples)\n\nHow-To examples (demos, integrations, helper functions)\n\nReference (full API docs)\n\nResources (high-level explanation of core concepts)\n\nð\x9f\x9a\x80 What can this help with?\n\nThere are six main areas that LangChain is designed to help with.\nThese are, in increasing order of complexity:\n\nð\x9f“\x83 LLMs and Prompts:\n\nThis includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.\n\nð\x9f”\x97 Chains:\n\nChains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
ec8b1b465796-3
for chains, lots of integrations with other tools, and end-to-end chains for common applications.\n\nð\x9f“\x9a Data Augmented Generation:\n\nData Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.\n\nð\x9f¤\x96 Agents:\n\nAgents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.\n\nð\x9f§\xa0 Memory:\n\nMemory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\n\nð\x9f§\x90 Evaluation:\n\n[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
ec8b1b465796-4
One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\n\nFor more information on these concepts, please see our full documentation.\n\nð\x9f’\x81 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see here.", metadata={'source': '../../../../../README.md'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
ec8b1b465796-5
Retain Elements#
Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredMarkdownLoader(markdown_path, mode="elements")
data = loader.load()
data[0]
Document(page_content='ð\x9f¦\x9cï¸\x8fð\x9f”\x97 LangChain', metadata={'source': '../../../../../README.md', 'page_number': 1, 'category': 'Title'})
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html
2dd1cc98706c-0
Subtitle#
The SubRip file format is described on the Matroska multimedia container format website as “perhaps the most basic of all subtitle formats.” SubRip (SubRip Text) files are named with the extension .srt, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequentially, starting at 1. The timecode format used is hours:minutes:seconds,milliseconds with time units fixed to two zero-padded digits and fractions fixed to three zero-padded digits (00:00:00,000). The fractional separator used is the comma, since the program was written in France.
How to load data from subtitle (.srt) files
Please download the example .srt file from here.
!pip install pysrt
from langchain.document_loaders import SRTLoader
loader = SRTLoader("example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt")
docs = loader.load()
docs[0].page_content[:100]
'<i>Corruption discovered\nat the core of the Banking Clan!</i> <i>Reunited, Rush Clovis\nand Senator A'
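For reference, a minimal made-up .srt fragment illustrates the structure the loader parses: a sequence number, a timecode line in the format described above, the subtitle text, and a blank line between entries (the lines below are hypothetical, not from the example file):
1
00:00:01,000 --> 00:00:03,500
<i>An example subtitle line.</i>

2
00:00:04,000 --> 00:00:06,250
A second subtitle,
split across two lines.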
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/subtitle.html
9cbf125a026f-0
Apify Dataset#
Apify Dataset is a scalable append-only storage with sequential access, built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors, serverless cloud programs for various web scraping, crawling, and data extraction use cases.
This notebook shows how to load Apify datasets to LangChain.
Prerequisites#
You need to have an existing dataset on the Apify platform. If you don’t have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs.
#!pip install apify-client
First, import ApifyDatasetLoader into your source code:
from langchain.document_loaders import ApifyDatasetLoader
from langchain.document_loaders.base import Document
Then provide a function that maps Apify dataset record fields to LangChain Document format.
For example, if your dataset items are structured like this:
{
"url": "https://apify.com",
"text": "Apify is the best web scraping and automation platform."
}
The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering).
loader = ApifyDatasetLoader(
dataset_id="your-dataset-id",
dataset_mapping_function=lambda dataset_item: Document(
page_content=dataset_item["text"], metadata={"source": dataset_item["url"]}
),
)
data = loader.load()
An example with question answering#
In this example, we use data from a dataset to answer a question.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html
9cbf125a026f-1
from langchain.docstore.document import Document
from langchain.document_loaders import ApifyDatasetLoader
from langchain.indexes import VectorstoreIndexCreator
loader = ApifyDatasetLoader(
dataset_id="your-dataset-id",
dataset_mapping_function=lambda item: Document(
page_content=item["text"] or "", metadata={"source": item["url"]}
),
)
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What is Apify?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform.
https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html
5d34c6ee3f80-0
MediaWikiDump#
MediaWiki XML Dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup of the wiki database; the dump does not contain user accounts, images, edit logs, etc.
This covers how to load a MediaWiki XML dump file into a document format that we can use downstream. It uses mwxml from mediawiki-utilities to read the dump and mwparserfromhell from earwig to parse MediaWiki wikicode.
Dump files can be obtained with dumpBackup.php or on the Special:Statistics page of the Wiki.
#mediawiki-utilities supports XML schema 0.11 in unmerged branches
!pip install -qU git+https://github.com/mediawiki-utilities/python-mwtypes@updates_schema_0.11
#mediawiki-utilities mwxml has a bug, fix PR pending
!pip install -qU git+https://github.com/gdedrouas/python-mwxml@xml_format_0.11
!pip install -qU mwparserfromhell
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader("example_data/testmw_pages_current.xml", encoding="utf8")
documents = loader.load()
print (f'You have {len(documents)} document(s) in your data ')
You have 177 document(s) in your data
documents[:5]
[Document(page_content='\t\n\t\n\tArtist\n\tReleased\n\tRecorded\n\tLength\n\tLabel\n\tProducer', metadata={'source': 'Album'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html
5d34c6ee3f80-1
Document(page_content='{| class="article-table plainlinks" style="width:100%;"\n|- style="font-size:18px;"\n! style="padding:0px;" | Template documentation\n|-\n| Note: portions of the template sample may not be visible without values provided.\n|-\n| View or edit this documentation. (About template documentation)\n|-\n| Editors can experiment in this template\'s [ sandbox] and [ test case] pages.\n|}Category:Documentation templates', metadata={'source': 'Documentation'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html
5d34c6ee3f80-2
Document(page_content='Description\nThis template is used to insert descriptions on template pages.\n\nSyntax\nAdd <noinclude></noinclude> at the end of the template page.\n\nAdd <noinclude></noinclude> to transclude an alternative page from the /doc subpage.\n\nUsage\n\nOn the Template page\nThis is the normal format when used:\n\nTEMPLATE CODE\n<includeonly>Any categories to be inserted into articles by the template</includeonly>\n<noinclude>{{Documentation}}</noinclude>\n\nIf your template is not a completed div or table, you may need to close the tags just before {{Documentation}} is inserted (within the noinclude tags).\n\nA line break right before {{Documentation}} can also be useful as it helps prevent the documentation template "running into" previous code.\n\nOn the documentation page\nThe documentation page is usually located on the /doc subpage for a template, but a different page can be specified with the first parameter of the template (see Syntax).\n\nNormally, you will want to write something like the following on the documentation page:\n\n==Description==\nThis template is used to do something.\n\n==Syntax==\nType
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html
5d34c6ee3f80-3
template is used to do something.\n\n==Syntax==\nType <code>{{t|templatename}}</code> somewhere.\n\n==Samples==\n<code><nowiki>{{templatename|input}}</nowiki></code> \n\nresults in...\n\n{{templatename|input}}\n\n<includeonly>Any categories for the template itself</includeonly>\n<noinclude>[[Category:Template documentation]]</noinclude>\n\nUse any or all of the above description/syntax/sample output sections. You may also want to add "see also" or other sections.\n\nNote that the above example also uses the Template:T template.\n\nCategory:Documentation templatesCategory:Template documentation', metadata={'source': 'Documentation/doc'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html
5d34c6ee3f80-4
Document(page_content='Description\nA template link with a variable number of parameters (0-20).\n\nSyntax\n \n\nSource\nImproved version not needing t/piece subtemplate developed on Templates wiki see the list of authors. Copied here via CC-By-SA 3.0 license.\n\nExample\n\nCategory:General wiki templates\nCategory:Template documentation', metadata={'source': 'T/doc'}),
Document(page_content='\t\n\t\t \n\t\n\t\t Aliases\n\t Relatives\n\t Affiliation\n Occupation\n \n Biographical information\n Marital status\n \tDate of birth\n Place of birth\n Date of death\n Place of death\n \n Physical description\n Species\n Gender\n Height\n Weight\n Eye color\n\t\n Appearances\n Portrayed by\n Appears in\n Debut\n ', metadata={'source': 'Character'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/mediawikidump.html
e59f43d0a2a6-0
Facebook Chat#
Messenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the service was revamped as the company's messaging platform in 2010.
This notebook covers how to load data from the Facebook Chats into a format that can be ingested into LangChain.
#pip install pandas
from langchain.document_loaders import FacebookChatLoader
loader = FacebookChatLoader("example_data/facebook_chat.json")
loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/facebook_chat.html
e59f43d0a2a6-1
[Document(page_content='User 2 on 2023-02-05 03:46:11: Bye!\n\nUser 1 on 2023-02-05 03:43:55: Oh no worries! Bye\n\nUser 2 on 2023-02-05 03:24:37: No Im sorry it was my mistake, the blue one is not for sale\n\nUser 1 on 2023-02-05 03:05:40: I thought you were selling the blue one!\n\nUser 1 on 2023-02-05 03:05:09: Im not interested in this bag. Im interested in the blue one!\n\nUser 2 on 2023-02-05 03:04:28: Here is $129\n\nUser 2 on 2023-02-05 03:04:05: Online is at least $100\n\nUser 1 on 2023-02-05 02:59:59: How much do you want?\n\nUser 2 on 2023-02-04 22:17:56: Goodmorning! $50 is too low.\n\nUser 1 on 2023-02-04 14:17:02: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n\n', metadata={'source': 'example_data/facebook_chat.json'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/facebook_chat.html
68d146fb4229-0
GitBook#
GitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.
This notebook shows how to pull page data from any GitBook.
from langchain.document_loaders import GitbookLoader
Load from single GitBook page#
loader = GitbookLoader("https://docs.gitbook.com")
page_data = loader.load()
page_data
[Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)]
Load from all paths in a given GitBook#
For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html
68d146fb4229-1
loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True) all_pages_data = loader.load() Fetching text from https://docs.gitbook.com/ Fetching text from https://docs.gitbook.com/getting-started/overview Fetching text from https://docs.gitbook.com/getting-started/import Fetching text from https://docs.gitbook.com/getting-started/git-sync Fetching text from https://docs.gitbook.com/getting-started/content-structure Fetching text from https://docs.gitbook.com/getting-started/collaboration Fetching text from https://docs.gitbook.com/getting-started/publishing Fetching text from https://docs.gitbook.com/tour/quick-find Fetching text from https://docs.gitbook.com/tour/editor Fetching text from https://docs.gitbook.com/tour/customization Fetching text from https://docs.gitbook.com/tour/member-management Fetching text from https://docs.gitbook.com/tour/pdf-export Fetching text from https://docs.gitbook.com/tour/activity-history Fetching text from https://docs.gitbook.com/tour/insights Fetching text from https://docs.gitbook.com/tour/notifications Fetching text from https://docs.gitbook.com/tour/internationalization Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts Fetching text from https://docs.gitbook.com/tour/seo Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security Fetching text from https://docs.gitbook.com/advanced-guides/integrations Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings Fetching text from https://docs.gitbook.com/billing-and-admin/plans Fetching text from https://docs.gitbook.com/troubleshooting/faqs
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html
68d146fb4229-2
Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh
Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs
Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues
Fetching text from https://docs.gitbook.com/troubleshooting/support
print(f"fetched {len(all_pages_data)} documents.")
# show second document
all_pages_data[2]
fetched 28 documents.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html
68d146fb4229-3
Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html
68d146fb4229-4
in the page action menu, found in the table of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html
6386c6f280fa-0
Blockchain#
Overview#
The intention of this notebook is to provide a means of testing functionality in the Langchain Document Loader for Blockchain.
Initially this Loader supports:
Loading NFTs as Documents from NFT Smart Contracts (ERC721 and ERC1155)
Ethereum Mainnet, Ethereum Testnet, Polygon Mainnet, Polygon Testnet (default is eth-mainnet)
Alchemy’s getNFTsForCollection API
It can be extended if the community finds value in this loader. Specifically:
Additional APIs can be added (e.g. Transaction-related APIs)
This Document Loader Requires:
A free Alchemy API Key
The output takes the following format:
pageContent= Individual NFT
metadata={‘source’: ‘0x1a92f7381b9f03921564a437210bb9396471050c’, ‘blockchain’: ‘eth-mainnet’, ‘tokenId’: ‘0x15’})
Load NFTs into Document Loader#
# get ALCHEMY_API_KEY from https://www.alchemy.com/
alchemyApiKey = "..."
Option 1: Ethereum Mainnet (default BlockchainType)#
from langchain.document_loaders.blockchain import BlockchainDocumentLoader, BlockchainType
contractAddress = "0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d" # Bored Ape Yacht Club contract address
blockchainType = BlockchainType.ETH_MAINNET #default value, optional parameter
blockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress, api_key=alchemyApiKey)
nfts = blockchainLoader.load()
nfts[:2]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blockchain.html
6386c6f280fa-1
Option 2: Polygon Mainnet#
contractAddress = "0x448676ffCd0aDf2D85C1f0565e8dde6924A9A7D9" # Polygon Mainnet contract address
blockchainType = BlockchainType.POLYGON_MAINNET
blockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress, blockchainType=blockchainType, api_key=alchemyApiKey)
nfts = blockchainLoader.load()
nfts[:2]
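Loading from one of the supported testnets follows the same pattern; the sketch below assumes the BlockchainType enum exposes testnet members such as ETH_GOERLI (check the enum in your installed version), and the contract address is a hypothetical placeholder:
contractAddress = "<your-testnet-contract-address>"  # hypothetical placeholder
blockchainType = BlockchainType.ETH_GOERLI  # assumed testnet member of the enum
blockchainLoader = BlockchainDocumentLoader(contract_address=contractAddress, blockchainType=blockchainType, api_key=alchemyApiKey)
nfts = blockchainLoader.load()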
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/blockchain.html
faab94d3e665-0
Spreedly#
Spreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.
This notebook covers how to load data from the Spreedly REST API into a format that can be ingested into LangChain, along with example usage for vectorization.
Note: this notebook assumes the following packages are installed: openai, chromadb, and tiktoken.
import os
from langchain.document_loaders import SpreedlyLoader
from langchain.indexes import VectorstoreIndexCreator
The Spreedly API requires an access token, which can be found inside the Spreedly Admin Console.
This document loader does not currently support pagination, nor access to more complex objects which require additional parameters. It also requires a resource option which defines what objects you want to load. The following resources are available:
gateways_options: Documentation
gateways: Documentation
receivers_options: Documentation
receivers: Documentation
payment_methods: Documentation
certificates: Documentation
transactions: Documentation
environments: Documentation
spreedly_loader = SpreedlyLoader(os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options")
# Create a vectorstore retriever from the loader
# see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details
index = VectorstoreIndexCreator().from_loaders([spreedly_loader])
spreedly_doc_retriever = index.vectorstore.as_retriever()
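Any of the other resources listed above can be loaded the same way by changing the resource option; for example, a sketch for transactions (assuming your access token has the corresponding API permissions):
# Same pattern with a different resource from the list above.
transaction_loader = SpreedlyLoader(os.environ["SPREEDLY_ACCESS_TOKEN"], "transactions")
transaction_docs = transaction_loader.load()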
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
faab94d3e665-1
Using embedded DuckDB without persistence: data will be transient # Test the retriever spreedly_doc_retriever.get_relevant_documents("CRC") [Document(page_content='installment_grace_period_duration\nreference_data_code\ninvoice_number\ntax_management_indicator\noriginal_amount\ninvoice_amount\nvat_tax_rate\nmobile_remote_payment_type\ngratuity_amount\nmdd_field_1\nmdd_field_2\nmdd_field_3\nmdd_field_4\nmdd_field_5\nmdd_field_6\nmdd_field_7\nmdd_field_8\nmdd_field_9\nmdd_field_10\nmdd_field_11\nmdd_field_12\nmdd_field_13\nmdd_field_14\nmdd_field_15\nmdd_field_16\nmdd_field_17\nmdd_field_18\nmdd_field_19\nmdd_field_20\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\ndankort\nmaestro\nelo\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://ics2wsa.ic3.com/commerce/1.x/transactionProcessor\ncompany_name: CyberSource', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
faab94d3e665-2
Document(page_content='BG\nBH\nBI\nBJ\nBM\nBN\nBO\nBR\nBS\nBT\nBW\nBY\nBZ\nCA\nCC\nCF\nCH\nCK\nCL\nCM\nCN\nCO\nCR\nCV\nCX\nCY\nCZ\nDE\nDJ\nDK\nDO\nDZ\nEC\nEE\nEG\nEH\nES\nET\nFI\nFJ\nFK\nFM\nFO\nFR\nGA\nGB\nGD\nGE\nGF\nGG\nGH\nGI\nGL\nGM\nGN\nGP\nGQ\nGR\nGT\nGU\nGW\nGY\nHK\nHM\nHN\nHR\nHT\nHU\nID\nIE\nIL\nIM\nIN\nIO\nIS\nIT\nJE\nJM\nJO\nJP\nKE\nKG\nKH\nKI\nKM\nKN\nKR\nKW\nKY\nKZ\nLA\nLC\nLI\nLK\nL
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
faab94d3e665-3
Z\nLA\nLC\nLI\nLK\nLS\nLT\nLU\nLV\nMA\nMC\nMD\nME\nMG\nMH\nMK\nML\nMN\nMO\nMP\nMQ\nMR\nMS\nMT\nMU\nMV\nMW\nMX\nMY\nMZ\nNA\nNC\nNE\nNF\nNG\nNI\nNL\nNO\nNP\nNR\nNU\nNZ\nOM\nPA\nPE\nPF\nPH\nPK\nPL\nPN\nPR\nPT\nPW\nPY\nQA\nRE\nRO\nRS\nRU\nRW\nSA\nSB\nSC\nSE\nSG\nSI\nSK\nSL\nSM\nSN\nST\nSV\nSZ\nTC\nTD\nTF\nTG\nTH\nTJ\nTK\nTM\nTO\nTR\nTT\nTV\nTW\nTZ\nUA\nUG\nUS\nUY\nUZ\nVA\nVC\nVE\nVI\nVN\nVU\nWF\nWS\nY
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
faab94d3e665-4
I\nVN\nVU\nWF\nWS\nYE\nYT\nZA\nZM\nsupported_cardtypes:
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
faab94d3e665-5
visa\nmaster\namerican_express\ndiscover\njcb\nmaestro\nelo\nnaranja\ncabal\nunionpay\nregions: asia_pacific\neurope\nmiddle_east\nnorth_america\nhomepage: http://worldpay.com\ndisplay_api_url: https://secure.worldpay.com/jsp/merchant/xml/paymentService.jsp\ncompany_name: WorldPay', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
faab94d3e665-6
Document(page_content='gateway_specific_fields: receipt_email\nradar_session_id\nskip_radar_rules\napplication_fee\nstripe_account\nmetadata\nidempotency_key\nreason\nrefund_application_fee\nrefund_fee_amount\nreverse_transfer\naccount_id\ncustomer_id\nvalidate\nmake_default\ncancellation_reason\ncapture_method\nconfirm\nconfirmation_method\ncustomer\ndescription\nmoto\noff_session\non_behalf_of\npayment_method_types\nreturn_email\nreturn_url\nsave_payment_method\nsetup_future_usage\nstatement_descriptor\nstatement_descriptor_suffix\ntransfer_amount\ntransfer_destination\ntransfer_group\napplication_fee_amount\nrequest_three_d_secure\nerror_on_requires_action\nnetwork_transaction_id\nclaim_without_transaction_id\nfulfillment_date\nevent_type\nmodal_challenge\nidempotent_request\nmerchant_reference\ncustomer_reference\nshipping_address_zip\nshipping_from_zip\nshipping_amount\nline_items\nsupported_countries: AE\nAT\nAU\nBE\nBG\nBR\nCA\nCH\nCY\nCZ\nDE\nDK\nEE\nES\nFI\nFR\nGB\nGR\nHK\nHU\nIE\nIN\nIT\nJP\nLT\nLU\nLV\nMT\nMX\nMY\nNL\nNO\nNZ\nPL\nPT\nRO\nSE\nSG\nSI\nSK\nUS\nsupported_cardtypes: visa', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
faab94d3e665-7
Document(page_content='mdd_field_57\nmdd_field_58\nmdd_field_59\nmdd_field_60\nmdd_field_61\nmdd_field_62\nmdd_field_63\nmdd_field_64\nmdd_field_65\nmdd_field_66\nmdd_field_67\nmdd_field_68\nmdd_field_69\nmdd_field_70\nmdd_field_71\nmdd_field_72\nmdd_field_73\nmdd_field_74\nmdd_field_75\nmdd_field_76\nmdd_field_77\nmdd_field_78\nmdd_field_79\nmdd_field_80\nmdd_field_81\nmdd_field_82\nmdd_field_83\nmdd_field_84\nmdd_field_85\nmdd_field_86\nmdd_field_87\nmdd_field_88\nmdd_field_89\nmdd_field_90\nmdd_field_91\nmdd_field_92\nmdd_field_93\nmdd_field_94\nmdd_field_95\nmdd_field_96\nmdd_field_97\nmdd_field_98\nmdd_field_99\nmdd_field_100\nsupported_countries: US\nAE\nBR\nCA\nCN\nDK\nFI\nFR\nDE\nIN\nJP\nMX\nNO\nSE\nGB\nSG\nLB\nPK\nsupported_cardtypes: visa\nmaster\namerican_express\ndiscover\ndiners_club\njcb\nmaestro\nelo\nunion_pay\ncartes_bancaires\nmada\nregions: asia_pacific\neurope\nlatin_america\nnorth_america\nhomepage: http://www.cybersource.com\ndisplay_api_url: https://api.cybersource.com\ncompany_name:
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
faab94d3e665-8
CyberSource REST', metadata={'source': 'https://core.spreedly.com/v1/gateways_options.json'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/spreedly.html
87e202521ccb-0
Google BigQuery#
Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data. BigQuery is a part of the Google Cloud Platform.
Load a BigQuery query with one document per row.
#!pip install google-cloud-bigquery
from langchain.document_loaders import BigQueryLoader
BASE_QUERY = '''
SELECT
id,
dna_sequence,
organism
FROM (
SELECT
ARRAY (
SELECT
AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism
UNION ALL
SELECT
AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism
UNION ALL
SELECT
AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array),
UNNEST(new_array)
'''
Basic Usage#
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_bigquery.html
87e202521ccb-1
loader = BigQueryLoader(BASE_QUERY) data = loader.load() print(data) [Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={}, lookup_index=0)] Specifying Which Columns are Content vs Metadata# loader = BigQueryLoader(BASE_QUERY, page_content_columns=["dna_sequence", "organism"], metadata_columns=["id"]) data = loader.load() print(data) [Document(page_content='dna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).', lookup_str='', metadata={'id': 1}, lookup_index=0), Document(page_content='dna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).', lookup_str='', metadata={'id': 2}, lookup_index=0), Document(page_content='dna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).', lookup_str='', metadata={'id': 3}, lookup_index=0)] Adding Source to Metadata# # Note that the `id` column is being returned twice, with one instance aliased as `source` ALIASED_QUERY = ''' SELECT id, dna_sequence, organism, id as source FROM ( SELECT ARRAY ( SELECT
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_bigquery.html
87e202521ccb-2
AS STRUCT 1 AS id, "ATTCGA" AS dna_sequence, "Lokiarchaeum sp. (strain GC14_75)." AS organism
UNION ALL
SELECT
AS STRUCT 2 AS id, "AGGCGA" AS dna_sequence, "Heimdallarchaeota archaeon (strain LC_2)." AS organism
UNION ALL
SELECT
AS STRUCT 3 AS id, "TCCGGA" AS dna_sequence, "Acidianus hospitalis (strain W1)." AS organism) AS new_array),
UNNEST(new_array)
'''
loader = BigQueryLoader(ALIASED_QUERY, metadata_columns=["source"])
data = loader.load()
print(data)
[Document(page_content='id: 1\ndna_sequence: ATTCGA\norganism: Lokiarchaeum sp. (strain GC14_75).\nsource: 1', lookup_str='', metadata={'source': 1}, lookup_index=0), Document(page_content='id: 2\ndna_sequence: AGGCGA\norganism: Heimdallarchaeota archaeon (strain LC_2).\nsource: 2', lookup_str='', metadata={'source': 2}, lookup_index=0), Document(page_content='id: 3\ndna_sequence: TCCGGA\norganism: Acidianus hospitalis (strain W1).\nsource: 3', lookup_str='', metadata={'source': 3}, lookup_index=0)]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_bigquery.html
27a3b135e320-0
Pandas DataFrame#
This notebook goes over how to load data from a pandas DataFrame.
#!pip install pandas
import pandas as pd
df = pd.read_csv('example_data/mlb_teams_2012.csv')
df.head()
   Team  "Payroll (millions)"  "Wins"
0  Nationals  81.34  98
1  Reds  82.20  97
2  Yankees  197.96  95
3  Giants  117.62  94
4  Braves  83.31  94
from langchain.document_loaders import DataFrameLoader
loader = DataFrameLoader(df, page_content_column="Team")
loader.load()
[Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}),
Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}),
Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}),
Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}),
Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}),
Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}),
Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html
27a3b135e320-1
Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}), Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}),
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html
27a3b135e320-2
Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}),
Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}),
Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}),
Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}),
Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}),
Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}),
Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}),
Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}),
Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}),
Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}),
Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pandas_dataframe.html
0fe613457eb5-0
BiliBili#
Bilibili is one of the most beloved long-form video sites in China.
This loader utilizes the bilibili-api to fetch the text transcript from Bilibili. With this BiliBiliLoader, users can easily obtain the transcript of their desired video content on the platform.
#!pip install bilibili-api-python
from langchain.document_loaders import BiliBiliLoader
loader = BiliBiliLoader(
["https://www.bilibili.com/video/BV1xt411o7Xu/"]
)
loader.load()
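The call returns a list of Document objects, so the transcript text and video metadata can be inspected directly (a sketch; the exact metadata keys depend on what the bilibili-api returns for the video):
docs = loader.load()
print(docs[0].page_content[:100])  # first 100 characters of the transcript
print(docs[0].metadata)            # video metadata from the API response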
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/bilibili.html
c2ef49b15257-0
Arxiv#
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
This notebook shows how to load scientific articles from Arxiv.org into a document format that we can use downstream.
Installation#
First, you need to install the arxiv python package.
#!pip install arxiv
Second, you need to install the PyMuPDF python package, which transforms PDF files from the arxiv.org site into text.
#!pip install pymupdf
Examples#
ArxivLoader has these arguments:
query: free text used to find documents on arXiv
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.
optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), Title, Authors, Summary. If True, other fields are also downloaded.
from langchain.document_loaders import ArxivLoader
docs = ArxivLoader(query="1605.08386", load_max_docs=2).load()
len(docs)
docs[0].metadata # meta-information of the Document
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/arxiv.html
c2ef49b15257-1
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
docs[0].page_content[:400] # all pages of the Document content
'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
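To pull the extra metadata fields mentioned above, load_all_available_meta can be switched on (a sketch; which additional keys appear depends on what arXiv returns for the paper):
docs = ArxivLoader(query="1605.08386", load_max_docs=1, load_all_available_meta=True).load()
sorted(docs[0].metadata.keys())  # now includes more than the four default fields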
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/arxiv.html
95bcf9021a9e-0
Unstructured File#
This notebook covers how to use the Unstructured package to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.
# # Install package
!pip install "unstructured[local-inference]"
!pip install "detectron2@git+https://github.com/facebookresearch/[email protected]#egg=detectron2"
!pip install layoutparser[layoutmodels,tesseract]
# # Install other dependencies
# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst
# !brew install libmagic
# !brew install poppler
# !brew install tesseract
# # If parsing xml / html documents:
# !brew install libxml2
# !brew install libxslt
# import nltk
# nltk.download('punkt')
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt")
docs = loader.load()
docs[0].page_content[:400]
'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit'
Retain Elements#
Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
95bcf9021a9e-1
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt", mode="elements")
docs = loader.load()
docs[:5]
[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]
Define a Partitioning Strategy#
The Unstructured document loader allows users to pass in a strategy parameter that lets unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the high res strategy will fall back to fast if there is a dependency missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.
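A minimal sketch of passing the strategy kwarg, shown here on the same text file used above for continuity (in practice the strategy mainly matters for PDFs and images, where hi_res and fast differ):
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt", strategy="fast", mode="elements")
docs = loader.load()
docs[:5]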
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html