# Math chain

Source: https://python.langchain.com/docs/modules/chains/additional/llm_math

This notebook showcases using LLMs and Python REPLs to do complex word math problems.

```python
from langchain import OpenAI, LLMMathChain

llm = OpenAI(temperature=0)
llm_math = LLMMathChain.from_llm(llm, verbose=True)

llm_math.run("What is 13 raised to the .3432 power?")
```

````
> Entering new LLMMathChain chain...
What is 13 raised to the .3432 power?
```text
13 ** .3432
```
...numexpr.evaluate("13 ** .3432")...
Answer: 2.4116004626599237
> Finished chain.

'Answer: 2.4116004626599237'
````
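The verbose trace shows the chain's two-step mechanism: the LLM is prompted to emit a numexpr-compatible expression inside a ```text block, and the chain then evaluates that expression with `numexpr.evaluate`. Below is a minimal sketch of the evaluation step, assuming only the behavior visible in the trace; the real prompt construction and answer parsing live inside `LLMMathChain`.

```python
import numexpr  # the same evaluator the trace above shows


def evaluate_expression(expression: str) -> str:
    """Evaluate an extracted expression the way the trace suggests the chain does.

    Illustrative sketch only; not LLMMathChain's actual implementation.
    """
    result = numexpr.evaluate(expression.strip())
    return f"Answer: {result}"


print(evaluate_expression("13 ** .3432"))  # Answer: 2.4116004626599237
```

Delegating the arithmetic to numexpr is the point of the chain: the LLM only has to translate words into an expression, not carry out the exponentiation itself.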
---

# Program-aided language model (PAL) chain

Source: https://python.langchain.com/docs/modules/chains/additional/pal

Implements Program-Aided Language Models, as in https://arxiv.org/pdf/2211.10435.pdf.

```python
from langchain.chains import PALChain
from langchain import OpenAI

llm = OpenAI(temperature=0, max_tokens=512)
```

## Math Prompt

```python
pal_chain = PALChain.from_math_prompt(llm, verbose=True)

question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"

pal_chain.run(question)
```

```
> Entering new PALChain chain...
def solution():
    """Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"""
    cindy_pets = 4
    marcia_pets = cindy_pets + 2
    jan_pets = marcia_pets * 3
    total_pets = cindy_pets + marcia_pets + jan_pets
    result = total_pets
    return result

> Finished chain.

'28'
```

## Colored Objects

```python
pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True)

question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?"

pal_chain.run(question)
```

```
> Entering new PALChain chain...
# Put objects into a list to record ordering
objects = []
objects += [('booklet', 'blue')] * 2
objects += [('booklet', 'purple')] * 2
objects += [('sunglasses', 'yellow')] * 2

# Remove all pairs of sunglasses
objects = [object for object in objects if object[0] != 'sunglasses']

# Count number of purple objects
num_purple = len([object for object in objects if object[1] == 'purple'])
answer = num_purple

> Finished PALChain chain.

'2'
```

## Intermediate Steps

You can also use the intermediate steps flag to return the code executed that generates the answer.

```python
pal_chain = PALChain.from_colored_object_prompt(
    llm, verbose=True, return_intermediate_steps=True
)

question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?"

result = pal_chain({"question": question})
```

```
> Entering new PALChain chain...
# Put objects into a list to record ordering
objects = []
objects += [('booklet', 'blue')] * 2
objects += [('booklet', 'purple')] * 2
objects += [('sunglasses', 'yellow')] * 2

# Remove all pairs of sunglasses
objects = [object for object in objects if object[0] != 'sunglasses']

# Count number of purple objects
num_purple = len([object for object in objects if object[1] == 'purple'])
answer = num_purple

> Finished chain.
```

```python
result["intermediate_steps"]
```

```
"# Put objects into a list to record ordering\nobjects = []\nobjects += [('booklet', 'blue')] * 2\nobjects += [('booklet', 'purple')] * 2\nobjects += [('sunglasses', 'yellow')] * 2\n\n# Remove all pairs of sunglasses\nobjects = [object for object in objects if object[0] != 'sunglasses']\n\n# Count number of purple objects\nnum_purple = len([object for object in objects if object[1] == 'purple'])\nanswer = num_purple"
```
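Because `intermediate_steps` hands back the generated program as a plain string, you can re-run it yourself to audit the answer. A hedged sketch follows; note that executing LLM-generated code is exactly the risk PAL carries, so in practice you would sandbox this. The variable name `answer` matches the convention visible in the trace above.

```python
code = result["intermediate_steps"]

namespace: dict = {}
# NOTE: exec'ing model-generated code is unsafe outside a sandbox; this is
# only to illustrate that the returned string is the exact program that ran.
exec(code, namespace)

print(namespace["answer"])  # 2 -- matches the chain's answer
```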
---

# Extraction

Source: https://python.langchain.com/docs/modules/chains/additional/extraction

The extraction chain uses the OpenAI functions parameter to specify a schema to extract entities from a document. This helps us make sure that the model outputs exactly the schema of entities and properties that we want, with their appropriate types.

The extraction chain is to be used when we want to extract several entities with their properties from the same passage (i.e. what people were mentioned in this passage?).

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import create_extraction_chain, create_extraction_chain_pydantic
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
```

## Extracting entities

To extract entities, we need to create a schema where we specify all the properties we want to find and the type we expect them to have. We can also specify which of these properties are required and which are optional.

```python
schema = {
    "properties": {
        "name": {"type": "string"},
        "height": {"type": "integer"},
        "hair_color": {"type": "string"},
    },
    "required": ["name", "height"],
}

inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde. """

chain = create_extraction_chain(schema, llm)
```

As we can see, we extracted the required entities and their properties in the required format (it even calculated Claudia's height before returning!):

```python
chain.run(inp)
```

```
[{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'},
 {'name': 'Claudia', 'height': 6, 'hair_color': 'brunette'}]
```

## Several entity types

Notice that we are using OpenAI functions under the hood, and thus the model can only call one function per request (with one, unique schema).

If we want to extract more than one entity type, we need to introduce a little hack: we define our properties with an included entity type. Below is an example where we also want to extract dog attributes from the passage. Notice the `person_` and `dog_` prefixes we use for each property; this tells the model which entity type the property refers to. In this way, the model can return properties from several entity types in one single call.

```python
schema = {
    "properties": {
        "person_name": {"type": "string"},
        "person_height": {"type": "integer"},
        "person_hair_color": {"type": "string"},
        "dog_name": {"type": "string"},
        "dog_breed": {"type": "string"},
    },
    "required": ["person_name", "person_height"],
}

inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Alex's dog Frosty is a labrador and likes to play hide and seek. """

chain = create_extraction_chain(schema, llm)
```

People attributes and dog attributes were correctly extracted from the text in the same call:

```python
chain.run(inp)
```

```
[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde', 'dog_name': 'Frosty', 'dog_breed': 'labrador'},
 {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}]
```

## Unrelated entities

What if our entities are unrelated? In that case, the model will return the unrelated entities in different dictionaries, allowing us to successfully extract several unrelated entity types in the same call.

Notice that we use `required: []`: we need to allow the model to return only person attributes or only dog attributes for a single entity (person or dog).

```python
schema = {
    "properties": {
        "person_name": {"type": "string"},
        "person_height": {"type": "integer"},
        "person_hair_color": {"type": "string"},
        "dog_name": {"type": "string"},
        "dog_breed": {"type": "string"},
    },
    "required": [],
}

inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by."""

chain = create_extraction_chain(schema, llm)
```

We have each entity in its own separate dictionary, with only the appropriate attributes being returned:

```python
chain.run(inp)
```

```
[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'},
 {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'},
 {'dog_name': 'Willow', 'dog_breed': 'German Shepherd'},
 {'dog_name': 'Milo', 'dog_breed': 'border collie'}]
```
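Since the prefix convention is just a naming hack, downstream code has to split the flat dictionaries back into per-entity records. Here is a small post-processing sketch over the output above; the helper `split_by_entity_type` is ours for illustration, not a LangChain API.

```python
def split_by_entity_type(records: list, prefixes: tuple) -> dict:
    """Group flat extraction dicts by their property prefix ('person_', 'dog_', ...)."""
    grouped = {prefix: [] for prefix in prefixes}
    for record in records:
        for prefix in prefixes:
            # Keep only this entity type's keys, stripping the prefix.
            entity = {k[len(prefix):]: v for k, v in record.items() if k.startswith(prefix)}
            if entity:
                grouped[prefix].append(entity)
    return grouped


records = [
    {"person_name": "Alex", "person_height": 5, "person_hair_color": "blonde"},
    {"dog_name": "Willow", "dog_breed": "German Shepherd"},
]
print(split_by_entity_type(records, ("person_", "dog_")))
# {'person_': [{'name': 'Alex', 'height': 5, 'hair_color': 'blonde'}],
#  'dog_': [{'name': 'Willow', 'breed': 'German Shepherd'}]}
```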
## Extra info for an entity

What if we don't know what we want? More specifically, say we know a few properties we want to extract for a given entity, but we also want to know if there's any extra information in the passage. Fortunately, we don't need to structure everything; we can have unstructured extraction as well. We can do this by introducing another hack, namely the `extra_info` attribute. Let's see an example.

```python
schema = {
    "properties": {
        "person_name": {"type": "string"},
        "person_height": {"type": "integer"},
        "person_hair_color": {"type": "string"},
        "dog_name": {"type": "string"},
        "dog_breed": {"type": "string"},
        "dog_extra_info": {"type": "string"},
    },
}

inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by."""

chain = create_extraction_chain(schema, llm)
```

It is nice to know more about Willow and Milo!

```python
chain.run(inp)
```

```
[{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde'},
 {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'},
 {'dog_name': 'Willow', 'dog_breed': 'German Shepherd', 'dog_extra_information': 'likes to play with other dogs'},
 {'dog_name': 'Milo', 'dog_breed': 'border collie', 'dog_extra_information': 'lives close by'}]
```

## Pydantic example

We can also use a Pydantic schema to choose the required properties and types, and we will set as `Optional` those that are not strictly required.

By using the `create_extraction_chain_pydantic` function, we can send a Pydantic schema as input, and the output will be an instantiated object that respects our desired schema. In this way, we can specify our schema in the same manner that we would a new class or function in Python: with purely Pythonic types.

```python
from typing import Optional, List
from pydantic import BaseModel, Field


class Properties(BaseModel):
    person_name: str
    person_height: int
    person_hair_color: str
    dog_breed: Optional[str]
    dog_name: Optional[str]


chain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)

inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Alex's dog Frosty is a labrador and likes to play hide and seek. """
```

As we can see, we extracted the required entities and their properties in the required format:

```python
chain.run(inp)
```

```
[Properties(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed='labrador', dog_name='Frosty'),
 Properties(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)]
```
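The Pydantic variant returns instantiated `Properties` objects, so turning them into JSON-serializable data is one method call away. A quick usage sketch, assuming Pydantic v1 (which 2023-era LangChain used):

```python
people = chain.run(inp)  # -> a list of Properties objects

# Pydantic v1: .dict() serializes each object; exclude_none drops unset optionals.
rows = [p.dict(exclude_none=True) for p in people]
print(rows)
# [{'person_name': 'Alex', 'person_height': 5, 'person_hair_color': 'blonde',
#   'dog_breed': 'labrador', 'dog_name': 'Frosty'},
#  {'person_name': 'Claudia', 'person_height': 6, 'person_hair_color': 'brunette'}]
```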
---

# Neptune Open Cypher QA Chain

Source: https://python.langchain.com/docs/modules/chains/additional/neptune_cypher_qa

This QA chain queries the Neptune graph database using openCypher and returns a human-readable response.

```python
from langchain.graphs.neptune_graph import NeptuneGraph

host = "<neptune-host>"
port = 80
use_https = False

graph = NeptuneGraph(host=host, port=port, use_https=use_https)
```

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.graph_qa.neptune_cypher import NeptuneOpenCypherQAChain

llm = ChatOpenAI(temperature=0, model="gpt-4")

chain = NeptuneOpenCypherQAChain.from_llm(llm=llm, graph=graph)

chain.run("how many outgoing routes does the Austin airport have?")
```
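The snippet above connects over plain HTTP on port 80, which suits a local proxy or test setup. As a hedged variant: Neptune cluster endpoints typically listen on port 8182 over HTTPS, so a production connection would presumably look more like this (the endpoint below is hypothetical):

```python
# Hedged sketch: typical Neptune defaults (8182 over HTTPS); substitute your own endpoint.
graph = NeptuneGraph(
    host="my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com",  # hypothetical
    port=8182,
    use_https=True,
)
```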
---

# Causal program-aided language (CPAL) chain

Source: https://python.langchain.com/docs/modules/chains/additional/cpal

The CPAL chain builds on the recent PAL to stop LLM hallucination. The problem with the PAL approach is that it hallucinates on math problems with a nested chain of dependence. The innovation here is that this new CPAL approach includes causal structure to fix hallucination.

The original PR's description contains a full overview.

Using the CPAL chain, the LLM translated this:

> "Tim buys the same number of pets as Cindy and Boris."
> "Cindy buys the same number of pets as Bill plus Bob."
> "Boris buys the same number of pets as Ben plus Beth."
> "Bill buys the same number of pets as Obama."
> "Bob buys the same number of pets as Obama."
> "Ben buys the same number of pets as Obama."
> "Beth buys the same number of pets as Obama."
> "If Obama buys one pet, how many pets total does everyone buy?"

into a causal diagram (rendered later in the notebook via `cpal_chain.draw`).

Outline of code examples demoed in this notebook:

1. CPAL's value against hallucination: CPAL vs PAL
   - 1.1 Complex narrative
   - 1.2 Unanswerable math word problem
2. CPAL's three types of causal diagrams (*The Book of Why*)
   - 2.1 Mediator
   - 2.2 Collider
   - 2.3 Confounder

```python
from IPython.display import SVG

from langchain.experimental.cpal.base import CPALChain
from langchain.chains import PALChain
from langchain import OpenAI

llm = OpenAI(temperature=0, max_tokens=512)
cpal_chain = CPALChain.from_univariate_prompt(llm=llm, verbose=True)
pal_chain = PALChain.from_math_prompt(llm=llm, verbose=True)
```

## CPAL's value against hallucination: CPAL vs PAL

Like PAL, CPAL intends to reduce large language model (LLM) hallucination. The CPAL chain differs from the PAL chain in a couple of ways:

- CPAL adds a causal structure (or DAG) to link entity actions (or math expressions).
- The CPAL math expressions model a chain of cause-and-effect relations that can be intervened upon, whereas the PAL chain's math expressions are projected math identities.

### 1.1 Complex narrative

Takeaway: PAL hallucinates; CPAL does not.

```python
question = (
    "Tim buys the same number of pets as Cindy and Boris."
    "Cindy buys the same number of pets as Bill plus Bob."
    "Boris buys the same number of pets as Ben plus Beth."
    "Bill buys the same number of pets as Obama."
    "Bob buys the same number of pets as Obama."
    "Ben buys the same number of pets as Obama."
    "Beth buys the same number of pets as Obama."
    "If Obama buys one pet, how many pets total does everyone buy?"
)

pal_chain.run(question)
```

```
> Entering new chain...
def solution():
    """Tim buys the same number of pets as Cindy and Boris.Cindy buys the same number of pets as Bill plus Bob.Boris buys the same number of pets as Ben plus Beth.Bill buys the same number of pets as Obama.Bob buys the same number of pets as Obama.Ben buys the same number of pets as Obama.Beth buys the same number of pets as Obama.If Obama buys one pet, how many pets total does everyone buy?"""
    obama_pets = 1
    tim_pets = obama_pets
    cindy_pets = obama_pets + obama_pets
    boris_pets = obama_pets + obama_pets
    total_pets = tim_pets + cindy_pets + boris_pets
    result = total_pets
    return result

> Finished chain.

'5'
```

```python
cpal_chain.run(question)
```

```
> Entering new chain...
story outcome data
    name                                   code  value      depends_on
0  obama                                   pass    1.0              []
1   bill               bill.value = obama.value    1.0         [obama]
2    bob                bob.value = obama.value    1.0         [obama]
3    ben                ben.value = obama.value    1.0         [obama]
4   beth               beth.value = obama.value    1.0         [obama]
5  cindy   cindy.value = bill.value + bob.value    2.0     [bill, bob]
6  boris  boris.value = ben.value + beth.value    2.0      [ben, beth]
7    tim  tim.value = cindy.value + boris.value    4.0  [cindy, boris]

query data
{
    "question": "how many pets total does everyone buy?",
    "expression": "SELECT SUM(value) FROM df",
    "llm_error_msg": ""
}

> Finished chain.

13.0
```

```python
# wait 20 secs to see display
cpal_chain.draw(path="web.svg")
SVG("web.svg")
```

(rendered causal diagram)
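The CPAL answer of 13 is easy to verify by hand: Obama buys 1 pet, each of Bill, Bob, Ben, and Beth also buys 1, Cindy and Boris each buy 2, and Tim buys 4, for a total of 1 + 4×1 + 2×2 + 4 = 13. A plain-Python restatement of the dependency table above (our check, not CPAL code) makes the same point:

```python
obama = 1
bill = bob = ben = beth = obama  # each buys the same as Obama
cindy = bill + bob               # Bill plus Bob
boris = ben + beth               # Ben plus Beth
tim = cindy + boris              # Cindy and Boris

total = obama + bill + bob + ben + beth + cindy + boris + tim
print(total)  # 13 -- matches CPAL; PAL's '5' silently dropped most of the buyers
```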
### 1.2 Unanswerable math

Takeaway: PAL hallucinates, whereas CPAL, rather than hallucinating, answers with "unanswerable, narrative question and plot are incoherent".

```python
question = (
    "Jan has three times the number of pets as Marcia."
    "Marcia has two more pets than Cindy."
    "If Cindy has ten pets, how many pets does Barak have?"
)

pal_chain.run(question)
```

```
> Entering new chain...
def solution():
    """Jan has three times the number of pets as Marcia.Marcia has two more pets than Cindy.If Cindy has ten pets, how many pets does Barak have?"""
    cindy_pets = 10
    marcia_pets = cindy_pets + 2
    jan_pets = marcia_pets * 3
    result = jan_pets
    return result

> Finished chain.

'36'
```

```python
try:
    cpal_chain.run(question)
except Exception as e_msg:
    print(e_msg)
```

```
> Entering new chain...
story outcome data
     name                            code  value depends_on
0   cindy                            pass   10.0         []
1  marcia  marcia.value = cindy.value + 2   12.0    [cindy]
2     jan    jan.value = marcia.value * 3   36.0   [marcia]

query data
{
    "question": "how many pets does barak have?",
    "expression": "SELECT name, value FROM df WHERE name = 'barak'",
    "llm_error_msg": ""
}

unanswerable, query and outcome are incoherent

outcome:
     name                            code  value depends_on
0   cindy                            pass   10.0         []
1  marcia  marcia.value = cindy.value + 2   12.0    [cindy]
2     jan    jan.value = marcia.value * 3   36.0   [marcia]

query:
{'question': 'how many pets does barak have?', 'expression': "SELECT name, value FROM df WHERE name = 'barak'", 'llm_error_msg': ''}
```

## Basic math

### Causal mediator

```python
question = (
    "Jan has three times the number of pets as Marcia. "
    "Marcia has two more pets than Cindy. "
    "If Cindy has four pets, how many total pets do the three have?"
)
```

PAL:

```python
pal_chain.run(question)
```

```
> Entering new chain...
def solution():
    """Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"""
    cindy_pets = 4
    marcia_pets = cindy_pets + 2
    jan_pets = marcia_pets * 3
    total_pets = cindy_pets + marcia_pets + jan_pets
    result = total_pets
    return result

> Finished chain.

'28'
```

CPAL:

```python
cpal_chain.run(question)
```

```
> Entering new chain...
story outcome data
     name                            code  value depends_on
0   cindy                            pass    4.0         []
1  marcia  marcia.value = cindy.value + 2    6.0    [cindy]
2     jan    jan.value = marcia.value * 3   18.0   [marcia]

query data
{
    "question": "how many total pets do the three have?",
    "expression": "SELECT SUM(value) FROM df",
    "llm_error_msg": ""
}

> Finished chain.

28.0
```

```python
# wait 20 secs to see display
cpal_chain.draw(path="web.svg")
SVG("web.svg")
```

(rendered causal diagram)

## Causal collider

```python
question = (
    "Jan has the number of pets as Marcia plus the number of pets as Cindy. "
    "Marcia has no pets. "
    "If Cindy has four pets, how many total pets do the three have?"
)

cpal_chain.run(question)
```

```
> Entering new chain...
story outcome data
     name                                    code  value       depends_on
0  marcia                                    pass    0.0               []
1   cindy                                    pass    4.0               []
2     jan  jan.value = marcia.value + cindy.value    4.0  [marcia, cindy]

query data
{
    "question": "how many total pets do the three have?",
    "expression": "SELECT SUM(value) FROM df",
    "llm_error_msg": ""
}

> Finished chain.

8.0
```

```python
# wait 20 secs to see display
cpal_chain.draw(path="web.svg")
SVG("web.svg")
```

(rendered causal diagram)

## Causal confounder

```python
question = (
    "Jan has the number of pets as Marcia plus the number of pets as Cindy. "
    "Marcia has two more pets than Cindy. "
    "If Cindy has four pets, how many total pets do the three have?"
)

cpal_chain.run(question)
```

```
> Entering new chain...
story outcome data
     name                                    code  value       depends_on
0   cindy                                    pass    4.0               []
1  marcia          marcia.value = cindy.value + 2    6.0          [cindy]
2     jan  jan.value = cindy.value + marcia.value   10.0  [cindy, marcia]

query data
{
    "question": "how many total pets do the three have?",
    "expression": "SELECT SUM(value) FROM df",
    "llm_error_msg": ""
}

> Finished chain.

20.0
```

```python
# wait 20 secs to see display
cpal_chain.draw(path="web.svg")
SVG("web.svg")
```

(rendered causal diagram)

```python
%autoreload 2
```
---

# Moderation

Source: https://python.langchain.com/docs/modules/chains/additional/moderation

This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. Moderation can be applied both to user input and to the output of a Language Model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful), you may often want to append a moderation chain to any LLMChain, in order to make sure any output the LLM generates is not harmful.

If the content passed into the moderation chain is harmful, there is no single best way to handle it; it probably depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it! We will cover all these ways in this walkthrough.

We'll show:

- How to run any piece of text through a moderation chain.
- How to append a Moderation chain to an LLMChain.

```python
from langchain.llms import OpenAI
from langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChain
from langchain.prompts import PromptTemplate
```

## How to use the moderation chain

Here's an example of using the moderation chain with default settings (it will return a string explaining that the content was flagged).

```python
moderation_chain = OpenAIModerationChain()

moderation_chain.run("This is okay")
```

```
'This is okay'
```

```python
moderation_chain.run("I will kill you")
```

```
"Text was found that violates OpenAI's content policy."
```

Here's an example of using the moderation chain to throw an error instead.

```python
moderation_chain_error = OpenAIModerationChain(error=True)

moderation_chain_error.run("This is okay")
```

```
'This is okay'
```

```python
moderation_chain_error.run("I will kill you")
```

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[7], line 1
----> 1 moderation_chain_error.run("I will kill you")

File ~/workplace/langchain/langchain/chains/base.py:138, in Chain.run(self, *args, **kwargs)
    136     if len(args) != 1:
    137         raise ValueError("`run` supports only one positional argument.")
--> 138     return self(args[0])[self.output_keys[0]]
    140 if kwargs and not args:
    141     return self(kwargs)[self.output_keys[0]]

File ~/workplace/langchain/langchain/chains/base.py:112, in Chain.__call__(self, inputs, return_only_outputs)
    108 if self.verbose:
    109     print(
    110         f"\n\n\033[1m> Entering new {self.__class__.__name__} chain...\033[0m"
    111     )
--> 112 outputs = self._call(inputs)
    113 if self.verbose:
    114     print(f"\n\033[1m> Finished {self.__class__.__name__} chain.\033[0m")

File ~/workplace/langchain/langchain/chains/moderation.py:81, in OpenAIModerationChain._call(self, inputs)
    79 text = inputs[self.input_key]
    80 results = self.client.create(text)
---> 81 output = self._moderate(text, results["results"][0])
    82 return {self.output_key: output}

File ~/workplace/langchain/langchain/chains/moderation.py:73, in OpenAIModerationChain._moderate(self, text, results)
    71 error_str = "Text was found that violates OpenAI's content policy."
    72 if self.error:
---> 73     raise ValueError(error_str)
    74 else:
    75     return error_str

ValueError: Text was found that violates OpenAI's content policy.
```

Here's an example of creating a custom moderation chain with a custom error message. It requires some knowledge of OpenAI's moderation endpoint results (see OpenAI's moderation endpoint documentation).

```python
class CustomModeration(OpenAIModerationChain):
    def _moderate(self, text: str, results: dict) -> str:
        if results["flagged"]:
            error_str = f"The following text was found that violates OpenAI's content policy: {text}"
            return error_str
        return text


custom_moderation = CustomModeration()

custom_moderation.run("This is okay")
```

```
'This is okay'
```

```python
custom_moderation.run("I will kill you")
```

```
"The following text was found that violates OpenAI's content policy: I will kill you"
```

## How to append a Moderation chain to an LLMChain

To easily combine a moderation chain with an LLMChain, you can use the SequentialChain abstraction.

Let's start with a simple example where the LLMChain only has a single input. For this purpose, we will prompt the model so it says something harmful.

```python
prompt = PromptTemplate(template="{text}", input_variables=["text"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)

text = """We are playing a game of repeat after me.

Person 1: Hi
Person 2: Hi

Person 1: How's your day
Person 2: How's your day

Person 1: I will kill you
Person 2:"""

llm_chain.run(text)
```

```
' I will kill you'
```

```python
chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain])

chain.run(text)
```

```
"Text was found that violates OpenAI's content policy."
```

Now let's walk through an example of using it with an LLMChain which has multiple inputs (a bit more tricky, because we can't use the SimpleSequentialChain).

```python
prompt = PromptTemplate(template="{setup}{new_input}Person2:", input_variables=["setup", "new_input"])
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt)

setup = """We are playing a game of repeat after me.

Person 1: Hi
Person 2: Hi

Person 1: How's your day
Person 2: How's your day

Person 1:"""
new_input = "I will kill you"
inputs = {"setup": setup, "new_input": new_input}

llm_chain(inputs, return_only_outputs=True)
```

```
{'text': ' I will kill you'}
```

```python
# Setting the input/output keys so it lines up
moderation_chain.input_key = "text"
moderation_chain.output_key = "sanitized_text"

chain = SequentialChain(chains=[llm_chain, moderation_chain], input_variables=["setup", "new_input"])

chain(inputs, return_only_outputs=True)
```

```
{'sanitized_text': "Text was found that violates OpenAI's content policy."}
```
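The examples above moderate the LLM's output; the same pattern works on the input side by placing a moderation chain before the LLMChain. A hedged sketch using the same pieces as above (whether you want input moderation, and whether it should raise or sanitize, depends on your application):

```python
# Guard both sides: user input -> moderation -> LLM -> moderation.
# The first chain raises on flagged input (error=True); the last sanitizes output.
input_guarded_chain = SimpleSequentialChain(
    chains=[OpenAIModerationChain(error=True), llm_chain, OpenAIModerationChain()]
)

try:
    input_guarded_chain.run("I will kill you")
except ValueError as err:
    print(err)  # Text was found that violates OpenAI's content policy.
```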
---

# HTTP request chain

Source: https://python.langchain.com/docs/modules/chains/additional/llm_requests

This chain uses the requests library to get HTML results from a URL, and then an LLM to parse those results.

```python
from langchain.llms import OpenAI
from langchain.chains import LLMRequestsChain, LLMChain
from langchain.prompts import PromptTemplate

template = """Between >>> and <<< are the raw search result text from google.
Extract the answer to the question '{query}' or say "not found" if the information is not contained.
Use the format
Extracted:<answer or "not found">
>>> {requests_result} <<<
Extracted:"""

PROMPT = PromptTemplate(
    input_variables=["query", "requests_result"],
    template=template,
)

chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT))

question = "What are the Three (3) biggest countries, and their respective sizes?"
inputs = {
    "query": question,
    "url": "https://www.google.com/search?q=" + question.replace(" ", "+"),
}

chain(inputs)
```

```
{'query': 'What are the Three (3) biggest countries, and their respective sizes?',
 'url': 'https://www.google.com/search?q=What+are+the+Three+(3)+biggest+countries,+and+their+respective+sizes?',
 'output': ' Russia (17,098,242 km²), Canada (9,984,670 km²), United States (9,826,675 km²)'}
```
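The `question.replace(" ", "+")` trick only handles spaces; characters such as `?`, `(`, and `,` pass through raw, as the URL in the output above shows. A hedged alternative is to percent-encode the whole query with the standard library's `urllib.parse.quote_plus`:

```python
from urllib.parse import quote_plus

question = "What are the Three (3) biggest countries, and their respective sizes?"
url = "https://www.google.com/search?q=" + quote_plus(question)
print(url)
# https://www.google.com/search?q=What+are+the+Three+%283%29+biggest+countries%2C+and+their+respective+sizes%3F
```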
---

# Graph QA

Source: https://python.langchain.com/docs/modules/chains/additional/graph_qa

This notebook goes over how to do question answering over a graph data structure.

## Create the graph

In this section, we construct an example graph. At the moment, this works best for small pieces of text.

```python
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader

index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))

with open("../../state_of_the_union.txt") as f:
    all_text = f.read()
```

We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment.

```python
text = "\n".join(all_text.split("\n\n")[105:108])
text
```

```
'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. '
```

```python
graph = index_creator.from_text(text)
```

We can inspect the created graph:

```python
graph.get_triples()
```

```
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
 ('Intel', 'state-of-the-art factories', 'is building'),
 ('Intel', '10,000 new good-paying jobs', 'is creating'),
 ('Intel', 'Silicon Valley', 'is helping build'),
 ('Field of dreams', "America's future will be built", 'is the ground on which')]
```

## Querying the graph

We can now use the graph QA chain to ask questions of the graph:

```python
from langchain.chains import GraphQAChain

chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True)

chain.run("what is Intel going to build?")
```

```
> Entering new GraphQAChain chain...
Entities Extracted:
 Intel
Full Context:
Intel is going to build $20 billion semiconductor "mega site"
Intel is building state-of-the-art factories
Intel is creating 10,000 new good-paying jobs
Intel is helping build Silicon Valley

> Finished chain.

' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.'
```

## Save the graph

We can also save and load the graph:

```python
graph.write_to_gml("graph.gml")

from langchain.indexes.graph import NetworkxEntityGraph

loaded_graph = NetworkxEntityGraph.from_gml("graph.gml")
loaded_graph.get_triples()
```

```
[('Intel', '$20 billion semiconductor "mega site"', 'is going to build'),
 ('Intel', 'state-of-the-art factories', 'is building'),
 ('Intel', '10,000 new good-paying jobs', 'is creating'),
 ('Intel', 'Silicon Valley', 'is helping build'),
 ('Field of dreams', "America's future will be built", 'is the ground on which')]
```
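The triples returned by `get_triples()` are plain tuples, which read as (subject, object, relation) in the output above, so you can query them with ordinary Python before reaching for the QA chain. A small sketch over that output (our helper, not a LangChain API):

```python
triples = [
    ("Intel", '$20 billion semiconductor "mega site"', "is going to build"),
    ("Intel", "state-of-the-art factories", "is building"),
    ("Intel", "10,000 new good-paying jobs", "is creating"),
    ("Intel", "Silicon Valley", "is helping build"),
    ("Field of dreams", "America's future will be built", "is the ground on which"),
]


def facts_about(subject: str) -> list:
    """Render each matching triple as a 'subject relation object' sentence."""
    return [f"{s} {rel} {obj}" for s, obj, rel in triples if s == subject]


print("\n".join(facts_about("Intel")))
# Reproduces the "Full Context" lines the chain assembled in the trace above.
```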
---

# Tagging

Source: https://python.langchain.com/docs/modules/chains/additional/tagging

The tagging chain uses the OpenAI functions parameter to specify a schema to tag a document with. This helps us make sure that the model outputs exactly the tags that we want, with their appropriate types.

The tagging chain is to be used when we want to tag a passage with a specific attribute (i.e. what is the sentiment of this message?).

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import create_tagging_chain, create_tagging_chain_pydantic
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
```

## Simplest approach, only specifying type

We can start by specifying a few properties with their expected type in our schema.

```python
schema = {
    "properties": {
        "sentiment": {"type": "string"},
        "aggressiveness": {"type": "integer"},
        "language": {"type": "string"},
    }
}

chain = create_tagging_chain(schema, llm)
```

As we can see in the examples, it correctly interprets what we want, but the results vary, so that we get, for example, sentiments in different languages ('positive', 'enojado', etc.). We will see how to control these results in the next section.

```python
inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
chain.run(inp)
```

```
{'sentiment': 'positive', 'language': 'Spanish'}
```

```python
inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"
chain.run(inp)
```

```
{'sentiment': 'enojado', 'aggressiveness': 1, 'language': 'Spanish'}
```

```python
inp = "Weather is ok here, I can go outside without much more than a coat"
chain.run(inp)
```

```
{'sentiment': 'positive', 'aggressiveness': 0, 'language': 'English'}
```

## More control

By being smart about how we define our schema, we can have more control over the model's output. Specifically, we can define:

- possible values for each property
- a description, to make sure that the model understands the property
- required properties to be returned

Following is an example of how we can use enum, description, and required to control each of the previously mentioned aspects:

```python
schema = {
    "properties": {
        "sentiment": {"type": "string", "enum": ["happy", "neutral", "sad"]},
        "aggressiveness": {
            "type": "integer",
            "enum": [1, 2, 3, 4, 5],
            "description": "describes how aggressive the statement is, the higher the number the more aggressive",
        },
        "language": {
            "type": "string",
            "enum": ["spanish", "english", "french", "german", "italian"],
        },
    },
    "required": ["language", "sentiment", "aggressiveness"],
}

chain = create_tagging_chain(schema, llm)
```

Now the answers are much better!

```python
inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
chain.run(inp)
```

```
{'sentiment': 'happy', 'aggressiveness': 0, 'language': 'spanish'}
```

```python
inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"
chain.run(inp)
```

```
{'sentiment': 'sad', 'aggressiveness': 10, 'language': 'spanish'}
```

```python
inp = "Weather is ok here, I can go outside without much more than a coat"
chain.run(inp)
```

```
{'sentiment': 'neutral', 'aggressiveness': 0, 'language': 'english'}
```

## Specifying schema with Pydantic

We can also use a Pydantic schema to specify the required properties and types. We can also send other arguments, such as `enum` or `description`, as can be seen in the example below.

By using the `create_tagging_chain_pydantic` function, we can send a Pydantic schema as input, and the output will be an instantiated object that respects our desired schema. In this way, we can specify our schema in the same manner that we would a new class or function in Python: with purely Pythonic types.

```python
from enum import Enum
from pydantic import BaseModel, Field


class Tags(BaseModel):
    sentiment: str = Field(..., enum=["happy", "neutral", "sad"])
    aggressiveness: int = Field(
        ...,
        description="describes how aggressive the statement is, the higher the number the more aggressive",
        enum=[1, 2, 3, 4, 5],
    )
    language: str = Field(
        ..., enum=["spanish", "english", "french", "german", "italian"]
    )


chain = create_tagging_chain_pydantic(Tags, llm)

inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"
res = chain.run(inp)
res
```

```
Tags(sentiment='sad', aggressiveness=10, language='spanish')
```
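Note in the runs above that the model does not always respect the schema's `enum`: `aggressiveness` comes back as `0` and `10` even though the declared enum is `[1..5]`, since OpenAI functions treat the schema as guidance rather than a hard constraint. A hedged post-validation sketch that clamps values back into the declared range (our helper, not part of LangChain):

```python
def clamp_aggressiveness(tags: dict, lo: int = 1, hi: int = 5) -> dict:
    """Force a model-reported aggressiveness back into the declared enum range."""
    if "aggressiveness" in tags:
        tags["aggressiveness"] = max(lo, min(hi, int(tags["aggressiveness"])))
    return tags


print(clamp_aggressiveness({"sentiment": "sad", "aggressiveness": 10, "language": "spanish"}))
# {'sentiment': 'sad', 'aggressiveness': 5, 'language': 'spanish'}
```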
8b28790a9793-0 | Vector store-augmented text generation | 🦜�🔗 Langchain | https://python.langchain.com/docs/modules/chains/additional/vector_db_text_generation |
8b28790a9793-1 | Skip to main content🦜�🔗 LangChainDocsUse casesIntegrationsAPILangSmithJS/TS DocsCTRLKGet startedIntroductionInstallationQuickstartModulesModel I/​OData connectionChainsHow toFoundationalDocumentsPopularAdditionalAnalyze DocumentSelf-critique chain with constitutional AICausal program-aided language (CPAL) chainElasticsearch databaseExtractionFLAREArangoDB QA chainGraph DB QA chainHugeGraph QA ChainKuzuQAChainNebulaGraphQAChainGraph QAGraphSparqlQAChainHypothetical Document EmbeddingsBash chainSelf-checking chainMath chainHTTP request chainSummarization checker chainLLM Symbolic MathModerationDynamically selecting from multiple promptsDynamically selecting from multiple retrieversNeptune Open Cypher QA ChainRetrieval QA using OpenAI functionsOpenAPI chainOpenAPI calls with OpenAI functionsProgram-aided language model (PAL) chainQuestion-Answering CitationsDocument QATaggingVector store-augmented text generationMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesModulesChainsAdditionalVector store-augmented text generationOn this pageVector store-augmented text generationThis notebook walks through how to use LangChain for text generation over a vector index. This is useful if we want to generate text that is able to draw from a large body of custom text, for example, generating blog posts that have an understanding of previous blog posts written, or product tutorials that can refer to product documentation.Prepare Data​First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents.from langchain.llms import OpenAIfrom langchain.docstore.document import Documentimport requestsfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom | https://python.langchain.com/docs/modules/chains/additional/vector_db_text_generation |
8b28790a9793-2 | langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Chromafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.prompts import PromptTemplateimport pathlibimport subprocessimport tempfiledef get_github_docs(repo_owner, repo_name): with tempfile.TemporaryDirectory() as d: subprocess.check_call( f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .", cwd=d, shell=True, ) git_sha = ( subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d) .decode("utf-8") .strip() ) repo_path = pathlib.Path(d) markdown_files = list(repo_path.glob("*/*.md")) + list( repo_path.glob("*/*.mdx") ) for markdown_file in markdown_files: with open(markdown_file, "r") as f: relative_path = markdown_file.relative_to(repo_path) github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}" | https://python.langchain.com/docs/modules/chains/additional/vector_db_text_generation |
8b28790a9793-3 | yield Document(page_content=f.read(), metadata={"source": github_url})sources = get_github_docs("yirenlu92", "deno-manual-forked")source_chunks = []splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)for source in sources: for chunk in splitter.split_text(source.page_content): source_chunks.append(Document(page_content=chunk, metadata=source.metadata)) Cloning into '.'...Set Up Vector DB​Now that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval.search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())Set Up LLM Chain with Custom Prompt​Next, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.from langchain.chains import LLMChainprompt_template = """Use the context below to write a 400 word blog post about the topic below: Context: {context} Topic: {topic} Blog post:"""PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "topic"])llm = OpenAI(temperature=0)chain = LLMChain(llm=llm, prompt=PROMPT)Generate Text​Finally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain.def generate_blog_post(topic): docs = | https://python.langchain.com/docs/modules/chains/additional/vector_db_text_generation |
8b28790a9793-4 | as additional context in our simple LLM chain.def generate_blog_post(topic): docs = search_index.similarity_search(topic, k=4) inputs = [{"context": doc.page_content, "topic": topic} for doc in docs] print(chain.apply(inputs))generate_blog_post("environment variables") [{'text': '\n\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables.\n\nUsing `Deno.env` is simple. It has getter and setter methods, so you can easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file. This is a great'}, {'text': '\n\nEnvironment variables are a powerful tool for managing configuration settings in a program. They allow us to set values that can be used by the program, without having to hard-code them into the code. This makes it easier to change settings without having to modify the code.\n\nIn Deno, environment variables can be set in a few different ways. The most common way is to use the `VAR=value` syntax. This will set the environment variable `VAR` to the value `value`. This can be used to set any number of environment variables before | https://python.langchain.com/docs/modules/chains/additional/vector_db_text_generation |
8b28790a9793-5 | to the value `value`. This can be used to set any number of environment variables before running a command. For example, if we wanted to set the environment variable `VAR` to `hello` before running a Deno command, we could do so like this:\n\n```\nVAR=hello deno run main.ts\n```\n\nThis will set the environment variable `VAR` to `hello` before running the command. We can then access this variable in our code using the `Deno.env.get()` function. For example, if we ran the following command:\n\n```\nVAR=hello && deno eval "console.log(\'Deno: \' + Deno.env.get(\'VAR'}, {'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without having to hard-code it into their applications. In Deno, you can access environment variables using the `Deno.env.get()` function.\n\nFor example, if you wanted to access the `HOME` environment variable, you could do so like this:\n\n```js\n// env.js\nDeno.env.get("HOME");\n```\n\nWhen running this code, you\'ll need to grant the Deno process access to environment variables. This can be done by passing the `--allow-env` flag to the `deno run` command. You can also specify which environment variables you want to grant access to, like this:\n\n```shell\n# Allow access to only the HOME env var\ndeno run --allow-env=HOME env.js\n```\n\nIt\'s important to note that environment variables are case insensitive on Windows, so Deno also matches them case insensitively (on Windows only).\n\nAnother thing to be aware of when using environment variables is subprocess permissions. Subprocesses are powerful and can access system resources regardless of the permissions you granted to the Den'}, | https://python.langchain.com/docs/modules/chains/additional/vector_db_text_generation |
8b28790a9793-6 | {'text': '\n\nEnvironment variables are an important part of any programming language, and Deno is no exception. Deno is a secure JavaScript and TypeScript runtime built on the V8 JavaScript engine, and it recently added support for environment variables. This feature was added in Deno version 1.6.0, and it is now available for use in Deno applications.\n\nEnvironment variables are used to store information that can be used by programs. They are typically used to store configuration information, such as the location of a database or the name of a user. In Deno, environment variables are stored in the `Deno.env` object. This object is similar to the `process.env` object in Node.js, and it allows you to access and set environment variables.\n\nThe `Deno.env` object is a read-only object, meaning that you cannot directly modify the environment variables. Instead, you must use the `Deno.env.set()` function to set environment variables. This function takes two arguments: the name of the environment variable and the value to set it to. For example, if you wanted to set the `FOO` environment variable to `bar`, you would use the following code:\n\n```'}] | https://python.langchain.com/docs/modules/chains/additional/vector_db_text_generation
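The pieces above are scattered across several notebook cells, so here is the whole retrieve-then-generate loop as one minimal, self-contained sketch. It assumes valid OpenAI credentials; the toy `Document` list is a hypothetical stand-in for the docs cloned by `get_github_docs`:

```python
from langchain.chains import LLMChain
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# Hypothetical stand-in corpus; in the notebook this comes from get_github_docs().
docs = [
    Document(
        page_content="Deno.env.get() reads environment variables at runtime.",
        metadata={"source": "https://example.com/deno-docs"},
    )
]

# Split each document into retrievable chunks, carrying the source metadata along.
splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)
source_chunks = [
    Document(page_content=chunk, metadata=doc.metadata)
    for doc in docs
    for chunk in splitter.split_text(doc.page_content)
]

# Index the chunks so similar ones can be fetched by topic.
search_index = Chroma.from_documents(source_chunks, OpenAIEmbeddings())

prompt = PromptTemplate(
    template=(
        "Use the context below to write a 400 word blog post about the topic below:\n"
        "Context: {context}\nTopic: {topic}\nBlog post:"
    ),
    input_variables=["context", "topic"],
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

def generate_blog_post(topic: str) -> None:
    # k=4 mirrors the notebook: each retrieved chunk yields one draft post.
    docs = search_index.similarity_search(topic, k=4)
    print(chain.apply([{"context": d.page_content, "topic": topic} for d in docs]))

generate_blog_post("environment variables")
```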
1a5c50fee6a8-0 | Analyze Document | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/chains/additional/analyze_document
1a5c50fee6a8-1 | The AnalyzeDocumentChain can be used as an end-to-end chain. This chain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.with open("../../state_of_the_union.txt") as f:    state_of_the_union = f.read()Summarize​Let's take a look at it in action below, using it to summarize a long document.from langchain import OpenAIfrom langchain.chains.summarize import load_summarize_chainllm = OpenAI(temperature=0)summary_chain = load_summarize_chain(llm, chain_type="map_reduce")from | https://python.langchain.com/docs/modules/chains/additional/analyze_document
1a5c50fee6a8-2 | langchain.chains import AnalyzeDocumentChainsummarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)summarize_document_chain.run(state_of_the_union)    " In this speech, President Biden addresses the American people and the world, discussing the recent aggression of Russia's Vladimir Putin in Ukraine and the US response. He outlines economic sanctions and other measures taken to hold Putin accountable, and announces the US Department of Justice's task force to go after the crimes of Russian oligarchs. He also announces plans to fight inflation and lower costs for families, invest in American manufacturing, and provide military, economic, and humanitarian assistance to Ukraine. He calls for immigration reform, protecting the rights of women, and advancing the rights of LGBTQ+ Americans, and pays tribute to military families. He concludes with optimism for the future of America."Question Answering​Let's take a look at this using a question answering chain.from langchain.chains.question_answering import load_qa_chainqa_chain = load_qa_chain(llm, chain_type="map_reduce")qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?")    ' The president thanked Justice Breyer for his service.' | https://python.langchain.com/docs/modules/chains/additional/analyze_document
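Since the snippets above are split across chunks, a consolidated sketch may help. It assumes `state_of_the_union.txt` exists at that relative path and an OpenAI key is set:

```python
from langchain import OpenAI
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.summarize import load_summarize_chain

with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

llm = OpenAI(temperature=0)

# Summarization: AnalyzeDocumentChain splits the raw text internally, then the
# map_reduce combine-documents chain summarizes each piece and merges the results.
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
print(summarize_document_chain.run(state_of_the_union))

# Question answering over the same raw text, using the same split-then-combine flow.
qa_chain = load_qa_chain(llm, chain_type="map_reduce")
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)
print(
    qa_document_chain.run(
        input_document=state_of_the_union,
        question="what did the president say about justice breyer?",
    )
)
```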
df6f8cf2f7a8-0 | NebulaGraphQAChain | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/chains/additional/graph_nebula_qa
df6f8cf2f7a8-1 | This notebook shows how to use LLMs to provide a natural language interface to the NebulaGraph database.You will need to have a running NebulaGraph cluster, for which you can run a containerized cluster by running the following script:curl -fsSL nebula-up.siwei.io/install.sh | bashOther options are:Install as a Docker Desktop Extension. See hereNebulaGraph Cloud Service. See hereDeploy from package, source code, or via Kubernetes. See hereOnce the cluster is running, we can create the SPACE and SCHEMA for the database.# connect ngql jupyter extension to nebulagraph# | https://python.langchain.com/docs/modules/chains/additional/graph_nebula_qa
df6f8cf2f7a8-2 | create a new space%ngql CREATE SPACE IF NOT EXISTS langchain(partition_num=1, replica_factor=1, vid_type=fixed_string(128));# Wait for a few seconds for the space to be created.%ngql USE langchain;Create the schema. For the full dataset, refer here.CREATE TAG IF NOT EXISTS movie(name string);CREATE TAG IF NOT EXISTS person(name string, birthdate string);CREATE EDGE IF NOT EXISTS acted_in();CREATE TAG INDEX IF NOT EXISTS person_index ON person(name(128));CREATE TAG INDEX IF NOT EXISTS movie_index ON movie(name(128));Wait for schema creation to complete, then we can insert some data.INSERT VERTEX person(name, birthdate) VALUES "Al Pacino":("Al Pacino", "1940-04-25");INSERT VERTEX movie(name) VALUES "The Godfather II":("The Godfather II");INSERT VERTEX movie(name) VALUES "The Godfather Coda: The Death of Michael Corleone":("The Godfather Coda: The Death of Michael Corleone");INSERT EDGE acted_in() VALUES "Al Pacino"->"The Godfather II":();INSERT EDGE acted_in() VALUES "Al Pacino"->"The Godfather Coda: The Death of Michael Corleone":();    UsageError: Cell magic `%%ngql` not found.from langchain.chat_models import ChatOpenAIfrom langchain.chains import NebulaGraphQAChainfrom langchain.graphs import NebulaGraphgraph = NebulaGraph(    space="langchain",    username="root",    password="nebula",    address="127.0.0.1",    port=9669,    session_pool_size=30,)Refresh graph schema information​If | https://python.langchain.com/docs/modules/chains/additional/graph_nebula_qa
df6f8cf2f7a8-3 | the schema of the database changes, you can refresh the schema information needed to generate nGQL statements.# graph.refresh_schema()print(graph.get_schema)    Node properties: [{'tag': 'movie', 'properties': [('name', 'string')]}, {'tag': 'person', 'properties': [('name', 'string'), ('birthdate', 'string')]}]    Edge properties: [{'edge': 'acted_in', 'properties': []}]    Relationships: ['(:person)-[:acted_in]->(:movie)']    Querying the graph​We can now use the graph cypher QA chain to ask questions of the graphchain = NebulaGraphQAChain.from_llm(    ChatOpenAI(temperature=0), graph=graph, verbose=True)chain.run("Who played in The Godfather II?")    > Entering new NebulaGraphQAChain chain...    Generated nGQL:    MATCH (p:`person`)-[:acted_in]->(m:`movie`) WHERE m.`movie`.`name` == 'The Godfather II'    RETURN p.`person`.`name`    Full Context:    {'p.person.name': ['Al Pacino']}    > Finished chain.    'Al Pacino played in The Godfather II.' | https://python.langchain.com/docs/modules/chains/additional/graph_nebula_qa
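Putting the connection and the chain together, a minimal end-to-end sketch looks like the following. It assumes the containerized cluster above is running with the default root/nebula credentials and that the langchain space has already been created and populated:

```python
from langchain.chains import NebulaGraphQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import NebulaGraph

# Connection parameters match the nebula-up cluster started earlier.
graph = NebulaGraph(
    space="langchain",
    username="root",
    password="nebula",
    address="127.0.0.1",
    port=9669,
    session_pool_size=30,
)

# The chain reads the graph schema, writes an nGQL query for the question,
# executes it, and phrases the rows that come back as a natural-language answer.
chain = NebulaGraphQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
print(chain.run("Who played in The Godfather II?"))
```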
cac36929acf4-0 | FLARE | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/chains/additional/flare
cac36929acf4-1 | This notebook is an implementation of Forward-Looking Active REtrieval augmented generation (FLARE).Please see the original repo here.The basic idea is:Start answering a questionIf you start generating tokens the model is uncertain about, look up relevant documentsUse those documents to continue generatingRepeat until finishedThere is a lot of cool detail in how the lookup of relevant documents is done. | https://python.langchain.com/docs/modules/chains/additional/flare
cac36929acf4-2 | Basically, the tokens that the model is uncertain about are highlighted, and then an LLM is called to generate a question that would lead to that answer. For example, if the generated text is Joe Biden went to Harvard, and the token the model was uncertain about was Harvard, then a good generated question would be where did Joe Biden go to college. This generated question is then used in a retrieval step to fetch relevant documents.In order to set up this chain, we will need three things:An LLM to generate the answerAn LLM to generate hypothetical questions to use in retrievalA retriever to use to look up relevant documentsThe LLM that we use to generate the answer needs to return logprobs so we can identify uncertain tokens. For that reason, we HIGHLY recommend that you use the OpenAI wrapper (NB: not the ChatOpenAI wrapper, as that does not return logprobs).The LLM we use to generate hypothetical questions to use in retrieval can be anything. In this notebook we will use ChatOpenAI because it is fast and cheap.The retriever can be anything. In this notebook we will use the SERPER search engine, because it is cheap.Other important parameters to understand:max_generation_len: The maximum number of tokens to generate before stopping to check if any are uncertainmin_prob: Any tokens generated with probability below this will be considered uncertainImports​import osos.environ["SERPER_API_KEY"] = ""os.environ["OPENAI_API_KEY"] = ""import reimport numpy as npfrom langchain.schema import BaseRetrieverfrom langchain.callbacks.manager import (    AsyncCallbackManagerForRetrieverRun,    CallbackManagerForRetrieverRun,)from langchain.utilities import GoogleSerperAPIWrapperfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.chat_models import ChatOpenAIfrom | https://python.langchain.com/docs/modules/chains/additional/flare
cac36929acf4-3 | langchain.llms import OpenAIfrom langchain.schema import Documentfrom typing import Any, ListRetriever​class SerperSearchRetriever(BaseRetriever):    search: GoogleSerperAPIWrapper = None    def _get_relevant_documents(        self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **kwargs: Any    ) -> List[Document]:        return [Document(page_content=self.search.run(query))]    async def _aget_relevant_documents(        self,        query: str,        *,        run_manager: AsyncCallbackManagerForRetrieverRun,        **kwargs: Any,    ) -> List[Document]:        raise NotImplementedError()retriever = SerperSearchRetriever(search=GoogleSerperAPIWrapper())FLARE Chain​# We set this so we can see what exactly is going onimport langchainlangchain.verbose = Truefrom langchain.chains import FlareChainflare = FlareChain.from_llm(    ChatOpenAI(temperature=0),    retriever=retriever,    max_generation_len=164,    min_prob=0.3,)query = "explain in great detail the difference between the langchain framework and baby agi"flare.run(query)    > Entering new FlareChain chain...    Current Response:     Prompt after formatting:    Respond to the user message using any relevant context. If context is provided, | https://python.langchain.com/docs/modules/chains/additional/flare
cac36929acf4-4 | you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> RESPONSE: > Entering new QuestionGeneratorChain chain... Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " decentralized platform for natural language processing" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-5 | a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " uses a blockchain" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-6 | tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " distributed ledger to" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-7 | Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " process data, allowing for secure and transparent data sharing." is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " set of tools" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-8 | the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " help developers create" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-9 | is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " create an AI system" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi >>> EXISTING PARTIAL RESPONSE: The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications. Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing. In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for The question to which the answer is the term/entity/phrase " NLP applications" is: | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-10 | to which the answer is the term/entity/phrase " NLP applications" is: > Finished chain. Generated Questions: ['What is the Langchain Framework?', 'What technology does the Langchain Framework use to store and process data for secure and transparent data sharing?', 'What technology does the Langchain Framework use to store and process data?', 'What does the Langchain Framework use a blockchain-based distributed ledger for?', 'What does the Langchain Framework provide in addition to a decentralized platform for natural language processing applications?', 'What set of tools and services does the Langchain Framework provide?', 'What is the purpose of Baby AGI?', 'What type of applications is the Langchain Framework designed for?'] > Entering new _OpenAIResponseChain chain... Prompt after formatting: Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: LangChain: Software. LangChain is a software development framework designed to simplify the creation of applications using large language models. LangChain Initial release date: October 2022. LangChain Programming languages: Python and JavaScript. LangChain Developer(s): Harrison Chase. LangChain License: MIT License. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... Type: Software framework. At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. LangChain is a powerful tool that can be used to work with Large Language Models (LLMs). LLMs are very general in nature, which means that while they can ... LangChain is an intuitive framework created to | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-11 | very general in nature, which means that while they can ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. LangChain is a software development framework designed to simplify the creation of applications using large language models (LLMs). Written in: Python and JavaScript. Initial release: October 2022. LangChain - The A.I-native developer toolkit We started LangChain with the intent to build a modular and flexible framework for developing A.I- ... LangChain explained in 3 minutes - LangChain is a ... Duration: 3:03. Posted: Apr 13, 2023. LangChain is a framework built to help you build LLM-powered applications more easily by providing you with the following:. LangChain is a framework that enables quick and easy development of applications that make use of Large Language Models, for example, GPT-3. LangChain is a powerful open-source framework for developing applications powered by language models. It connects to the AI models you want to ... LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Missing: secure | Must include:secure. Blockchain is the best way to secure the data of the shared community. Utilizing the capabilities of the blockchain nobody can read or interfere ... This modern technology consists of a chain of blocks that allows to securely store all committed transactions using shared and distributed ... A Blockchain network is used in the healthcare system to preserve and exchange patient data through hospitals, diagnostic laboratories, pharmacy firms, and ... In this article, I will walk you through the process of using the LangChain.js library with Google Cloud Functions, helping you leverage the ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: transparent | Must include:transparent. This technology keeps a distributed | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-12 | or Hugging Face. Missing: transparent | Must include:transparent. This technology keeps a distributed ledger on each blockchain node, making it more secure and transparent. The blockchain network can operate smart ... blockchain technology can offer a highly secured health data ledger to ... framework can be employed to store encrypted healthcare data in a ... In a simplified way, Blockchain is a data structure that stores transactions in an ordered way and linked to the previous block, serving as a ... Blockchain technology is a decentralized, distributed ledger that stores the record of ownership of digital assets. Missing: Langchain | Must include:Langchain. LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. This documentation covers the steps to integrate Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered ... The ability to connect to any model, ingest any custom database, and build upon a framework that can take action provides numerous use cases for ... With LangChain, developers can use a framework that abstracts the core building blocks of LLM applications. LangChain empowers developers to ... Build a question-answering tool based on financial data with LangChain & Deep Lake's unified & streamable data store. Browse applications built on LangChain technology. Explore PoC and MVP applications created by our community and discover innovative use cases for LangChain ... LangChain is a great framework that can be used for developing applications powered by LLMs. When you intend to enhance your application ... In this blog, we'll introduce you to LangChain and Ray Serve and how to use them to build a search engine using LLM embeddings and a vector ... The LinkChain Framework simplifies embedding creation and storage using Pinecone and Chroma, with code that loads files, splits | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-13 | documents, and creates embedding ... Missing: technology | Must include:technology. Blockchain is one type of a distributed ledger. Distributed ledgers use independent computers (referred to as nodes) to record, share and ... Missing: Langchain | Must include:Langchain. Blockchain is used in distributed storage software where huge data is broken down into chunks. This is available in encrypted data across a ... People sometimes use the terms 'Blockchain' and 'Distributed Ledger' interchangeably. This post aims to analyze the features of each. A distributed ledger ... Missing: Framework | Must include:Framework. Think of a “distributed ledger” that uses cryptography to allow each participant in the transaction to add to the ledger in a secure way without ... In this paper, we provide an overview of the history of trade settlement and discuss this nascent technology that may now transform traditional ... Missing: Langchain | Must include:Langchain. LangChain is a blockchain-based language education platform that aims to revolutionize the way people learn languages. Missing: Framework | Must include:Framework. It uses the distributed ledger technology framework and Smart contract engine for building scalable Business Blockchain applications. The fabric ... It looks at the assets the use case is handling, the different parties conducting transactions, and the smart contract, distributed ... Are you curious to know how Blockchain and Distributed ... Duration: 44:31. Posted: May 4, 2021. A blockchain is a distributed and immutable ledger to transfer ownership, record transactions, track assets, and ensure transparency, security, trust and value ... Missing: Langchain | Must include:Langchain. LangChain is an intuitive framework created to assist in developing applications driven by a language model, | https://python.langchain.com/docs/modules/chains/additional/flare
cac36929acf4-14 | such as OpenAI or Hugging Face. Missing: decentralized | Must include:decentralized. LangChain, created by Harrison Chase, is a Python library that provides out-of-the-box support to build NLP applications using LLMs. Missing: decentralized | Must include:decentralized. LangChain provides a standard interface for chains, enabling developers to create sequences of calls that go beyond a single LLM call. Chains ... Missing: decentralized platform natural. LangChain is a powerful framework that simplifies the process of building advanced language model applications. Missing: platform | Must include:platform. Are your language models ignoring previous instructions ... Duration: 32:23. Posted: Feb 21, 2023. LangChain is a framework that enables quick and easy development of applications ... Prompting is the new way of programming NLP models. Missing: decentralized platform. It then uses natural language processing and machine learning algorithms to search ... Summarization is handled via cohere, QnA is handled via langchain, ... LangChain is a framework for developing applications powered by language models. ... There are several main modules that LangChain provides support for. Missing: decentralized platform. In the healthcare-chain system, blockchain provides an appreciated secure ... The entire process of adding new and previous block data is performed based on ... ChatGPT is a large language model developed by OpenAI, ... tool for a wide range of applications, including natural language processing, ... LangChain is a powerful tool that can be used to work with Large Language ... If an API key has been provided, create an OpenAI language model instance At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. A tutorial of the six core modules of the LangChain Python package covering models, prompts, chains, agents, indexes, and memory with | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-15 | OpenAI ... LangChain's collection of tools refers to a set of tools provided by the LangChain framework for developing applications powered by language models. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... LangChain is an open-source library that provides developers with the tools to build applications powered by large language models (LLMs). LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Plan-and-Execute Agents · Feature Stores and LLMs · Structured Tools · Auto-Evaluator Opportunities · Callbacks Improvements · Unleashing the power ... Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. · LLM: The language model ... LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. This system is exploring and demonstrating to us the potential of large language models, such as GPT and how it can autonomously perform tasks. Apr 17, 2023 At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.    >>> USER INPUT: | https://python.langchain.com/docs/modules/chains/additional/flare
cac36929acf4-16 | explain in great detail the difference between the langchain framework and baby agi >>> RESPONSE: > Finished chain. > Finished chain. ' LangChain is a framework for developing applications powered by language models. It provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. On the other hand, Baby AGI is an AI system that is exploring and demonstrating the potential of large language models, such as GPT, and how it can autonomously perform tasks. Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. 'llm = OpenAI()llm(query) '\n\nThe Langchain framework and Baby AGI are both artificial intelligence (AI) frameworks that are used to create intelligent agents. The Langchain framework is a supervised learning system that is based on the concept of “language chains�. It uses a set of rules to map natural language inputs to specific outputs. It is a general-purpose AI framework and can be used to build applications such as natural language processing (NLP), chatbots, and more.\n\nBaby AGI, on the other hand, is an unsupervised learning system that uses neural networks and reinforcement learning to learn from its environment. It is used to create intelligent agents that can adapt to changing environments. It is a more advanced AI system and can be used to build more complex applications such as game playing, robotic vision, and more.\n\nThe main difference between the two is that the Langchain framework uses supervised learning while Baby AGI uses unsupervised learning. The Langchain framework is a general-purpose AI framework that can be used for various applications, while Baby AGI is a more advanced AI system that can be used to create more complex applications.'flare.run("how | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-17 | is a more advanced AI system that can be used to create more complex applications.'flare.run("how are the origin stories of langchain and bitcoin similar or different?") > Entering new FlareChain chain... Current Response: Prompt after formatting: Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> RESPONSE: > Entering new QuestionGeneratorChain chain... Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> EXISTING PARTIAL RESPONSE: Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. FINISHED The question to which the answer is the term/entity/phrase " very different origin" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: how are the origin stories of langchain and bitcoin similar | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-18 | >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> EXISTING PARTIAL RESPONSE: Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. FINISHED The question to which the answer is the term/entity/phrase " 2020 by a" is: Prompt after formatting: Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase: >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different? >>> EXISTING PARTIAL RESPONSE: Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications. FINISHED The question to which the answer is the term/entity/phrase " developers as a platform for creating and managing decentralized language learning applications." is: > Finished chain. Generated Questions: ['How would you describe the origin stories of Langchain and Bitcoin in terms of their similarities or differences?', 'When was Langchain created and by whom?', 'What was the purpose of creating Langchain?'] > | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-19 | the purpose of creating Langchain?'] > Entering new _OpenAIResponseChain chain... Prompt after formatting: Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED. >>> CONTEXT: Bitcoin and Ethereum have many similarities but different long-term visions and limitations. Ethereum changed from proof of work to proof of ... Bitcoin will be around for many years and examining its white paper origins is a great exercise in understanding why. Satoshi Nakamoto's blueprint describes ... Bitcoin is a new currency that was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no middle men – meaning, no ... Missing: Langchain | Must include:Langchain. By comparison, Bitcoin transaction speeds are tremendously lower. ... learn about its history and its role in the emergence of the Bitcoin ... LangChain is a powerful framework that simplifies the process of ... tasks like document retrieval, clustering, and similarity comparisons. Key terms: Bitcoin System, Blockchain Technology, ... Furthermore, the research paper will discuss and compare the five payment. Blockchain first appeared in Nakamoto's Bitcoin white paper that describes a new decentralized cryptocurrency [1]. Bitcoin takes the blockchain technology ... Missing: stories | Must include:stories. A score of 0 means there were not enough data for this term. Google trends was accessed on 5 November 2018 with searches for bitcoin, euro, gold ... Contracts, transactions, and records of them provide critical structure in our economic system, but they haven't kept up with the world's digital ... Missing: Langchain | Must include:Langchain. Of course, traders try to make a profit on their portfolio in this way.The difference between investing and trading is the regularity with which ... After all these | https://python.langchain.com/docs/modules/chains/additional/flare |
cac36929acf4-20 | giant leaps forward in the LLM space, OpenAI released ChatGPT — thrusting LLMs into the spotlight. LangChain appeared around the same time. Its creator, Harrison Chase, made the first commit in late October 2022. Leaving a short couple of months of development before getting caught in the LLM wave.    >>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?    >>> RESPONSE:    > Finished chain.    > Finished chain.    ' The origin stories of LangChain and Bitcoin are quite different. Bitcoin was created in 2009 by an unknown person using the alias Satoshi Nakamoto. LangChain was created in late October 2022 by Harrison Chase. Bitcoin is a decentralized cryptocurrency, while LangChain is a framework built around LLMs. ' | https://python.langchain.com/docs/modules/chains/additional/flare
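To recap the moving parts, here is the FLARE setup from this notebook as one self-contained sketch. It assumes SERPER_API_KEY and OPENAI_API_KEY are set in the environment; the retriever is the same Serper wrapper defined earlier:

```python
from typing import Any, List

import langchain
from langchain.callbacks.manager import (
    AsyncCallbackManagerForRetrieverRun,
    CallbackManagerForRetrieverRun,
)
from langchain.chains import FlareChain
from langchain.chat_models import ChatOpenAI
from langchain.schema import BaseRetriever, Document
from langchain.utilities import GoogleSerperAPIWrapper

langchain.verbose = True  # print the intermediate prompts, as in the trace above


class SerperSearchRetriever(BaseRetriever):
    """Wrap Google Serper search results as a single retrieved Document."""

    search: GoogleSerperAPIWrapper = None

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **kwargs: Any
    ) -> List[Document]:
        return [Document(page_content=self.search.run(query))]

    async def _aget_relevant_documents(
        self,
        query: str,
        *,
        run_manager: AsyncCallbackManagerForRetrieverRun,
        **kwargs: Any,
    ) -> List[Document]:
        raise NotImplementedError()


flare = FlareChain.from_llm(
    ChatOpenAI(temperature=0),  # generates the hypothetical retrieval questions
    retriever=SerperSearchRetriever(search=GoogleSerperAPIWrapper()),
    max_generation_len=164,     # tokens generated before each uncertainty check
    min_prob=0.3,               # tokens below this probability trigger retrieval
)
print(flare.run("how are the origin stories of langchain and bitcoin similar or different?"))
```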
089ad6b802c4-0 | Dynamically selecting from multiple retrievers | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/chains/additional/multi_retrieval_qa_router
089ad6b802c4-1 | This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically, we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.from langchain.chains.router import MultiRetrievalQAChainfrom langchain.llms import OpenAIfrom langchain.embeddings import OpenAIEmbeddingsfrom langchain.document_loaders import TextLoaderfrom langchain.vectorstores import FAISSsou_docs | https://python.langchain.com/docs/modules/chains/additional/multi_retrieval_qa_router
089ad6b802c4-2 | = TextLoader('../../state_of_the_union.txt').load_and_split()sou_retriever = FAISS.from_documents(sou_docs, OpenAIEmbeddings()).as_retriever()pg_docs = TextLoader('../../paul_graham_essay.txt').load_and_split()pg_retriever = FAISS.from_documents(pg_docs, OpenAIEmbeddings()).as_retriever()personal_texts = [    "I love apple pie",    "My favorite color is fuchsia",    "My dream is to become a professional dancer",    "I broke my arm when I was 12",    "My parents are from Peru",]personal_retriever = FAISS.from_texts(personal_texts, OpenAIEmbeddings()).as_retriever()retriever_infos = [    {        "name": "state of the union",        "description": "Good for answering questions about the 2023 State of the Union address",        "retriever": sou_retriever    },    {        "name": "pg essay",        "description": "Good for answering questions about Paul Graham's essay on his career",        "retriever": pg_retriever    },    {        "name": "personal",        "description": "Good for answering questions about me",        "retriever": personal_retriever    }]chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos, verbose=True)print(chain.run("What | https://python.langchain.com/docs/modules/chains/additional/multi_retrieval_qa_router
089ad6b802c4-3 | did the president say about the economy?"))    > Entering new MultiRetrievalQAChain chain...    state of the union: {'query': 'What did the president say about the economy in the 2023 State of the Union address?'}    > Finished chain.    The president said that the economy was stronger than it had been a year prior, and that the American Rescue Plan helped create record job growth and fuel economic relief for millions of Americans. He also proposed a plan to fight inflation and lower costs for families, including cutting the cost of prescription drugs and energy, providing investments and tax credits for energy efficiency, and increasing access to child care and Pre-K.print(chain.run("What is something Paul Graham regrets about his work?"))    > Entering new MultiRetrievalQAChain chain...    pg essay: {'query': 'What is something Paul Graham regrets about his work?'}    > Finished chain.    Paul Graham regrets that he did not take a vacation after selling his company, instead of immediately starting to paint.print(chain.run("What is my background?"))    > Entering new MultiRetrievalQAChain chain...    personal: {'query': 'What is my background?'}    > Finished chain.    Your background is Peruvian.print(chain.run("What year was the Internet created in?"))    > Entering new MultiRetrievalQAChain chain...    None: {'query': 'What year was the Internet created in?'}    > Finished chain.    The Internet was created in 1969 | https://python.langchain.com/docs/modules/chains/additional/multi_retrieval_qa_router
089ad6b802c4-4 | through a project called ARPANET, which was funded by the United States Department of Defense. However, the World Wide Web, which is often confused with the Internet, was created in 1989 by British computer scientist Tim Berners-Lee. | https://python.langchain.com/docs/modules/chains/additional/multi_retrieval_qa_router
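The `None:` route in the last run shows the router's fallback: when no retriever description matches, the question goes to a default chain. A pared-down sketch of the wiring, assuming an OpenAI key is set (the single personal retriever here stands in for the three built above):

```python
from langchain.chains.router import MultiRetrievalQAChain
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

personal_texts = [
    "I love apple pie",
    "My favorite color is fuchsia",
    "My parents are from Peru",
]
personal_retriever = FAISS.from_texts(
    personal_texts, OpenAIEmbeddings()
).as_retriever()

# Each entry's description is what the router LLM uses to pick a destination.
retriever_infos = [
    {
        "name": "personal",
        "description": "Good for answering questions about me",
        "retriever": personal_retriever,
    }
]

chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos, verbose=True)
print(chain.run("What is my background?"))                  # routed to "personal"
print(chain.run("What year was the Internet created in?"))  # no match, default chain
```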
659f8da64dad-0 | HugeGraph QA Chain | 🦜🔗 Langchain | https://python.langchain.com/docs/modules/chains/additional/graph_hugegraph_qa
659f8da64dad-1 | This notebook shows how to use LLMs to provide a natural language interface to the HugeGraph database.You will need to have a running HugeGraph instance. | https://python.langchain.com/docs/modules/chains/additional/graph_hugegraph_qa
659f8da64dad-2 | You can run a local docker container by executing the following script:docker run    \    --name=graph    \    -itd    \    -p 8080:8080    \    hugegraph/hugegraphIf we want to connect to HugeGraph from the application, we need to install the Python SDK:pip3 install hugegraph-pythonIf you are using the docker container, you need to wait a couple of seconds for the database to start, and then we need to create the schema and write graph data for the database.from hugegraph.connection import PyHugeGraphclient = PyHugeGraph("localhost", "8080", user="admin", pwd="admin", graph="hugegraph")First, we create the schema for a simple movie database:"""schema"""schema = client.schema()schema.propertyKey("name").asText().ifNotExist().create()schema.propertyKey("birthDate").asText().ifNotExist().create()schema.vertexLabel("Person").properties(    "name", "birthDate").usePrimaryKeyId().primaryKeys("name").ifNotExist().create()schema.vertexLabel("Movie").properties("name").usePrimaryKeyId().primaryKeys(    "name").ifNotExist().create()schema.edgeLabel("ActedIn").sourceLabel("Person").targetLabel(    "Movie").ifNotExist().create()    'create EdgeLabel success, Detail: "b\'{"id":1,"name":"ActedIn","source_label":"Person","target_label":"Movie","frequency":"SINGLE","sort_keys":[],"nullable_keys":[],"index_labels":[],"properties":[],"status":"CREATED","ttl":0,"enable_label_index":true,"user_data":{"~create_time":"2023-07-04 10:48:47.908"}}\'"'Then | https://python.langchain.com/docs/modules/chains/additional/graph_hugegraph_qa
659f8da64dad-3 | we can insert some data."""graph"""g = client.graph()g.addVertex("Person", {"name": "Al Pacino", "birthDate": "1940-04-25"})g.addVertex("Person", {"name": "Robert De Niro", "birthDate": "1943-08-17"})g.addVertex("Movie", {"name": "The Godfather"})g.addVertex("Movie", {"name": "The Godfather Part II"})g.addVertex("Movie", {"name": "The Godfather Coda The Death of Michael Corleone"})g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather", {})g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather Part II", {})g.addEdge(    "ActedIn", "1:Al Pacino", "2:The Godfather Coda The Death of Michael Corleone", {})g.addEdge("ActedIn", "1:Robert De Niro", "2:The Godfather Part II", {})    1:Robert De Niro--ActedIn-->2:The Godfather Part IICreating HugeGraphQAChain​We can now create the HugeGraph and HugeGraphQAChain. To create the HugeGraph we simply need to pass the database object to the HugeGraph constructor.from langchain.chat_models import ChatOpenAIfrom langchain.chains import HugeGraphQAChainfrom langchain.graphs import HugeGraphgraph = HugeGraph(    username="admin",    password="admin",    address="localhost",    port=8080,    graph="hugegraph",)Refresh graph schema information​If the schema of the database changes, you can refresh the schema information needed to generate Gremlin | https://python.langchain.com/docs/modules/chains/additional/graph_hugegraph_qa
659f8da64dad-4 | Person, primary_keys: ['name'], properties: ['name', 'birthDate'], name: Movie, primary_keys: ['name'], properties: ['name']] Edge properties: [name: ActedIn, properties: []] Relationships: ['Person--ActedIn-->Movie']. Querying the graph: We can now use the Gremlin QA chain to ask questions of the graph: chain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True); chain.run("Who played in The Godfather?") > Entering new chain... Generated gremlin: g.V().has('Movie', 'name', 'The Godfather').in('ActedIn').valueMap(true) Full Context: [{'id': '1:Al Pacino', 'label': 'Person', 'name': ['Al Pacino'], 'birthDate': ['1940-04-25']}] > Finished chain. 'Al Pacino played in The Godfather.' | https://python.langchain.com/docs/modules/chains/additional/graph_hugegraph_qa |
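For readability, the flattened HugeGraph example spread across the chunks above reassembles into the following runnable sketch. The hosts, credentials, and API calls are exactly those shown on the page; only the consolidation into one script and the comments are editorial.

```python
# Reassembled from the HugeGraph QA Chain page above (assumes a local
# HugeGraph at localhost:8080 with the default admin/admin credentials).
from hugegraph.connection import PyHugeGraph
from langchain.chat_models import ChatOpenAI
from langchain.chains import HugeGraphQAChain
from langchain.graphs import HugeGraph

# Connect with the native client and define a minimal movie schema.
client = PyHugeGraph("localhost", "8080", user="admin", pwd="admin", graph="hugegraph")
schema = client.schema()
schema.propertyKey("name").asText().ifNotExist().create()
schema.propertyKey("birthDate").asText().ifNotExist().create()
schema.vertexLabel("Person").properties("name", "birthDate").usePrimaryKeyId().primaryKeys("name").ifNotExist().create()
schema.vertexLabel("Movie").properties("name").usePrimaryKeyId().primaryKeys("name").ifNotExist().create()
schema.edgeLabel("ActedIn").sourceLabel("Person").targetLabel("Movie").ifNotExist().create()

# Insert the sample actors, movies, and ActedIn edges.
g = client.graph()
g.addVertex("Person", {"name": "Al Pacino", "birthDate": "1940-04-25"})
g.addVertex("Person", {"name": "Robert De Niro", "birthDate": "1943-08-17"})
g.addVertex("Movie", {"name": "The Godfather"})
g.addVertex("Movie", {"name": "The Godfather Part II"})
g.addVertex("Movie", {"name": "The Godfather Coda The Death of Michael Corleone"})
g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather", {})
g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather Part II", {})
g.addEdge("ActedIn", "1:Al Pacino", "2:The Godfather Coda The Death of Michael Corleone", {})
g.addEdge("ActedIn", "1:Robert De Niro", "2:The Godfather Part II", {})

# Wrap the database for LangChain and build the QA chain; given a question,
# the chain generates a Gremlin query, runs it, and phrases the result.
graph = HugeGraph(username="admin", password="admin", address="localhost", port=8080, graph="hugegraph")
chain = HugeGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
print(chain.run("Who played in The Godfather?"))  # 'Al Pacino played in The Godfather.'
```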
a22db2b26326-0 | Dynamically selecting from multiple prompts | 🦜️🔗 Langchain | https://python.langchain.com/docs/modules/chains/additional/multi_prompt_router |
a22db2b26326-1 | Dynamically selecting from multiple prompts. This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically, we show how to use the MultiPromptChain to create a question-answering chain that selects the prompt most relevant to a given question and then answers the question using that prompt (the setup is reassembled as a runnable sketch after the transcript below): from langchain.chains.router import MultiPromptChain; from langchain.llms import OpenAI; physics_template = """You are a very smart physics professor. You are great at answering questions about physics in a concise and easy to understand manner. When you don't know the answer to a question you admit that you don't know. Here is a question: {input}"""; math_template = """You are a | https://python.langchain.com/docs/modules/chains/additional/multi_prompt_router |
a22db2b26326-2 | very good mathematician. You are great at answering math questions. You are so good because you are able to break down hard problems into their component parts, answer the component parts, and then put them together to answer the broader question. Here is a question: {input}"""; prompt_infos = [{"name": "physics", "description": "Good for answering questions about physics", "prompt_template": physics_template}, {"name": "math", "description": "Good for answering math questions", "prompt_template": math_template}]; chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos, verbose=True); print(chain.run("What is black body radiation?")) > Entering new MultiPromptChain chain... physics: {'input': 'What is black body radiation?'} > Finished chain. Black body radiation is the emission of electromagnetic radiation from a body due to its temperature. It is a type of thermal radiation that is emitted from the surface of all objects that are at a temperature above absolute zero. It is a spectrum of radiation that is influenced by the temperature of the body and is independent of the composition of the emitting material. print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3")) > Entering new MultiPromptChain chain... math: {'input': 'What is the first prime number greater | https://python.langchain.com/docs/modules/chains/additional/multi_prompt_router |
a22db2b26326-3 | than 40 such that one plus the prime number is divisible by 3'} > Finished chain. ? The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43. To solve this problem, we can break down the question into two parts: finding the first prime number greater than 40, and then finding a number that is divisible by 3. The first step is to find the first prime number greater than 40. A prime number is a number that is only divisible by 1 and itself. The next prime number after 40 is 41. The second step is to find a number that is divisible by 3. To do this, we can add 1 to 41, which gives us 42. Now, we can check if 42 is divisible by 3. 42 divided by 3 is 14, so 42 is divisible by 3. Therefore, the answer to the question is 43. print(chain.run("What is the name of the type of cloud that rins")) > Entering new MultiPromptChain chain... None: {'input': 'What is the name of the type of cloud that rains?'} > Finished chain. The type of cloud that typically produces rain is called a cumulonimbus cloud. This type of cloud is characterized by its large vertical extent and can produce thunderstorms and heavy precipitation. Is there anything else you'd like to know? | https://python.langchain.com/docs/modules/chains/additional/multi_prompt_router |
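Likewise, the router example above reassembles into this runnable sketch; all class names, prompt texts, and arguments come from the chunks above, with only the comments added.

```python
# Reassembled from the MultiPromptChain page above.
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question:
{input}"""

# Each destination prompt is registered with a name and a description; the
# router LLM reads the descriptions to decide which prompt should handle an
# incoming question (the transcript above shows it routing to "None", a
# default fallback, when neither description fits).
prompt_infos = [
    {"name": "physics", "description": "Good for answering questions about physics", "prompt_template": physics_template},
    {"name": "math", "description": "Good for answering math questions", "prompt_template": math_template},
]

chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos, verbose=True)
print(chain.run("What is black body radiation?"))  # routed to the physics prompt
```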