aim_callback = AimCallbackHandler(
repo=".",
experiment_name="scenario 1: OpenAI LLM",
)
callbacks = [StdOutCallbackHandler(), aim_callback]
llm = OpenAI(temperature=0, callbacks=callbacks)
The flush_tracker function is used to record LangChain assets on Aim. By default, the session is reset rather than being terminated outright.
Scenario 1
In the first scenario, we will use the OpenAI LLM.
# scenario 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
aim_callback.flush_tracker(
langchain_asset=llm,
experiment_name="scenario 2: Chain with multiple SubChains on multiple generations",
)
Scenario 2
The second scenario involves a chain with multiple sub-chains across multiple generations.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# scenario 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)
test_prompts = [
{"title": "documentary about good video games that push the boundary of game design"},
{"title": "the phenomenon behind the remarkable speed of cheetahs"},
{"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
aim_callback.flush_tracker(
langchain_asset=synopsis_chain, experiment_name="scenario 3: Agent with Tools"
)
Scenario 3
The third scenario involves an agent with tools.
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# scenario 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
callbacks=callbacks,
)
agent.run(
"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
aim_callback.flush_tracker(langchain_asset=agent, reset=False, finish=True)
> Entering new AgentExecutor chain...
I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: Leonardo DiCaprio seemed to prove a long-held theory about his love life right after splitting from girlfriend Camila Morrone just months ...
Thought: I need to find out Camila Morrone's age
Action: Search
Action Input: "Camila Morrone age"
Observation: 25 years
Thought: I need to calculate 25 raised to the 0.43 power
Action: Calculator
Action Input: 25^0.43
Observation: Answer: 3.991298452658078
Thought: I now know the final answer
Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.
> Finished chain.
Microsoft Word#
Microsoft Word is a word processor developed by Microsoft.
Installation and Setup#
No special setup is required.
Document Loader#
See a usage example.
from langchain.document_loaders import UnstructuredWordDocumentLoader
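For instance, a minimal sketch of loading a local file; the path example.docx is a placeholder, and the unstructured package must be installed.
from langchain.document_loaders import UnstructuredWordDocumentLoader
# "example.docx" is a placeholder path to a local Word document
loader = UnstructuredWordDocumentLoader("example.docx")
documents = loader.load()  # returns a list of Document objects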
ClearML#
ClearML is an ML/DL development and production suite that contains five main modules:
Experiment Manager - Automagical experiment tracking, environments and results
MLOps - Orchestration, Automation & Pipelines solution for ML/DL jobs (K8s / Cloud / bare-metal)
Data-Management - Fully differentiable data management & version control solution on top of object-storage (S3 / GS / Azure / NAS)
Model-Serving - cloud-ready Scalable model serving solution!
Deploy new model endpoints in under 5 minutes
Includes optimized GPU serving support backed by Nvidia-Triton
with out-of-the-box Model Monitoring
Fire Reports - Create and share rich Markdown documents supporting embeddable online content
To keep proper track of your langchain experiments and their results, you can enable the ClearML integration. It uses the ClearML Experiment Manager, which neatly tracks and organizes all your experiment runs.
Installation and Setup#
!pip install clearml
!pip install pandas
!pip install textstat
!pip install spacy
!python -m spacy download en_core_web_sm
Getting API Credentials#
We'll be using quite a few APIs in this notebook; here is a list and where to get them:
ClearML: https://app.clear.ml/settings/workspace-configuration
OpenAI: https://platform.openai.com/account/api-keys
SerpAPI (google search): https://serpapi.com/dashboard
import os
os.environ["CLEARML_API_ACCESS_KEY"] = ""
os.environ["CLEARML_API_SECRET_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""
os.environ["SERPAPI_API_KEY"] = "" | https://python.langchain.com/en/latest/integrations/clearml_tracking.html |
c30a20e916ab-1 | os.environ["SERPAPI_API_KEY"] = ""
Callbacks#
from langchain.callbacks import ClearMLCallbackHandler
from datetime import datetime
from langchain.callbacks import StdOutCallbackHandler
from langchain.llms import OpenAI
# Setup and use the ClearML Callback
clearml_callback = ClearMLCallbackHandler(
task_type="inference",
project_name="langchain_callback_demo",
task_name="llm",
tags=["test"],
# Change the following parameters based on the amount of detail you want tracked
visualize=True,
complexity_metrics=True,
stream_logs=True
)
callbacks = [StdOutCallbackHandler(), clearml_callback]
# Get the OpenAI model ready to go
llm = OpenAI(temperature=0, callbacks=callbacks)
The clearml callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/allegroai/clearml/issues with the tag `langchain`.
Scenario 1: Just an LLM#
First, let's just run a single LLM a few times and capture the resulting prompt-answer conversation in ClearML.
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
# After every generation run, use flush to make sure all the metrics
# prompts and other output are properly saved separately
clearml_callback.flush_tracker(langchain_asset=llm, name="simple_sequential")
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\n\nRoses are red,\nViolets are blue,\nSugar is sweet,\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}
{'action_records': action name step starts ends errors text_ctr chain_starts \
0 on_llm_start OpenAI 1 1 0 0 0 0
1 on_llm_start OpenAI 1 1 0 0 0 0
2 on_llm_start OpenAI 1 1 0 0 0 0
3 on_llm_start OpenAI 1 1 0 0 0 0
4 on_llm_start OpenAI 1 1 0 0 0 0
5 on_llm_start OpenAI 1 1 0 0 0 0
6 on_llm_end NaN 2 1 1 0 0 0
7 on_llm_end NaN 2 1 1 0 0 0
8 on_llm_end NaN 2 1 1 0 0 0
9 on_llm_end NaN 2 1 1 0 0 0
10 on_llm_end NaN 2 1 1 0 0 0
11 on_llm_end NaN 2 1 1 0 0 0
12 on_llm_start OpenAI 3 2 1 0 0 0
13 on_llm_start OpenAI 3 2 1 0 0 0
14 on_llm_start OpenAI 3 2 1 0 0 0
15 on_llm_start OpenAI 3 2 1 0 0 0
16 on_llm_start OpenAI 3 2 1 0 0 0
17 on_llm_start OpenAI 3 2 1 0 0 0
18 on_llm_end NaN 4 2 2 0 0 0
19 on_llm_end NaN 4 2 2 0 0 0
20 on_llm_end NaN 4 2 2 0 0 0
21 on_llm_end NaN 4 2 2 0 0 0
22 on_llm_end NaN 4 2 2 0 0 0
23 on_llm_end NaN 4 2 2 0 0 0
chain_ends llm_starts ... difficult_words linsear_write_formula \
0 0 1 ... NaN NaN
1 0 1 ... NaN NaN
2 0 1 ... NaN NaN
3 0 1 ... NaN NaN
4 0 1 ... NaN NaN
5 0 1 ... NaN NaN
6 0 1 ... 0.0 5.5
7 0 1 ... 2.0 6.5
8 0 1 ... 0.0 5.5
9 0 1 ... 2.0 6.5
10 0 1 ... 0.0 5.5
11 0 1 ... 2.0 6.5
12 0 2 ... NaN NaN
13 0 2 ... NaN NaN
14 0 2 ... NaN NaN
15 0 2 ... NaN NaN
16 0 2 ... NaN NaN
17 0 2 ... NaN NaN
18 0 2 ... 0.0 5.5
19 0 2 ... 2.0 6.5
20 0 2 ... 0.0 5.5
21 0 2 ... 2.0 6.5
22 0 2 ... 0.0 5.5
23 0 2 ... 2.0 6.5
gunning_fog text_standard fernandez_huerta szigriszt_pazos \
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
6 5.20 5th and 6th grade 133.58 131.54
7 8.28 6th and 7th grade 115.58 112.37
8 5.20 5th and 6th grade 133.58 131.54
9 8.28 6th and 7th grade 115.58 112.37
10 5.20 5th and 6th grade 133.58 131.54
11 8.28 6th and 7th grade 115.58 112.37
12 NaN NaN NaN NaN
13 NaN NaN NaN NaN
14 NaN NaN NaN NaN
15 NaN NaN NaN NaN
16 NaN NaN NaN NaN
17 NaN NaN NaN NaN
18 5.20 5th and 6th grade 133.58 131.54
19 8.28 6th and 7th grade 115.58 112.37
20 5.20 5th and 6th grade 133.58 131.54
21 8.28 6th and 7th grade 115.58 112.37
22 5.20 5th and 6th grade 133.58 131.54
23 8.28 6th and 7th grade 115.58 112.37
gutierrez_polini crawford gulpease_index osman
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
6 62.30 -0.2 79.8 116.91
7 54.83 1.4 72.1 100.17
8 62.30 -0.2 79.8 116.91
9 54.83 1.4 72.1 100.17
10 62.30 -0.2 79.8 116.91
11 54.83 1.4 72.1 100.17
12 NaN NaN NaN NaN
13 NaN NaN NaN NaN
14 NaN NaN NaN NaN
15 NaN NaN NaN NaN
16 NaN NaN NaN NaN
17 NaN NaN NaN NaN
18 62.30 -0.2 79.8 116.91
19 54.83 1.4 72.1 100.17
20 62.30 -0.2 79.8 116.91
21 54.83 1.4 72.1 100.17
22 62.30 -0.2 79.8 116.91
23 54.83 1.4 72.1 100.17
[24 rows x 39 columns], 'session_analysis': prompt_step prompts name output_step \
0 1 Tell me a joke OpenAI 2
1 1 Tell me a poem OpenAI 2
2 1 Tell me a joke OpenAI 2
3 1 Tell me a poem OpenAI 2
4 1 Tell me a joke OpenAI 2
5 1 Tell me a poem OpenAI 2
6 3 Tell me a joke OpenAI 4
7 3 Tell me a poem OpenAI 4
8 3 Tell me a joke OpenAI 4
9 3 Tell me a poem OpenAI 4
10 3 Tell me a joke OpenAI 4
11 3 Tell me a poem OpenAI 4
output \
0 \n\nQ: What did the fish say when it hit the w...
1 \n\nRoses are red,\nViolets are blue,\nSugar i...
2 \n\nQ: What did the fish say when it hit the w...
3 \n\nRoses are red,\nViolets are blue,\nSugar i...
4 \n\nQ: What did the fish say when it hit the w...
5 \n\nRoses are red,\nViolets are blue,\nSugar i...
6 \n\nQ: What did the fish say when it hit the w...
7 \n\nRoses are red,\nViolets are blue,\nSugar i...
8 \n\nQ: What did the fish say when it hit the w...
9 \n\nRoses are red,\nViolets are blue,\nSugar i...
10 \n\nQ: What did the fish say when it hit the w...
11 \n\nRoses are red,\nViolets are blue,\nSugar i...
token_usage_total_tokens token_usage_prompt_tokens \
0 162 24
1 162 24
2 162 24
3 162 24
4 162 24
5 162 24
6 162 24
7 162 24
8 162 24
9 162 24
10 162 24
11 162 24
token_usage_completion_tokens flesch_reading_ease flesch_kincaid_grade \
0 138 109.04 1.3
1 138 83.66 4.8
2 138 109.04 1.3
3 138 83.66 4.8
4 138 109.04 1.3
5 138 83.66 4.8
6 138 109.04 1.3
7 138 83.66 4.8
8 138 109.04 1.3
9 138 83.66 4.8
10 138 109.04 1.3
11 138 83.66 4.8
... difficult_words linsear_write_formula gunning_fog \
0 ... 0 5.5 5.20
1 ... 2 6.5 8.28
2 ... 0 5.5 5.20
3 ... 2 6.5 8.28
4 ... 0 5.5 5.20
5 ... 2 6.5 8.28
6 ... 0 5.5 5.20
7 ... 2 6.5 8.28
8 ... 0 5.5 5.20
9 ... 2 6.5 8.28
10 ... 0 5.5 5.20
11 ... 2 6.5 8.28
text_standard fernandez_huerta szigriszt_pazos gutierrez_polini \
0 5th and 6th grade 133.58 131.54 62.30
1 6th and 7th grade 115.58 112.37 54.83
2 5th and 6th grade 133.58 131.54 62.30
3 6th and 7th grade 115.58 112.37 54.83
4 5th and 6th grade 133.58 131.54 62.30
5 6th and 7th grade 115.58 112.37 54.83
6 5th and 6th grade 133.58 131.54 62.30
7 6th and 7th grade 115.58 112.37 54.83
8 5th and 6th grade 133.58 131.54 62.30
9 6th and 7th grade 115.58 112.37 54.83
10 5th and 6th grade 133.58 131.54 62.30
11 6th and 7th grade 115.58 112.37 54.83
crawford gulpease_index osman
0 -0.2 79.8 116.91
1 1.4 72.1 100.17
2 -0.2 79.8 116.91
3 1.4 72.1 100.17
4 -0.2 79.8 116.91
5 1.4 72.1 100.17
6 -0.2 79.8 116.91
7 1.4 72.1 100.17
8 -0.2 79.8 116.91
9 1.4 72.1 100.17
10 -0.2 79.8 116.91
11 1.4 72.1 100.17
[12 rows x 24 columns]}
2023-03-29 14:00:25,948 - clearml.Task - INFO - Completed model upload to https://files.clear.ml/langchain_callback_demo/llm.988bd727b0e94a29a3ac0ee526813545/models/simple_sequential
At this point you can already go to https://app.clear.ml and take a look at the resulting ClearML Task that was created.
Among other things, you should see that this notebook is saved along with any git information. The model JSON that contains the used parameters is saved as an artifact; there are also console logs, and under the plots section you'll find tables that represent the flow of the chain.
Finally, if you enabled visualizations, these are stored as HTML files under debug samples.
Scenario 2: Creating an agent with tools#
To show a more advanced workflow, let's create an agent with access to tools. The way ClearML tracks the results is no different, though; only the table will look slightly different, since other types of actions are taken compared to the earlier, simpler example.
You can now also see the use of the finish=True keyword, which will fully close the ClearML Task, instead of just resetting the parameters and prompts for a new conversation.
from langchain.agents import initialize_agent, load_tools
from langchain.agents import AgentType
# SCENARIO 2 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
callbacks=callbacks,
)
agent.run(
"Who is the wife of the person who sang summer of 69?"
)
clearml_callback.flush_tracker(langchain_asset=agent, name="Agent with Tools", finish=True)
> Entering new AgentExecutor chain...
{'action': 'on_chain_start', 'name': 'AgentExecutor', 'step': 1, 'starts': 1, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 0, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'input': 'Who is the wife of the person who sang summer of 69?'}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 2, 'starts': 2, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought:'}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 189, 'token_usage_completion_tokens': 34, 'token_usage_total_tokens': 223, 'model_name': 'text-davinci-003', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': ' I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 91.61, 'flesch_kincaid_grade': 3.8, 'smog_index': 0.0, 'coleman_liau_index': 3.41, 'automated_readability_index': 3.5, 'dale_chall_readability_score': 6.06, 'difficult_words': 2, 'linsear_write_formula': 5.75, 'gunning_fog': 5.4, 'text_standard': '3rd and 4th grade', 'fernandez_huerta': 121.07, 'szigriszt_pazos': 119.5, 'gutierrez_polini': 54.91, 'crawford': 0.9, 'gulpease_index': 72.7, 'osman': 92.16}
I need to find out who sang summer of 69 and then find out who their wife is.
Action: Search
Action Input: "Who sang summer of 69"{'action': 'on_agent_action', 'tool': 'Search', 'tool_input': 'Who sang summer of 69', 'log': ' I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"', 'step': 4, 'starts': 3, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 1, 'tool_ends': 0, 'agent_ends': 0}
{'action': 'on_tool_start', 'input_str': 'Who sang summer of 69', 'name': 'Search', 'description': 'A search engine. Useful for when you need to answer questions about current events. Input should be a search query.', 'step': 5, 'starts': 4, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 0, 'agent_ends': 0}
Observation: Bryan Adams - Summer Of 69 (Official Music Video).
Thought:{'action': 'on_tool_end', 'output': 'Bryan Adams - Summer Of 69 (Official Music Video).', 'step': 6, 'starts': 4, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0}
{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 7, 'starts': 5, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought: I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"\nObservation: Bryan Adams - Summer Of 69 (Official Music Video).\nThought:'}
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 242, 'token_usage_completion_tokens': 28, 'token_usage_total_tokens': 270, 'model_name': 'text-davinci-003', 'step': 8, 'starts': 5, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'text': ' I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 94.66, 'flesch_kincaid_grade': 2.7, 'smog_index': 0.0, 'coleman_liau_index': 4.73, 'automated_readability_index': 4.0, 'dale_chall_readability_score': 7.16, 'difficult_words': 2, 'linsear_write_formula': 4.25, 'gunning_fog': 4.2, 'text_standard': '4th and 5th grade', 'fernandez_huerta': 124.13, 'szigriszt_pazos': 119.2, 'gutierrez_polini': 52.26, 'crawford': 0.7, 'gulpease_index': 74.7, 'osman': 84.2}
I need to find out who Bryan Adams is married to.
Action: Search
Action Input: "Who is Bryan Adams married to"{'action': 'on_agent_action', 'tool': 'Search', 'tool_input': 'Who is Bryan Adams married to', 'log': ' I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"', 'step': 9, 'starts': 6, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 3, 'tool_ends': 1, 'agent_ends': 0}
{'action': 'on_tool_start', 'input_str': 'Who is Bryan Adams married to', 'name': 'Search', 'description': 'A search engine. Useful for when you need to answer questions about current events. Input should be a search query.', 'step': 10, 'starts': 7, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 1, 'agent_ends': 0}
Observation: Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...
Thought:{'action': 'on_tool_end', 'output': 'Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...', 'step': 11, 'starts': 7, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0}
c30a20e916ab-29 | {'action': 'on_llm_start', 'name': 'OpenAI', 'step': 12, 'starts': 8, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\n\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who is the wife of the person who sang summer of 69?\nThought: I need to find out who sang summer of 69 and then find out who their wife is.\nAction: Search\nAction Input: "Who sang summer of 69"\nObservation: Bryan Adams - Summer Of 69 (Official Music Video).\nThought: I need to find out who Bryan Adams is married to.\nAction: Search\nAction Input: "Who is Bryan Adams married to"\nObservation: Bryan Adams has never married. In the 1990s, he was in | https://python.langchain.com/en/latest/integrations/clearml_tracking.html |
c30a20e916ab-30 | Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...\nThought:'} | https://python.langchain.com/en/latest/integrations/clearml_tracking.html |
{'action': 'on_llm_end', 'token_usage_prompt_tokens': 314, 'token_usage_completion_tokens': 18, 'token_usage_total_tokens': 332, 'model_name': 'text-davinci-003', 'step': 13, 'starts': 8, 'ends': 5, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0, 'text': ' I now know the final answer.\nFinal Answer: Bryan Adams has never been married.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 81.29, 'flesch_kincaid_grade': 3.7, 'smog_index': 0.0, 'coleman_liau_index': 5.75, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 7.37, 'difficult_words': 1, 'linsear_write_formula': 2.5, 'gunning_fog': 2.8, 'text_standard': '3rd and 4th grade', 'fernandez_huerta': 115.7, 'szigriszt_pazos': 110.84, 'gutierrez_polini': 49.79, 'crawford': 0.7, 'gulpease_index': 85.4, 'osman': 83.14}
I now know the final answer.
Final Answer: Bryan Adams has never been married.
{'action': 'on_agent_finish', 'output': 'Bryan Adams has never been married.', 'log': ' I now know the final answer.\nFinal Answer: Bryan Adams has never been married.', 'step': 14, 'starts': 8, 'ends': 6, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 1}
> Finished chain.
{'action': 'on_chain_end', 'outputs': 'Bryan Adams has never been married.', 'step': 15, 'starts': 8, 'ends': 7, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 1, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 1}
{'action_records': action name step starts ends errors text_ctr \
0 on_llm_start OpenAI 1 1 0 0 0
1 on_llm_start OpenAI 1 1 0 0 0
2 on_llm_start OpenAI 1 1 0 0 0
3 on_llm_start OpenAI 1 1 0 0 0
4 on_llm_start OpenAI 1 1 0 0 0
.. ... ... ... ... ... ... ...
66 on_tool_end NaN 11 7 4 0 0
67 on_llm_start OpenAI 12 8 4 0 0
68 on_llm_end NaN 13 8 5 0 0
69 on_agent_finish NaN 14 8 6 0 0
70 on_chain_end NaN 15 8 7 0 0
chain_starts chain_ends llm_starts ... gulpease_index osman input \
0 0 0 1 ... NaN NaN NaN
1 0 0 1 ... NaN NaN NaN
2 0 0 1 ... NaN NaN NaN
3 0 0 1 ... NaN NaN NaN
4 0 0 1 ... NaN NaN NaN
.. ... ... ... ... ... ... ...
66 1 0 2 ... NaN NaN NaN
67 1 0 3 ... NaN NaN NaN
68 1 0 3 ... 85.4 83.14 NaN
69 1 0 3 ... NaN NaN NaN
70 1 1 3 ... NaN NaN NaN
tool tool_input log \
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
.. ... ... ...
66 NaN NaN NaN
67 NaN NaN NaN
68 NaN NaN NaN
69 NaN NaN I now know the final answer.\nFinal Answer: B...
70 NaN NaN NaN
input_str description output \
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
.. ... ... ...
66 NaN NaN Bryan Adams has never married. In the 1990s, h...
67 NaN NaN NaN
68 NaN NaN NaN
69 NaN NaN Bryan Adams has never been married.
70 NaN NaN NaN
outputs
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
.. ...
66 NaN
67 NaN
68 NaN
69 NaN
70 Bryan Adams has never been married.
[71 rows x 47 columns], 'session_analysis': prompt_step prompts name \
0 2 Answer the following questions as best you can... OpenAI
1 7 Answer the following questions as best you can... OpenAI
2 12 Answer the following questions as best you can... OpenAI
output_step output \
0 3 I need to find out who sang summer of 69 and ...
1 8 I need to find out who Bryan Adams is married...
2 13 I now know the final answer.\nFinal Answer: B...
token_usage_total_tokens token_usage_prompt_tokens \
0 223 189
1 270 242
2 332 314
token_usage_completion_tokens flesch_reading_ease flesch_kincaid_grade \
0 34 91.61 3.8
1 28 94.66 2.7
2 18 81.29 3.7
... difficult_words linsear_write_formula gunning_fog \
0 ... 2 5.75 5.4
1 ... 2 4.25 4.2
2 ... 1 2.50 2.8
text_standard fernandez_huerta szigriszt_pazos gutierrez_polini \
0 3rd and 4th grade 121.07 119.50 54.91
1 4th and 5th grade 124.13 119.20 52.26
2 3rd and 4th grade 115.70 110.84 49.79
crawford gulpease_index osman
0 0.9 72.7 92.16
1 0.7 74.7 84.20
2 0.7 85.4 83.14
[3 rows x 24 columns]}
Could not update last created model in Task 988bd727b0e94a29a3ac0ee526813545, Task status 'completed' cannot be updated
Tips and Next Steps#
Make sure you always use a unique name argument for the clearml_callback.flush_tracker function. If not, the model parameters used for a run will overwrite those of the previous run!
If you close the ClearML Callback using clearml_callback.flush_tracker(..., finish=True) the Callback cannot be used anymore. Make a new one if you want to keep logging.
Check out the rest of the open-source ClearML ecosystem; there is a data version manager, a remote execution agent, automated pipelines, and much more!
Reddit#
Reddit is an American social news aggregation, content rating, and discussion website.
Installation and Setup#
First, you need to install the praw Python package.
pip install praw
Make a Reddit Application and initialize the loader with your Reddit API credentials.
Document Loader#
See a usage example.
from langchain.document_loaders import RedditPostsLoader
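A minimal sketch of initializing and running the loader; all credential values are placeholders, and the parameters mirror praw's client settings.
from langchain.document_loaders import RedditPostsLoader
# All credential values below are placeholders for your own Reddit app settings
loader = RedditPostsLoader(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="extractor by u/your_username",
    categories=["new", "hot"],     # categories of posts to fetch
    mode="subreddit",              # search by subreddit (or "username")
    search_queries=["investing"],  # subreddits to load posts from
    number_posts=10,               # number of posts per category
)
documents = loader.load()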
OpenSearch#
This page covers how to use the OpenSearch ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific OpenSearch wrappers.
Installation and Setup#
Install the Python package with pip install opensearch-py
Wrappers#
VectorStore#
There exists a wrapper around OpenSearch vector databases, allowing you to use it as a vectorstore
for semantic search, using approximate vector search powered by the Lucene, nmslib, and Faiss engines,
or using Painless scripting and script scoring functions for brute-force vector search.
To import this vectorstore:
from langchain.vectorstores import OpenSearchVectorSearch
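As a minimal sketch, assuming a local OpenSearch instance on the default port and OpenAI embeddings:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch
# Assumes a local OpenSearch instance listening on the default port
docsearch = OpenSearchVectorSearch.from_texts(
    ["hello world", "goodbye world"],  # toy documents to index
    OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
)
docs = docsearch.similarity_search("hello")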
For a more detailed walkthrough of the OpenSearch wrapper, see this notebook
Pinecone#
This page covers how to use the Pinecone ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Pinecone wrappers.
Installation and Setup#
Install the Python SDK:
pip install pinecone-client
Vectorstore#
There exists a wrapper around Pinecone indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
from langchain.vectorstores import Pinecone
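A minimal sketch of indexing and searching; the API key, environment, and index name are placeholders for values from your Pinecone console.
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
pinecone.init(api_key="YOUR_API_KEY", environment="YOUR_ENVIRONMENT")  # placeholders
docsearch = Pinecone.from_texts(
    ["hello world", "goodbye world"],  # toy documents to index
    OpenAIEmbeddings(),
    index_name="langchain-demo",       # an existing Pinecone index
)
docs = docsearch.similarity_search("hello")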
For a more detailed walkthrough of the Pinecone vectorstore, see this notebook
PromptLayer#
PromptLayer is a devtool that allows you to track, manage, and share your GPT prompt engineering.
It acts as middleware between your code and OpenAI's Python library, recording all your API requests
and saving relevant metadata for easy exploration and search in the PromptLayer dashboard.
Installation and Setup#
Install the promptlayer python library
pip install promptlayer
Create a PromptLayer account
Create an API token and set it as an environment variable (PROMPTLAYER_API_KEY)
LLM#
from langchain.llms import PromptLayerOpenAI
Example#
To tag your requests, use the argument pl_tags when instantiating the LLM
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
To get the PromptLayer request id, use the argument return_pl_id when instantiating the LLM
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(return_pl_id=True)
This will add the PromptLayer request ID to the generation_info field of the Generation returned when using .generate or .agenerate.
For example:
llm_results = llm.generate(["hello world"])
for res in llm_results.generations:
print("pl request id: ", res[0].generation_info["pl_request_id"])
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. Read more about it here.
This LLM is identical to the OpenAI LLM, except that
all your requests will be logged to your PromptLayer account
you can add pl_tags when instantiating to tag your requests on PromptLayer
you can add return_pl_id when instantiating to return a PromptLayer request id to use while tracking requests.
Chat Model#
from langchain.chat_models import PromptLayerChatOpenAI
See a usage example.
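For instance, a minimal sketch; the chat wrapper accepts the same pl_tags argument as the LLM wrapper.
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage
# Requests made through this chat model are logged to PromptLayer
chat = PromptLayerChatOpenAI(pl_tags=["langchain"])
response = chat([HumanMessage(content="Tell me a joke")])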
Spreedly#
Spreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.
Installation and Setup#
See setup instructions.
Document Loader#
See a usage example.
from langchain.document_loaders import SpreedlyLoader
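A minimal sketch, assuming the loader takes an API access token plus the name of the Spreedly resource to pull:
import os
from langchain.document_loaders import SpreedlyLoader
# Assumed signature: an access token and a resource name such as "gateways_options"
loader = SpreedlyLoader(os.environ["SPREEDLY_ACCESS_TOKEN"], "gateways_options")
documents = loader.load()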
PipelineAI#
This page covers how to use the PipelineAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific PipelineAI wrappers.
Installation and Setup#
Install with pip install pipeline-ai
Get a Pipeline Cloud api key and set it as an environment variable (PIPELINE_API_KEY)
Wrappers#
LLM#
There exists a PipelineAI LLM wrapper, which you can access with
from langchain.llms import PipelineAI
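A minimal sketch; the pipeline_key value is a placeholder for a pipeline you have access to on Pipeline Cloud.
import os
from langchain.llms import PipelineAI
os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY"      # placeholder
llm = PipelineAI(pipeline_key="public/gpt-j:base")   # placeholder pipeline id
print(llm("Tell me a joke"))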
Ray Serve#
Ray Serve is a scalable model serving library for building online inference APIs. Serve is particularly well suited for system composition, enabling you to build a complex inference service consisting of multiple chains and business logic all in Python code.
Goal of this notebook#
This notebook shows a simple example of how to deploy an OpenAI chain into production. You can extend it to deploy your own self-hosted models, where you can easily define the amount of hardware resources (GPUs and CPUs) needed to run your model in production efficiently. Read more about available options, including autoscaling, in the Ray Serve documentation.
Setup Ray Serve#
Install ray with pip install ray[serve].
General Skeleton#
The general skeleton for deploying a service is the following:
# 0: Import ray serve and request from starlette
from ray import serve
from starlette.requests import Request
# 1: Define a Ray Serve deployment.
@serve.deployment
class LLMServe:
def __init__(self) -> None:
# All the initialization code goes here
pass
async def __call__(self, request: Request) -> str:
# You can parse the request here
# and return a response
return "Hello World"
# 2: Bind the model to deployment
deployment = LLMServe.bind()
# 3: Run the deployment
serve.api.run(deployment)
# Shutdown the deployment
serve.api.shutdown()
Example of deploying an OpenAI chain with custom prompts#
Get an OpenAI API key from here. By running the following code, you will be asked to provide your API key.
from langchain.llms import OpenAI
from langchain import PromptTemplate, LLMChain
from getpass import getpass
OPENAI_API_KEY = getpass()
@serve.deployment
class DeployLLM:
def __init__(self):
# We initialize the LLM, template and the chain here
llm = OpenAI(openai_api_key=OPENAI_API_KEY)
template = "Question: {question}\n\nAnswer: Let's think step by step."
prompt = PromptTemplate(template=template, input_variables=["question"])
self.chain = LLMChain(llm=llm, prompt=prompt)
def _run_chain(self, text: str):
return self.chain(text)
async def __call__(self, request: Request):
# 1. Parse the request
text = request.query_params["text"]
# 2. Run the chain
resp = self._run_chain(text)
# 3. Return the response
return resp["text"]
Now we can bind the deployment.
# Bind the model to deployment
deployment = DeployLLM.bind()
We can assign the port number and host when we want to run the deployment.
# Example port number
PORT_NUMBER = 8282
# Run the deployment
serve.api.run(deployment, port=PORT_NUMBER)
Now that the service is deployed on localhost:8282, we can send a POST request to get the results back.
import requests
text = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
response = requests.post(f'http://localhost:{PORT_NUMBER}/?text={text}')
print(response.content.decode())
Anyscale#
This page covers how to use the Anyscale ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Anyscale wrappers.
Installation and Setup#
Get an Anyscale Service URL, route, and API key and set them as environment variables (ANYSCALE_SERVICE_URL, ANYSCALE_SERVICE_ROUTE, ANYSCALE_SERVICE_TOKEN).
Please see the Anyscale docs for more details.
Wrappers#
LLM#
There exists an Anyscale LLM wrapper, which you can access with
from langchain.llms import Anyscale
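A minimal sketch, assuming the wrapper picks up the three environment variables described above:
import os
from langchain.llms import Anyscale
# Placeholder values; use your own Anyscale service details
os.environ["ANYSCALE_SERVICE_URL"] = "YOUR_SERVICE_URL"
os.environ["ANYSCALE_SERVICE_ROUTE"] = "YOUR_SERVICE_ROUTE"
os.environ["ANYSCALE_SERVICE_TOKEN"] = "YOUR_SERVICE_TOKEN"
llm = Anyscale()
print(llm("Tell me a joke"))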
AWS S3 Directory#
Amazon Simple Storage Service (Amazon S3) is an object storage service.
AWS S3 Directory
AWS S3 Buckets
Installation and Setup#
pip install boto3
Document Loader#
See a usage example for S3DirectoryLoader.
See a usage example for S3FileLoader.
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader
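A minimal sketch; bucket, prefix, and key names are placeholders, and boto3 reads AWS credentials from the usual environment or config locations.
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader
# "my-bucket", "reports/" and "reports/2023.txt" are placeholders
directory_loader = S3DirectoryLoader("my-bucket", prefix="reports/")
file_loader = S3FileLoader("my-bucket", "reports/2023.txt")
documents = directory_loader.load()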
Google BigQuery#
Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.
BigQuery is a part of the Google Cloud Platform.
Installation and Setup#
First, you need to install the google-cloud-bigquery Python package.
pip install google-cloud-bigquery
Document Loader#
See a usage example.
from langchain.document_loaders import BigQueryLoader
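A minimal sketch; the query is a placeholder, and credentials come from the usual Google Cloud application-default mechanisms.
from langchain.document_loaders import BigQueryLoader
# Placeholder query against a placeholder table
query = "SELECT title, body FROM `my_project.my_dataset.my_table` LIMIT 10"
loader = BigQueryLoader(query)
documents = loader.load()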
Vectara#
What is Vectara?
Vectara Overview:
Vectara is a developer-first API platform for building conversational search applications.
To use Vectara - first sign up and create an account. Then create a corpus and an API key for indexing and searching.
You can use Vectara’s indexing API to add documents into Vectara’s index
You can use Vectara’s Search API to query Vectara’s index (which also supports Hybrid search implicitly).
You can use Vectara’s integration with LangChain as a Vector store or using the Retriever abstraction.
Installation and Setup#
To use Vectara with LangChain no special installation steps are required. You just have to provide your customer_id, corpus ID, and an API key created within the Vectara console to enable indexing and searching.
VectorStore#
There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores import Vectara
To create an instance of the Vectara vectorstore:
vectara = Vectara(
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key
)
The customer_id, corpus_id and api_key are optional, and if they are not supplied will be read from the environment variables VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY, respectively.
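A minimal usage sketch via the generic vectorstore interface; the texts and query are toy examples.
# Index a couple of texts, then run a semantic search over them
vectara.add_texts(["to be or not to be", "that is the question"])
found_docs = vectara.similarity_search("What is the famous question?", k=2)
print(found_docs[0].page_content)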
For a more detailed walkthrough of the Vectara wrapper, see one of the two example notebooks:
Chat Over Documents with Vectara
Vectara Text Generation
GooseAI#
This page covers how to use the GooseAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.
Installation and Setup#
Install the Python SDK with pip install openai
Get your GooseAI API key here.
Set the environment variable (GOOSEAI_API_KEY).
import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
Wrappers#
LLM#
There exists a GooseAI LLM wrapper, which you can access with:
from langchain.llms import GooseAI
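A minimal sketch, assuming GOOSEAI_API_KEY is set as above:
from langchain.llms import GooseAI
llm = GooseAI()  # reads GOOSEAI_API_KEY from the environment
print(llm("Tell me a joke"))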
AZLyrics#
AZLyrics is a large, legal, ever-growing collection of lyrics.
Installation and Setup#
No special setup is required.
Document Loader#
See a usage example.
from langchain.document_loaders import AZLyricsLoader
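A minimal sketch; the URL is a placeholder for any AZLyrics song page.
from langchain.document_loaders import AZLyricsLoader
# Placeholder URL for an AZLyrics song page
loader = AZLyricsLoader("https://www.azlyrics.com/lyrics/mileycyrus/flowers.html")
documents = loader.load()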
Vespa#
Vespa is a fully featured search engine and vector database.
It supports vector search (ANN), lexical search, and search in structured data, all in the same query.
Installation and Setup#
pip install pyvespa
Retriever#
See a usage example.
from langchain.retrievers import VespaRetriever
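A minimal sketch based on the usage example; it assumes pyvespa is installed, and the query body and content field are placeholders tied to the target Vespa schema.
from vespa.application import Vespa
from langchain.retrievers import VespaRetriever
# Vespa's public documentation search service; swap in your own app URL
vespa_app = Vespa(url="https://doc-search.vespa.oath.cloud")
# Placeholder query body for that schema
vespa_query_body = {
    "yql": "select content from paragraph where userQuery()",
    "hits": 5,
}
retriever = VespaRetriever(vespa_app, vespa_query_body, content_field="content")
docs = retriever.get_relevant_documents("what is vespa?")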
BiliBili#
Bilibili is one of the most beloved long-form video sites in China.
Installation and Setup#
pip install bilibili-api-python
Document Loader#
See a usage example.
from langchain.document_loaders import BiliBiliLoader
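A minimal sketch; the video URL is a placeholder.
from langchain.document_loaders import BiliBiliLoader
# The loader takes a list of video URLs (placeholder below)
loader = BiliBiliLoader(["https://www.bilibili.com/video/BV1xt411o7Xu/"])
documents = loader.load()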
Notion DB#
Notion is a collaboration platform with modified Markdown support that integrates kanban
boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management,
and project and task management.
Installation and Setup#
All instructions are in the examples below.
Document Loader#
We have two different loaders: NotionDirectoryLoader and NotionDBLoader.
See a usage example for the NotionDirectoryLoader.
from langchain.document_loaders import NotionDirectoryLoader
See a usage example for the NotionDBLoader.
from langchain.document_loaders import NotionDBLoader
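For example, a minimal sketch (the integration token and database ID are illustrative placeholders):
loader = NotionDBLoader(integration_token="NOTION_INTEGRATION_TOKEN", database_id="NOTION_DATABASE_ID")
docs = loader.load()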
Google Serper
Contents
Setup
Wrappers
Utility
Output
Tool
Google Serper#
This page covers how to use the Serper Google Search API within LangChain. Serper is a low-cost Google Search API that can be used to add answer box, knowledge graph, and organic results data from Google Search.
It is broken into two parts: setup, and then references to the specific Google Serper wrapper.
Setup#
Go to serper.dev to sign up for a free account
Get the api key and set it as an environment variable (SERPER_API_KEY)
Wrappers#
Utility#
There exists a GoogleSerperAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities import GoogleSerperAPIWrapper
You can use it as part of a Self Ask chain:
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.llms.openai import OpenAI
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
import os
os.environ["SERPER_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
Tool(
name="Intermediate Answer",
func=search.run,
description="useful for when you need to ask with search"
)
]
self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
Output#
> Entering new AgentExecutor chain...
Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Current champions Carlos Alcaraz, 2022 men's singles champion.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain
> Finished chain.
'El Palmar, Spain'
For a more detailed walkthrough of this wrapper, see this notebook.
Tool#
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
from langchain.agents import load_tools
tools = load_tools(["google-serper"])
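For example, a minimal sketch of handing the tool to an agent (assuming SERPER_API_KEY and OPENAI_API_KEY are set):
from langchain.agents import initialize_agent, AgentType
from langchain.llms.openai import OpenAI
llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is the weather in Pomfret?")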
For more information on this, see this page
Replicate
Contents
Installation and Setup
Calling a model
Replicate#
This page covers how to run models on Replicate within LangChain.
Installation and Setup#
Create a Replicate account. Get your API key and set it as an environment variable (REPLICATE_API_TOKEN)
Install the Replicate python client with pip install replicate
Calling a model#
Find a model on the Replicate explore page, and then paste in the model name and version in this format: owner-name/model-name:version
For example, for this dolly model, click on the API tab. The model name/version would be: "replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"
Only the model param is required, but any other model parameters can also be passed in with the format input={model_param: value, ...}
For example, if we were running stable diffusion and wanted to change the image dimensions:
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
Note that only the first output of a model will be returned.
From here, we can initialize our model:
llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")
And run it:
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
We can call any Replicate model (not just LLMs) using this syntax. For example, we can call Stable Diffusion:
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions':'512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
Amazon Bedrock
Contents
Installation and Setup
LLM
Text Embedding Models
Amazon Bedrock#
Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model best suited for your use case.
Installation and Setup#
pip install boto3
LLM#
See a usage example.
from langchain import Bedrock
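As a minimal sketch (the AWS credentials profile name and model ID are illustrative):
llm = Bedrock(credentials_profile_name="bedrock-admin", model_id="amazon.titan-tg1-large")
print(llm("What is a foundation model?"))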
Text Embedding Models#
See a usage example.
from langchain.embeddings import BedrockEmbeddings
Docugami
Contents
Installation and Setup
Document Loader
Docugami#
Docugami converts business documents into a Document XML Knowledge Graph, generating forests
of XML semantic trees representing entire documents. This is a rich representation that includes the semantic and
structural characteristics of various chunks in the document as an XML tree.
Installation and Setup#
pip install lxml
Document Loader#
See a usage example.
from langchain.document_loaders import DocugamiLoader
Aleph Alpha
Contents
Installation and Setup
LLM
Text Embedding Models
Aleph Alpha#
Aleph Alpha was founded in 2019 with the mission to research and build the foundational technology for an era of strong AI. The team of international scientists, engineers, and innovators researches, develops, and deploys transformative AI like large language and multimodal models and runs the fastest European commercial AI cluster.
The Luminous series is a family of large language models.
Installation and Setup#
pip install aleph-alpha-client
You have to create a new token; please see the instructions.
from getpass import getpass
ALEPH_ALPHA_API_KEY = getpass()
LLM#
See a usage example.
from langchain.llms import AlephAlpha
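As a minimal sketch (the model name and parameters are illustrative; ALEPH_ALPHA_API_KEY is the token created above):
llm = AlephAlpha(model="luminous-base", maximum_tokens=64, aleph_alpha_api_key=ALEPH_ALPHA_API_KEY)
print(llm("Q: What is Aleph Alpha? A:"))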
Text Embedding Models#
See a usage example.
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding, AlephAlphaAsymmetricSemanticEmbedding
PGVector
Contents
Installation
Setup
Wrappers
VectorStore
Usage
PGVector#
This page covers how to use the Postgres PGVector ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific PGVector wrappers.
Installation#
Install the Python package with pip install pgvector
Setup#
The first step is to create a database with the pgvector extension installed.
Follow the steps at PGVector Installation Steps to install the database and the extension. The docker image is the easiest way to get started.
Wrappers#
VectorStore#
There exists a wrapper around Postgres vector databases, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
from langchain.vectorstores.pgvector import PGVector
Usage#
For a more detailed walkthrough of the PGVector Wrapper, see this notebook
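In the meantime, here is a minimal sketch (the connection string and collection name are illustrative; OpenAIEmbeddings assumes OPENAI_API_KEY is set):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector
# Point this at your pgvector-enabled Postgres instance
CONNECTION_STRING = "postgresql+psycopg2://postgres:postgres@localhost:5432/postgres"
db = PGVector.from_texts(
    texts=["pgvector adds vector similarity search to Postgres"],
    embedding=OpenAIEmbeddings(),
    collection_name="demo",
    connection_string=CONNECTION_STRING,
)
docs = db.similarity_search("vector search in Postgres", k=1)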
IMSDb
Contents
Installation and Setup
Document Loader
IMSDb#
IMSDb is the Internet Movie Script Database.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import IMSDbLoader
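For example, a minimal sketch (the script URL is an illustrative placeholder):
loader = IMSDbLoader("https://imsdb.com/scripts/BlacKkKlansman.html")
docs = loader.load()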
WhyLabs
Contents
Installation and Setup
Callbacks
WhyLabs#
WhyLabs is an observability platform designed to monitor data pipelines and ML applications for data quality regressions, data drift, and model performance degradation. Built on top of an open-source package called whylogs, the platform enables Data Scientists and Engineers to:
Set up in minutes: Begin generating statistical profiles of any dataset using whylogs, the lightweight open-source library.
Upload dataset profiles to the WhyLabs platform for centralized and customizable monitoring/alerting of dataset features as well as model inputs, outputs, and performance.
Integrate seamlessly: interoperable with any data pipeline, ML infrastructure, or framework. Generate real-time insights into your existing data flow. See more about our integrations here.
Scale to terabytes: handle your large-scale data, keeping compute requirements low. Integrate with either batch or streaming data pipelines.
Maintain data privacy: WhyLabs relies on statistical profiles created via whylogs, so your actual data never leaves your environment!
Enable observability to detect inputs and LLM issues faster, deliver continuous improvements, and avoid costly incidents.
Installation and Setup#
!pip install langkit -q
Make sure to set the required API keys and config required to send telemetry to WhyLabs:
WhyLabs API Key: https://whylabs.ai/whylabs-free-sign-up
Org and Dataset https://docs.whylabs.ai/docs/whylabs-onboarding
OpenAI: https://platform.openai.com/account/api-keys
Then you can set them like this:
import os
os.environ["OPENAI_API_KEY"] = ""
os.environ["WHYLABS_DEFAULT_ORG_ID"] = ""
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = ""
os.environ["WHYLABS_API_KEY"] = "" | https://python.langchain.com/en/latest/integrations/whylabs_profiling.html |
0bb40a889ac1-1 | os.environ["WHYLABS_API_KEY"] = ""
Note: the callback supports passing these variables in directly; when no auth is passed in, it will default to the environment. Passing auth in directly allows for writing profiles to multiple projects or organizations in WhyLabs.
Callbacks#
Here’s a single LLM integration with OpenAI, which will log various out of the box metrics and send telemetry to WhyLabs for monitoring.
from langchain.callbacks import WhyLabsCallbackHandler
from langchain.llms import OpenAI
whylabs = WhyLabsCallbackHandler.from_params()
llm = OpenAI(temperature=0, callbacks=[whylabs])
result = llm.generate(["Hello, World!"])
print(result)
generations=[[Generation(text="\n\nMy name is John and I'm excited to learn more about programming.", generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 20, 'prompt_tokens': 4, 'completion_tokens': 16}, 'model_name': 'text-davinci-003'}
result = llm.generate(
[
"Can you give me 3 SSNs so I can understand the format?",
"Can you give me 3 fake email addresses?",
"Can you give me 3 fake US mailing addresses?",
]
)
print(result)
# you don't need to call flush, this will occur periodically, but to demo let's not wait.
whylabs.flush()
generations=[[Generation(text='\n\n1. 123-45-6789\n2. 987-65-4321\n3. 456-78-9012', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\n1. [email protected]\n2. [email protected]\n3. [email protected]', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\n1. 123 Main Street, Anytown, USA 12345\n2. 456 Elm Street, Nowhere, USA 54321\n3. 789 Pine Avenue, Somewhere, USA 98765', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 137, 'prompt_tokens': 33, 'completion_tokens': 104}, 'model_name': 'text-davinci-003'}
whylabs.close()
Airbyte
Contents
Installation and Setup
Document Loader
Airbyte#
Airbyte is a data integration platform for ELT pipelines from APIs,
databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
Installation and Setup#
These instructions show how to load any source from Airbyte into a local JSON file that can be read in as a document.
Prerequisites:
Have docker desktop installed.
Steps:
Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git.
Switch into Airbyte directory - cd airbyte.
Start Airbyte - docker compose up.
In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that’s username airbyte and password password.
Setup any source you wish.
Set the destination as Local JSON, with a specified destination path - let's say /json_data. Set up a manual sync.
Run the connection.
To see what files are created, navigate to: file:///tmp/airbyte_local/.
Document Loader#
See a usage example.
from langchain.document_loaders import AirbyteJSONLoader
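For example, a minimal sketch (the file name follows Airbyte's _airbyte_raw_<stream>.jsonl convention and is illustrative):
loader = AirbyteJSONLoader("/tmp/airbyte_local/json_data/_airbyte_raw_your_stream.jsonl")
docs = loader.load()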
Petals
Contents
Installation and Setup
Wrappers
LLM
Petals#
This page covers how to use the Petals ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Petals wrappers.
Installation and Setup#
Install with pip install petals
Get a Hugging Face api key and set it as an environment variable (HUGGINGFACE_API_KEY)
Wrappers#
LLM#
There exists a Petals LLM wrapper, which you can access with:
from langchain.llms import Petals
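As a minimal sketch (the model name follows the public Petals swarm convention; HUGGINGFACE_API_KEY is assumed to be set as above):
llm = Petals(model_name="bigscience/bloom-petals")
print(llm("What is distributed inference?"))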
GPT4All
Contents
Installation and Setup
Usage
GPT4All
Model File
GPT4All#
This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.
Installation and Setup#
Install the Python package with pip install pyllamacpp
Download a GPT4All model and place it in your desired directory
Usage#
GPT4All#
To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model’s configuration.
from langchain.llms import GPT4All
# Instantiate the model. Callbacks support token-wise streaming
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Generate text
response = model("Once upon a time, ")
You can also customize the generation parameters, such as n_predict, temp, top_p, top_k, and others.
To stream the model’s predictions, add in a CallbackManager.
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
# There are many CallbackHandlers supported, such as
# from langchain.callbacks.streamlit import StreamlitCallbackHandler
callbacks = [StreamingStdOutCallbackHandler()]
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Generate text. Tokens are streamed through the callback manager.
model("Once upon a time, ", callbacks=callbacks)
Model File#
You can find links to model file downloads in the pyllamacpp repository.
For a more detailed walkthrough of this, see this notebook
DuckDB
Contents
Installation and Setup
Document Loader
DuckDB#
DuckDB is an in-process SQL OLAP database management system.
Installation and Setup#
First, you need to install duckdb python package.
pip install duckdb
Document Loader#
See a usage example.
from langchain.document_loaders import DuckDBLoader
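For example, a minimal sketch (the query and CSV file are illustrative):
loader = DuckDBLoader("SELECT * FROM read_csv_auto('example.csv')")
docs = loader.load()  # each result row becomes one document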
DeepInfra
Contents
Installation and Setup
Available Models
Wrappers
LLM
DeepInfra#
This page covers how to use the DeepInfra ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DeepInfra wrappers.
Installation and Setup#
Get your DeepInfra API key from deepinfra.com and set it as an environment variable (DEEPINFRA_API_TOKEN)
Available Models#
DeepInfra provides a range of Open Source LLMs ready for deployment.
You can list supported models here.
google/flan* models can be viewed here.
You can view a list of request and response parameters here
Wrappers#
LLM#
There exists a DeepInfra LLM wrapper, which you can access with:
from langchain.llms import DeepInfra
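As a minimal sketch (the model ID is illustrative; DEEPINFRA_API_TOKEN is assumed to be set as above):
llm = DeepInfra(model_id="google/flan-t5-xl")
print(llm("What is 2 + 2?"))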
Unstructured
Contents
Installation and Setup
Wrappers
Data Loaders
Unstructured#
The unstructured package from
Unstructured.IO extracts clean text from raw source documents like
PDFs and Word documents.
This page covers how to use the unstructured
ecosystem within LangChain.
Installation and Setup#
If you are using a loader that runs locally, use the following steps to get unstructured and
its dependencies running locally.
Install the Python SDK with pip install "unstructured[local-inference]"
Install the following system dependencies if they are not already available on your system.
Depending on what document types you’re parsing, you may not need all of these.
libmagic-dev (filetype detection)
poppler-utils (images and PDFs)
tesseract-ocr (images and PDFs)
libreoffice (MS Office docs)
pandoc (EPUBs)
If you want to get up and running with less set up, you can
simply run pip install unstructured and use UnstructuredAPIFileLoader or
UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API.
Note that currently (as of 1 May 2023) the Unstructured API is open, but it will soon require
an API key. The Unstructured documentation page will have
instructions on how to generate an API key once they’re available. Check out the instructions
here
if you’d like to self-host the Unstructured API or run it locally.
Wrappers#
Data Loaders#
The primary unstructured wrappers within langchain are data loaders. The following
shows how to use the most basic unstructured data loader. There are other file-specific
data loaders available in the langchain.document_loaders module.
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("state_of_the_union.txt")
loader.load()
If you instantiate the loader with UnstructuredFileLoader(mode="elements"), the loader
will track additional metadata like the page number and text type (i.e. title, narrative text)
when that information is available.
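For example, a minimal sketch of elements mode (the available metadata keys depend on the document type):
loader = UnstructuredFileLoader("state_of_the_union.txt", mode="elements")
docs = loader.load()
print(docs[0].metadata)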
Redis
Contents
Installation and Setup
Wrappers
Cache
Standard Cache
Semantic Cache
VectorStore
Retriever
Memory
Vector Store Retriever Memory
Chat Message History Memory
Redis#
This page covers how to use the Redis ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Redis wrappers.
Installation and Setup#
Install the Redis Python SDK with pip install redis
Wrappers#
Cache#
The Cache wrapper allows for Redis to be used as a remote, low-latency, in-memory cache for LLM prompts and responses.
Standard Cache#
The standard cache is the Redis bread & butter of use case in production for both open source and enterprise users globally.
To import this cache:
from langchain.cache import RedisCache
To use this cache with your LLMs:
import langchain
import redis
redis_client = redis.Redis.from_url(...)
langchain.llm_cache = RedisCache(redis_client)
Semantic Cache#
Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends Redis as both a cache and a vectorstore.
To import this cache:
from langchain.cache import RedisSemanticCache
To use this cache with your LLMs:
import langchain
import redis
# use any embedding provider...
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
redis_url = "redis://localhost:6379"
langchain.llm_cache = RedisSemanticCache(
embedding=FakeEmbeddings(),
redis_url=redis_url
)
VectorStore#
The vectorstore wrapper turns Redis into a low-latency vector database for semantic search or LLM content retrieval.
To import this vectorstore:
from langchain.vectorstores import Redis
For a more detailed walkthrough of the Redis vectorstore wrapper, see this notebook.
Retriever#
The Redis vector store retriever wrapper generalizes the vectorstore class to perform low-latency document retrieval. To create the retriever, simply call .as_retriever() on the base vectorstore class.
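As a minimal sketch (the index name is illustrative and assumes a local Redis instance with the RediSearch module; OpenAIEmbeddings assumes OPENAI_API_KEY is set):
from langchain.embeddings.openai import OpenAIEmbeddings
rds = Redis.from_texts(
    ["Redis doubles as a low-latency vector database"],
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="demo",
)
retriever = rds.as_retriever()
docs = retriever.get_relevant_documents("vector database")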
Memory#
Redis can be used to persist LLM conversations.
Vector Store Retriever Memory#
For a more detailed walkthrough of the VectorStoreRetrieverMemory wrapper, see this notebook.
Chat Message History Memory#
For a detailed example of Redis to cache conversation message history, see this notebook.
Momento
Contents
Installation and Setup
Cache
Memory
Chat Message History Memory
Momento#
Momento Cache is the world’s first truly serverless caching service. It provides instant elasticity, scale-to-zero
capability, and blazing-fast performance.
With Momento Cache, you grab the SDK, get an endpoint, add a few lines to your code, and you're off and running.
This page covers how to use the Momento ecosystem within LangChain.
Installation and Setup#
Sign up for a free account here and get an auth token
Install the Momento Python SDK with pip install momento
Cache#
The Cache wrapper allows for Momento to be used as a serverless, distributed, low-latency cache for LLM prompts and responses.
The standard cache is the go-to use case for Momento users in any environment.
Import the cache as follows:
from langchain.cache import MomentoCache
And set up like so:
from datetime import timedelta
from momento import CacheClient, Configurations, CredentialProvider
import langchain
# Instantiate the Momento client
cache_client = CacheClient(
Configurations.Laptop.v1(),
CredentialProvider.from_environment_variable("MOMENTO_AUTH_TOKEN"),
default_ttl=timedelta(days=1))
# Choose a Momento cache name of your choice
cache_name = "langchain"
# Instantiate the LLM cache
langchain.llm_cache = MomentoCache(cache_client, cache_name)
Memory#
Momento can be used as a distributed memory store for LLMs.
Chat Message History Memory#
See this notebook for a walkthrough of how to use Momento as a memory store for chat message history.
AtlasDB
Contents
Installation and Setup
Wrappers
VectorStore
AtlasDB#
This page covers how to use Nomic’s Atlas ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Atlas wrappers.
Installation and Setup#
Install the Python package with pip install nomic
Nomic is also included in LangChain's poetry extras: poetry install -E all
Wrappers#
VectorStore#
There exists a wrapper around the Atlas neural database, allowing you to use it as a vectorstore.
This vectorstore also gives you full access to the underlying AtlasProject object, which will allow you to use the full range of Atlas map interactions, such as bulk tagging and automatic topic modeling.
Please see the Atlas docs for more detailed information.
To import this vectorstore:
from langchain.vectorstores import AtlasDB
For a more detailed walkthrough of the AtlasDB wrapper, see this notebook
Annoy
Contents
Installation and Setup
Vectorstore
Annoy#
Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.
Installation and Setup#
pip install annoy
Vectorstore#
See a usage example.
from langchain.vectorstores import Annoy
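For example, a minimal sketch (the embedding model and texts are illustrative):
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings()  # any embedding function works here
db = Annoy.from_texts(["Annoy builds memory-mapped, read-only ANN indexes"], embeddings)
docs = db.similarity_search("approximate nearest neighbors", k=1)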
Git
Contents
Installation and Setup
Document Loader
Git#
Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.
Installation and Setup#
First, you need to install GitPython python package.
pip install GitPython
Document Loader#
See a usage example.
from langchain.document_loaders import GitLoader
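For example, a minimal sketch (the repository path and branch are illustrative):
loader = GitLoader(repo_path="./example_data/test_repo/", branch="main")
docs = loader.load()  # loads each file in the repo as a separate document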
Discord
Contents
Installation and Setup
Document Loader
Discord#
Discord is a VoIP and instant messaging social platform. Users have the ability to communicate
with voice calls, video calls, text messaging, media and files in private chats or as part of communities called
“servers”. A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.
Installation and Setup#
pip install pandas
Follow these steps to download your Discord data:
Go to your User Settings
Then go to Privacy and Safety
Head over to Request all of my Data and click on the Request Data button
It might take up to 30 days to receive your data. You'll receive an email at the address registered
with Discord; that email will have a download button that lets you download your personal Discord data.
Document Loader#
See a usage example.
from langchain.document_loaders import DiscordChatLoader
Argilla
Contents
Installation and Setup
Tracking
Argilla#
Argilla is an open-source data curation platform for LLMs.
Using Argilla, everyone can build robust language models through faster data curation
using both human and machine feedback. It provides support for each step in the MLOps cycle,
from data labeling to model monitoring.
Installation and Setup#
First, you’ll need to install the argilla Python package as follows:
pip install argilla --upgrade
If you already have an Argilla Server running, then you're good to go; if you don't,
refer to Argilla - 🚀 Quickstart to deploy Argilla either on Hugging Face Spaces, locally, or on a server.
Tracking#
See a usage example of ArgillaCallbackHandler.
from langchain.callbacks import ArgillaCallbackHandler
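As a minimal sketch (the dataset name, API URL, and API key are illustrative placeholders for your Argilla instance):
from langchain.llms import OpenAI
argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url="http://localhost:6900",
    api_key="argilla.apikey",
)
llm = OpenAI(temperature=0.9, callbacks=[argilla_callback])
llm("Tell me a joke")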
Blackboard
Contents
Installation and Setup
Document Loader
Blackboard#
Blackboard Learn (previously the Blackboard Learning Management System)
is a web-based virtual learning environment and learning management system developed by Blackboard Inc.
The software features course management, customizable open architecture, and scalable design that allows
integration with student information systems and authentication protocols. It may be installed on local servers,
hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services.
Its main purposes are stated to include the addition of online elements to courses traditionally delivered
face-to-face and development of completely online courses with few or no face-to-face meetings.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import BlackboardLoader
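As a minimal sketch (the course URL and bbrouter cookie are illustrative placeholders):
loader = BlackboardLoader(
    blackboard_course_url="https://blackboard.example.com/webapps/blackboard/content/listContent.jsp?course_id=_123456_1",
    bbrouter="expires:12345...",
    load_all_recursively=True,
)
docs = loader.load()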
Elasticsearch
Contents
Installation and Setup
Retriever
Elasticsearch#
Elasticsearch is a distributed, RESTful search and analytics engine.
It provides a distributed, multi-tenant-capable full-text search engine with an HTTP web interface and schema-free
JSON documents.
Installation and Setup#
pip install elasticsearch
Retriever#
In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others.
The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London’s City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
See a usage example.
from langchain.retrievers import ElasticSearchBM25Retriever
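As a minimal sketch (the URL and index name are illustrative and assume a running Elasticsearch node):
retriever = ElasticSearchBM25Retriever.create("http://localhost:9200", "langchain-index")
retriever.add_texts(["foo", "bar", "foo bar"])
docs = retriever.get_relevant_documents("foo")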
C Transformers
Contents
Installation and Setup
Wrappers
LLM
C Transformers#
This page covers how to use the C Transformers library within LangChain.
It is broken into two parts: installation and setup, and then references to specific C Transformers wrappers.
Installation and Setup#
Install the Python package with pip install ctransformers
Download a supported GGML model (see Supported Models)
Wrappers#
LLM#
There exists a CTransformers LLM wrapper, which you can access with:
from langchain.llms import CTransformers
It provides a unified interface for all models:
llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')
print(llm('AI is going to'))
If you are getting an illegal instruction error, try using lib='avx' or lib='basic':
llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2', lib='avx')
It can be used with models hosted on the Hugging Face Hub:
llm = CTransformers(model='marella/gpt-2-ggml')
If a model repo has multiple model files (.bin files), specify a model file using:
llm = CTransformers(model='marella/gpt-2-ggml', model_file='ggml-model.bin')
Additional parameters can be passed using the config parameter:
config = {'max_new_tokens': 256, 'repetition_penalty': 1.1}
llm = CTransformers(model='marella/gpt-2-ggml', config=config)
See Documentation for a list of available parameters.
For a more detailed walkthrough of this, see this notebook.
Yeager.ai
Contents
What is Yeager.ai?
yAgents
How to use?
Creating and Executing Tools with yAgents
Yeager.ai#
This page covers how to use Yeager.ai to generate LangChain tools and agents.
What is Yeager.ai?#
Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools.
It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.
yAgents#
yAgents is a low-code generative agent designed to help you build, prototype, and deploy LangChain tools with ease.
How to use?#
pip install yeagerai-agent
yeagerai-agent
Go to http://127.0.0.1:7860
This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab “Settings”.
OPENAI_API_KEY=<your_openai_api_key_here>
We recommend using GPT-4. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.
Creating and Executing Tools with yAgents#
yAgents makes it easy to create and execute AI-powered tools. Here’s a brief overview of the process:
Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool’s purpose and functionality. For example:
create a tool that returns the n-th prime number
Load the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
load the tool that you just created into your toolkit
Execute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
generate the 50th prime number
You can see a video of how it works here.
As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.
For more information, see yAgents’ Github or our docs
GitBook
Contents
Installation and Setup
Document Loader
GitBook#
GitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import GitbookLoader
Cohere
Contents
Installation and Setup
LLM
Text Embedding Model
Retriever
Cohere#
Cohere is a Canadian startup that provides natural language processing models
that help companies improve human-machine interactions.
Installation and Setup#
Install the Python SDK:
pip install cohere
Get a Cohere api key and set it as an environment variable (COHERE_API_KEY)
LLM#
There exists a Cohere LLM wrapper, which you can access with:
See a usage example.
from langchain.llms import Cohere
Text Embedding Model#
There exists a Cohere Embedding model, which you can access with:
from langchain.embeddings import CohereEmbeddings
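As a minimal sketch (assuming COHERE_API_KEY is set as above):
embeddings = CohereEmbeddings()
vector = embeddings.embed_query("Hello, world!")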
For a more detailed walkthrough of this, see this notebook
Retriever#
See a usage example.
from langchain.retrievers.document_compressors import CohereRerank
College Confidential
Contents
Installation and Setup
Document Loader
College Confidential#
College Confidential gives information on 3,800+ colleges and universities.
Installation and Setup#
There isn’t any special setup for it.
Document Loader#
See a usage example.
from langchain.document_loaders import CollegeConfidentialLoader
Metal
Contents
What is Metal?
Quick start
Metal#
This page covers how to use Metal within LangChain.
What is Metal?#
Metal is a managed retrieval & memory platform built for production. Easily index your data into Metal and run semantic search and retrieval on it.
Quick start#
Get started by creating a Metal account.
Then, you can easily take advantage of the MetalRetriever class to start retrieving your data for semantic search, prompting context, etc. This class takes a Metal instance and a dictionary of parameters to pass to the Metal API.
from langchain.retrievers import MetalRetriever
from metal_sdk.metal import Metal
metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID")
retriever = MetalRetriever(metal, params={"limit": 2})
docs = retriever.get_relevant_documents("search term")